Sample records for single point estimates

  1. Individual Combatant’s Weapons Firing Algorithm

    DTIC Science & Technology

    2010-04-01

    ...influencing the target selection prioritization scheme, aim point, mode of fire, and estimates on Phit/Pmiss for a single SME. Also undertaken in this phase of the...

  2. Estimating Total Heliospheric Magnetic Flux from Single-Point in Situ Measurements

    NASA Technical Reports Server (NTRS)

    Owens, M. J.; Arge, C. N.; Crooker, N. U.; Schwadron, N. A.; Horbury, T. S.

    2008-01-01

    A fraction of the total photospheric magnetic flux opens to the heliosphere to form the interplanetary magnetic field carried by the solar wind. While this open flux is critical to our understanding of the generation and evolution of the solar magnetic field, direct measurements are generally limited to single-point measurements taken in situ by heliospheric spacecraft. An observed latitude invariance in the radial component of the magnetic field suggests that extrapolation from such single-point measurements to total heliospheric magnetic flux is possible. In this study we test this assumption using estimates of total heliospheric flux from well-separated heliospheric spacecraft and conclude that single-point measurements are indeed adequate proxies for the total heliospheric magnetic flux, though care must be taken when comparing flux estimates from data collected at different heliocentric distances.
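
    The extrapolation this record relies on is the latitude invariance of the radial field: a single in-situ sample of |Br| at heliocentric distance r scales over a full sphere as Φ_open = 4πr²|Br|. A minimal Python sketch under that assumption (the 3 nT field value is an illustrative placeholder, not data from the study):

    ```python
    import numpy as np

    AU_M = 1.495978707e11  # astronomical unit in meters

    def total_open_flux(b_r_tesla: float, r_au: float) -> float:
        """Total unsigned heliospheric flux (Wb) from one in-situ |Br|
        sample, assuming latitude invariance of the radial field."""
        r = r_au * AU_M
        return 4.0 * np.pi * r**2 * abs(b_r_tesla)

    # Illustrative placeholder: a ~3 nT radial field observed at 1 AU.
    phi = total_open_flux(3.0e-9, 1.0)
    print(f"total open flux ~ {phi:.2e} Wb")  # on the order of 1e15 Wb
    ```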

  3. Bayesian Estimation of Fugitive Methane Point Source Emission Rates from a Single Downwind High-Frequency Gas Sensor

    EPA Science Inventory

    Bayesian Estimation of Fugitive Methane Point Source Emission Rates from a Single Downwind High-Frequency Gas Sensor With the tremendous advances in onshore oil and gas exploration and production (E&P) capability comes the realization that new tools are needed to support env...

  4. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-11-02

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error of estimated to test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
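
    One plausible reading of the SPRE recursion described above: the outcome estimate is stepped forward by multiplying the prior value by a ratio of two-parameter Weibull terms whose shape and scale are fixed at the change point. The survival-function form and every number below are illustrative assumptions, not the published model or its data:

    ```python
    import math

    def weibull_sf(t: float, shape: float, scale: float) -> float:
        """Two-parameter Weibull survival term S(t) = exp(-(t/scale)**shape)."""
        return math.exp(-((t / scale) ** shape))

    def spre_step(prior: float, t_prev: float, t_next: float,
                  shape: float, scale: float) -> float:
        """Step the outcome forward by a ratio of Weibull terms times the
        prior value, per the 'ratio estimator' idea in the abstract."""
        return prior * weibull_sf(t_next, shape, scale) / weibull_sf(t_prev, shape, scale)

    # Hypothetical example: weight (kg) projected week by week after a
    # change point at week 6, with placeholder shape/scale parameters.
    weight, shape, scale = 95.0, 1.4, 52.0
    for week in range(6, 12):
        weight = spre_step(weight, week, week + 1, shape, scale)
        print(week + 1, round(weight, 2))
    ```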

  5. Classification of spatially unresolved objects

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Horwitz, H. M.; Hyde, P. D.; Morgenstern, J. P.

    1972-01-01

    A proportion estimation technique for the classification of multispectral scanner images is reported that uses data point averaging: estimated proportions are computed for a single averaged data point in order to classify spatially unresolved areas. Example extraction calculations of spectral signatures for bare soil, weeds, alfalfa, and barley prove quite accurate.

  6. Evaluation of single-point sampling strategies for the estimation of moclobemide exposure in depressive patients.

    PubMed

    Ignjatovic, Anita Rakic; Miljkovic, Branislava; Todorovic, Dejan; Timotijevic, Ivana; Pokrajac, Milena

    2011-05-01

    Because moclobemide pharmacokinetics vary considerably among individuals, monitoring of plasma concentrations lends insight into its pharmacokinetic behavior and enhances its rational use in clinical practice. The aim of this study was to evaluate whether single concentration-time points could adequately predict moclobemide systemic exposure. Pharmacokinetic data (full 7-point pharmacokinetic profiles), obtained from 21 depressive inpatients receiving moclobemide (150 mg 3 times daily), were randomly split into development (n = 18) and validation (n = 16) sets. Correlations between the single concentration-time points and the area under the concentration-time curve within a 6-hour dosing interval at steady-state (AUC(0-6)) were assessed by linear regression analyses. The predictive performance of single-point sampling strategies was evaluated in the validation set by mean prediction error, mean absolute error, and root mean square error. Plasma concentrations in the absorption phase yielded unsatisfactory predictions of moclobemide AUC(0-6). The best estimation of AUC(0-6) was achieved from concentrations at 4 and 6 hours following dosing. As the most reliable surrogate for moclobemide systemic exposure, concentrations at 4 and 6 hours should be used instead of predose trough concentrations as an indicator of between-patient variability and a guide for dose adjustments in specific clinical situations.
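
    The limited-sampling workflow this abstract evaluates — regress AUC(0-6) on a single concentration-time point in a development set, then score predictions in a validation set by mean prediction error (MPE), mean absolute error (MAE), and root mean square error (RMSE) — can be sketched as follows. All values are synthetic placeholders, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: c4 = plasma concentration at 4 h, auc = AUC(0-6).
    c4_dev = rng.uniform(1.0, 8.0, 18)
    auc_dev = 5.5 * c4_dev + rng.normal(0.0, 1.0, 18)
    c4_val = rng.uniform(1.0, 8.0, 16)
    auc_val = 5.5 * c4_val + rng.normal(0.0, 1.0, 16)

    # Fit AUC = a + b * C4 on the development set.
    b, a = np.polyfit(c4_dev, auc_dev, 1)

    # Predictive performance on the validation set.
    err = (a + b * c4_val) - auc_val
    mpe = err.mean()                    # bias
    mae = np.abs(err).mean()            # accuracy
    rmse = np.sqrt((err ** 2).mean())   # precision
    print(f"MPE={mpe:.2f}  MAE={mae:.2f}  RMSE={rmse:.2f}")
    ```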

  7. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    PubMed Central

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-01-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels. PMID:25372618

  8. An SVM-based classifier for estimating the state of various rotating components in agro-industrial machinery with a vibration signal acquired from a single point on the machine chassis.

    PubMed

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-11-03

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels.
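
    The pipeline in this record (extracted vibration features, an SVM, Leave-One-Out cross-validation) maps onto standard scikit-learn primitives. A minimal sketch with synthetic features standing in for the selected feature set:

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    # Synthetic stand-in: 60 vibration recordings x 7 selected features,
    # labeled by machine state (e.g., 0 = nominal, 1 = faulty component).
    X = rng.normal(size=(60, 7))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0.0, 0.5, 60) > 0).astype(int)

    # Linear kernel: the study reports no significant gain from nonlinear kernels.
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    print(f"LOO accuracy: {scores.mean():.2%}")
    ```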

  9. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    NASA Astrophysics Data System (ADS)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

    Atmospheric methane (CH4) plays an important role in global radiative forcing of climate, but its emission estimates carry larger uncertainties than those for carbon dioxide (CO2). The area of an anthropogenic emission source is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system, and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents CH4 background density before or after targeting a point source. By combining satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we selected two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point with no influx. The cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously, and the wind direction is stable at the time of GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. We propose that the next generation of instruments for accurate anthropogenic CO2 and CH4 flux estimation should have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
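
    A rough sketch of the mass-balance arithmetic behind such a flux estimate, assuming a single point source with no influx: emission rate ≈ column enhancement × wind speed × effective plume width. Every number below is an illustrative placeholder, not a GOSAT retrieval:

    ```python
    # Placeholder mass-balance estimate: Q = dN * M * u * W, with dN the
    # CH4 column enhancement (mol m-2), M the molar mass (kg mol-1),
    # u the wind speed (m s-1), and W an effective plume width (m).
    M_CH4 = 16.04e-3      # kg/mol
    d_column = 0.02       # mol m-2 enhancement over the reference point
    u_wind = 4.0          # m s-1, from a surface station or WRF
    width = 9.3e3         # m, roughly the scale of the 87-km2 footprint

    q = d_column * M_CH4 * u_wind * width          # kg s-1
    print(f"CH4 flux ~ {q:.1f} kg/s ~ {q * 3.1536e7 / 1e6:.0f} kt/yr")
    ```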

  10. Convergence of Newton's method for a single real equation

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1985-01-01

    Newton's method for finding the zeroes of a single real function is investigated in some detail. Convergence is generally checked using the Contraction Mapping Theorem which yields sufficient but not necessary conditions for convergence of the general single point iteration method. The resulting convergence intervals are frequently considerably smaller than actual convergence zones. For a specific single point iteration method, such as Newton's method, better estimates of regions of convergence should be possible. A technique is described which, under certain conditions (frequently satisfied by well behaved functions) gives much larger zones where convergence is guaranteed.
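
    For concreteness, Newton's method as the single point iteration x_{n+1} = x_n - f(x_n)/f'(x_n); the contraction-mapping check mentioned above amounts to verifying |g'(x)| < 1 for g(x) = x - f(x)/f'(x) on the interval of interest. A minimal sketch:

    ```python
    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Newton's method for a single real equation f(x) = 0."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            x -= fx / fprime(x)
        raise RuntimeError("no convergence; x0 may lie outside the convergence zone")

    # Example: root of f(x) = x**2 - 2, starting inside a convergence zone.
    root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
    print(root)  # 1.4142135623...
    ```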

  11. Ecosystem approach to fisheries: Exploring environmental and trophic effects on Maximum Sustainable Yield (MSY) reference point estimates

    PubMed Central

    Kumar, Rajeev; Pitcher, Tony J.; Varkey, Divya A.

    2017-01-01

    We present a comprehensive analysis of the estimation of fisheries Maximum Sustainable Yield (MSY) reference points using an ecosystem model built for Mille Lacs Lake, the second largest lake within Minnesota, USA. Data from single-species modelling output, extensive annual sampling for species abundances, annual catch surveys, stomach-content analysis for predator-prey interactions, and expert opinions were brought together within the framework of an Ecopath with Ecosim (EwE) ecosystem model. An increase in the lake water temperature was observed in the last few decades; therefore, we also incorporated a temperature forcing function in the EwE model to capture the influences of changing temperature on the species composition and food web. The EwE model was fitted to abundance and catch time-series for the period 1985 to 2006. Using the ecosystem model, we estimated reference points for most of the fished species in the lake at single-species as well as ecosystem levels, with and without considering the influence of temperature change; our analysis therefore investigated the trophic and temperature effects on the reference points. The paper concludes that reference points such as MSY are not stationary, but change when (1) environmental conditions alter species productivity and (2) fishing on predators alters the compensatory response of their prey. Thus, it is necessary for management to re-estimate or re-evaluate the reference points when changes in environmental conditions and/or major shifts in species abundance or community structure are observed. PMID:28957387

  12. Analgesic effects of treatments for non-specific low back pain: a meta-analysis of placebo-controlled randomized trials.

    PubMed

    Machado, L A C; Kamper, S J; Herbert, R D; Maher, C G; McAuley, J H

    2009-05-01

    Estimates of treatment effects reported in placebo-controlled randomized trials are less subject to bias than those estimates provided by other study designs. The objective of this meta-analysis was to estimate the analgesic effects of treatments for non-specific low back pain reported in placebo-controlled randomized trials. Medline, Embase, Cinahl, PsychInfo and Cochrane Central Register of Controlled Trials databases were searched for eligible trials from earliest records to November 2006. Continuous pain outcomes were converted to a common 0-100 scale and pooled using a random effects model. A total of 76 trials reporting on 34 treatments were included. Fifty percent of the investigated treatments had statistically significant effects, but for most the effects were small or moderate: 47% had point estimates of effects of <10 points on the 100-point scale, 38% had point estimates from 10 to 20 points and 15% had point estimates of >20 points. Treatments reported to have large effects (>20 points) had been investigated only in a single trial. This meta-analysis revealed that the analgesic effects of many treatments for non-specific low back pain are small and that they do not differ in populations with acute or chronic symptoms.
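
    The pooling step described here (outcomes converted to a common 0-100 scale, combined under a random-effects model) is conventionally the DerSimonian-Laird calculation; a sketch with made-up trial effects:

    ```python
    import numpy as np

    def random_effects_pool(effects, variances):
        """DerSimonian-Laird random-effects pooled estimate."""
        effects, variances = np.asarray(effects), np.asarray(variances)
        w = 1.0 / variances                       # fixed-effect weights
        mu_fe = (w * effects).sum() / w.sum()
        q = (w * (effects - mu_fe) ** 2).sum()    # Cochran's Q
        k = len(effects)
        tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
        w_re = 1.0 / (variances + tau2)           # random-effects weights
        mu_re = (w_re * effects).sum() / w_re.sum()
        se = np.sqrt(1.0 / w_re.sum())
        return mu_re, se

    # Hypothetical trials: pain reduction vs placebo on the 0-100 scale.
    mu, se = random_effects_pool([8.0, 12.0, 15.0, 5.0], [4.0, 6.0, 9.0, 5.0])
    print(f"pooled effect = {mu:.1f} +/- {1.96 * se:.1f} points")  # ~95% CI
    ```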

  13. Wave directional spreading from point field measurements.

    PubMed

    McAllister, M L; Venugopal, V; Borthwick, A G L

    2017-04-01

    Ocean waves have multidirectional components. Most wave measurements are taken at a single point, and so fail to capture information about the relative directions of the wave components directly. Conventional means of directional estimation require a minimum of three concurrent time series of measurements at different spatial locations in order to derive information on local directional wave spreading. Here, the relationship between wave nonlinearity and directionality is utilized to estimate local spreading without the need for multiple concurrent measurements, following Adcock & Taylor (Adcock & Taylor 2009 Proc. R. Soc. A 465 , 3361-3381. (doi:10.1098/rspa.2009.0031)), with the assumption that directional spreading is frequency independent. The method is applied to measurements recorded at the North Alwyn platform in the northern North Sea, and the results compared against estimates of wave spreading by conventional measurement methods and hindcast data. Records containing freak waves were excluded. It is found that the method provides accurate estimates of wave spreading over a range of conditions experienced at North Alwyn, despite the noisy chaotic signals that characterize such ocean wave data. The results provide further confirmation that Adcock and Taylor's method is applicable to metocean data and has considerable future promise as a technique to recover estimates of wave spreading from single point wave measurement devices.

  14. Wave directional spreading from point field measurements

    PubMed Central

    Venugopal, V.; Borthwick, A. G. L.

    2017-01-01

    Ocean waves have multidirectional components. Most wave measurements are taken at a single point, and so fail to capture information about the relative directions of the wave components directly. Conventional means of directional estimation require a minimum of three concurrent time series of measurements at different spatial locations in order to derive information on local directional wave spreading. Here, the relationship between wave nonlinearity and directionality is utilized to estimate local spreading without the need for multiple concurrent measurements, following Adcock & Taylor (Adcock & Taylor 2009 Proc. R. Soc. A 465, 3361–3381. (doi:10.1098/rspa.2009.0031)), with the assumption that directional spreading is frequency independent. The method is applied to measurements recorded at the North Alwyn platform in the northern North Sea, and the results compared against estimates of wave spreading by conventional measurement methods and hindcast data. Records containing freak waves were excluded. It is found that the method provides accurate estimates of wave spreading over a range of conditions experienced at North Alwyn, despite the noisy chaotic signals that characterize such ocean wave data. The results provide further confirmation that Adcock and Taylor's method is applicable to metocean data and has considerable future promise as a technique to recover estimates of wave spreading from single point wave measurement devices. PMID:28484326

  15. Lidars for smoke and dust cloud diagnostics

    NASA Astrophysics Data System (ADS)

    Fujimura, S. F.; Warren, R. E.; Lutomirski, R. F.

    1980-11-01

    An algorithm that integrates a time-resolved lidar signature for use in estimating transmittance, extinction coefficient, mass concentration, and CL values generated under battlefield conditions is applied to lidar signatures measured during the DIRT-I tests. Estimates are given for the dependence of the inferred transmittance and extinction coefficient on uncertainties in parameters such as the obscurant backscatter-to-extinction ratio. The enhanced reliability in estimating transmittance through use of a target behind the obscurant cloud is discussed. It is found that the inversion algorithm can produce reliable estimates of smoke or dust transmittance and extinction from all points within the cloud for which a resolvable signal can be detected, and that a single point calibration measurement can convert the extinction values to mass concentration for each resolvable signal point.

  16. 48 CFR 242.302 - Contract administration functions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Contractor estimating systems (see FAR 15.407-5); and (B) Contractor material management and accounting... report identifying significant accounting system or related internal control deficiencies. (9) For... solicitation or award. (S-70) Serve as the single point of contact for all Single Process Initiative (SPI...

  17. Estimating two-way tables based on forest surveys

    Treesearch

    Charles T. Scott

    2000-01-01

    Forest survey analysts usually are interested in tables of values rather than single point estimates. A common error is to include only plots on which nonzero values of the attribute were observed when computing the variance of a mean. Similarly, analysts often exclude nonforest plots from the analysis. The development of the correct estimates of forest area, attribute...

  18. Galaxy clustering with photometric surveys using PDF redshift information

    DOE PAGES

    Asorey, J.; Carrasco Kind, M.; Sevilla-Noarbe, I.; ...

    2016-03-28

    Here, photometric surveys produce large-area maps of the galaxy distribution, but with less accurate redshift information than is obtained from spectroscopic methods. Modern photometric redshift (photo-z) algorithms use galaxy magnitudes, or colors, that are obtained through multi-band imaging to produce a probability density function (PDF) for each galaxy in the map. We used simulated data to study the effect of using different photo-z estimators to assign galaxies to redshift bins in order to compare their effects on angular clustering and galaxy bias measurements. We found that if we use the entire PDF, rather than a single-point (mean or mode) estimate, the deviations are less biased, especially when using narrow redshift bins. When the redshift bin widths are Δz = 0.1, the use of the entire PDF reduces the typical measurement bias from 5%, when using single point estimates, to 3%.
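
    The contrast the study draws — assigning each galaxy wholly to the bin of its point estimate versus spreading it across bins by its redshift PDF — can be made concrete with a toy grid of Gaussian PDFs. The Δz = 0.1 bin width follows the abstract; everything else is illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    z_grid = np.linspace(0.0, 2.0, 201)
    bin_edges = np.arange(0.0, 2.01, 0.1)        # Delta z = 0.1 bins

    # Synthetic photo-z PDFs: one Gaussian per galaxy on the grid.
    n_gal = 1000
    means = rng.uniform(0.2, 1.8, n_gal)
    sigmas = rng.uniform(0.02, 0.15, n_gal)
    pdfs = np.exp(-0.5 * ((z_grid - means[:, None]) / sigmas[:, None]) ** 2)
    pdfs /= pdfs.sum(axis=1, keepdims=True)

    # Point-estimate binning: each galaxy counted once, at its PDF mode.
    modes = z_grid[pdfs.argmax(axis=1)]
    n_point, _ = np.histogram(modes, bins=bin_edges)

    # Full-PDF binning: each galaxy spreads its probability mass over bins.
    bin_idx = np.clip(np.digitize(z_grid, bin_edges) - 1, 0, len(bin_edges) - 2)
    n_pdf = np.array([pdfs[:, bin_idx == b].sum()
                      for b in range(len(bin_edges) - 1)])

    print(n_point[:5], np.round(n_pdf[:5], 1))
    ```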

  19. Application of a single-objective, hybrid genetic algorithm approach to pharmacokinetic model building.

    PubMed

    Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R

    2012-08-01

    A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three compounds. The root mean squared error and absolute mean prediction error of the best single-objective hybrid genetic algorithm candidates were a median of 0.2 points higher (range of 38.9 point decrease to 27.3 point increase) and 0.02 points lower (range of 0.98 point decrease to 0.74 point increase), respectively, than that of the final stepwise models. In addition, the best single-objective, hybrid genetic algorithm candidate models had successful convergence and covariance steps for each compound, used the same compartment structure as the manual stepwise approach for 6 of 7 (86 %) compounds, and identified 54 % (7 of 13) of covariates included by the manual stepwise approach and 16 covariate relationships not included by manual stepwise models. The model parameter values between the final manual stepwise and best single-objective, hybrid genetic algorithm models differed by a median of 26.7 % (q₁ = 4.9 % and q₃ = 57.1 %). Finally, the single-objective, hybrid genetic algorithm approach was able to identify models capable of estimating absorption rate parameters for four compounds that the manual stepwise approach did not identify. The single-objective, hybrid genetic algorithm represents a general pharmacokinetic model building methodology whose ability to rapidly search the feasible solution space leads to nearly equivalent or superior model fits to pharmacokinetic data.
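
    A stripped-down stand-in for the approach: a genetic algorithm searching binary covariate-inclusion vectors against an information-criterion fitness. Here an OLS fit replaces the NONMEM runs, and the GA uses truncation selection plus mutation only (no crossover or hybrid local search), so it is a sketch of the idea rather than the published method:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Toy data: response driven by covariates 0, 2, and 5 out of 8.
    n, p = 200, 8
    X = rng.normal(size=(n, p))
    y = X[:, 0] + 0.5 * X[:, 2] - 0.8 * X[:, 5] + rng.normal(0.0, 1.0, n)

    def aic(mask):
        """AIC of an OLS fit using the covariates flagged in mask."""
        cols = np.flatnonzero(mask)
        A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        k = A.shape[1] + 1
        return n * np.log((resid ** 2).mean()) + 2 * k

    pop = rng.integers(0, 2, size=(30, p))       # random initial population
    for _ in range(40):                          # generations
        fit = np.array([aic(m) for m in pop])
        parents = pop[np.argsort(fit)[:10]]      # truncation selection
        kids = parents[rng.integers(0, 10, 30)].copy()
        flips = rng.random(kids.shape) < 0.1     # bit-flip mutation
        kids[flips] ^= 1
        pop = kids

    best = pop[np.argmin([aic(m) for m in pop])]
    print("selected covariates:", np.flatnonzero(best))
    ```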

  20. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). Then we perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.
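
    The single-processor efficiency metric used here is plain arithmetic: observed floating-point rate divided by the machine peak. A sketch with placeholder numbers standing in for HPM counter output (the 1.3 GHz clock and 4 flops/cycle peak are assumptions about the POWER4 configuration, not figures from the paper):

    ```python
    # Observed/peak efficiency of the RHS operator; all numbers are
    # placeholders standing in for HPM counter measurements.
    flops_per_point = 100.0       # ~order of 100 flops per grid point
    grid_points = 64 ** 3
    elapsed_s = 0.05              # measured wall time, single CPU
    peak_flops = 1.3e9 * 4        # assumed: 2 FMA units x 2 flops/cycle

    observed = flops_per_point * grid_points / elapsed_s
    print(f"efficiency = {observed / peak_flops:.1%}")
    ```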

  1. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  2. An automatic calibration procedure for remote eye-gaze tracking systems.

    PubMed

    Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe

    2009-01-01

    Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.

  3. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to be a single physical point. In our approach, we minimize the distances between the circular subsets of each image, with them ideally intersecting at a single point. We simulated in noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64mm.

  4. Estimation of point source fugitive emission rates from a single sensor time series: a conditionally-sampled Gaussian plume reconstruction

    EPA Science Inventory

    This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...

  5. Object recognition and localization from 3D point clouds by maximum-likelihood estimation

    NASA Astrophysics Data System (ADS)

    Dantanarayana, Harshana G.; Huntley, Jonathan M.

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike 'interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.

  6. Principal component analysis of binding energies for single-point mutants of hT2R16 bound to an agonist correlate with experimental mutant cell response.

    PubMed

    Chen, Derek E; Willick, Darryl L; Ruckel, Joseph B; Floriano, Wely B

    2015-01-01

    Directed evolution is a technique that enables the identification of mutants of a particular protein that carry a desired property by successive rounds of random mutagenesis, screening, and selection. This technique has many applications, including the development of G protein-coupled receptor-based biosensors and designer drugs for personalized medicine. Although effective, directed evolution is not without challenges and can greatly benefit from the development of computational techniques to predict the functional outcome of single-point amino acid substitutions. In this article, we describe a molecular dynamics-based approach to predict the effects of single amino acid substitutions on agonist binding (salicin) to a human bitter taste receptor (hT2R16). An experimentally determined functional map of single-point amino acid substitutions was used to validate the whole-protein molecular dynamics-based predictive functions. Molecular docking was used to construct a wild-type agonist-receptor complex, providing a starting structure for single-point substitution simulations. The effects of each single amino acid substitution in the functional response of the receptor to its agonist were estimated using three binding energy schemes with increasing inclusion of solvation effects. We show that molecular docking combined with molecular mechanics simulations of single-point mutants of the agonist-receptor complex accurately predicts the functional outcome of single amino acid substitutions in a human bitter taste receptor.

  7. An analysis of estimation of pulmonary blood flow by the single-breath method

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.

    1986-01-01

    The single-breath method represents a simple noninvasive technique for the assessment of capillary blood flow across the lung. However, this method has not gained widespread acceptance, because its accuracy is still being questioned. A rigorous procedure is described for estimating pulmonary blood flow (PBF) using data obtained with the aid of the single-breath method. Attention is given to the minimization of data-processing errors in the presence of measurement errors and to questions regarding a correction for possible loss of CO2 in the lung tissue. It is pointed out that the estimations are based on the exact solution of the underlying differential equations which describe the dynamics of gas exchange in the lung. The reported study demonstrates the feasibility of obtaining highly reliable estimates of PBF from expiratory data in the presence of random measurement errors.

  8. SU-E-I-49: Influence of Scanner Output Measurement Technique on KERMA Ratios in CT.

    PubMed

    Ogden, K; Roskopf, M; Scalzetti, E

    2012-06-01

    KERMA ratios (RK) are defined as the ratio of KERMA measured at a specific phantom location (K) to in-air isocenter CT scanner output (KCT). In this work we investigate the impact of measurement methodology on KCT values. OSL dosimeter chips were used to measure KCT for a GE VCT scanner (GE Medical Systems, Waukesha WI), using the 40 mm nominal beam width. Methods included a single point measurement at the center of the beam (1 tube rotation), and extended z-axis measurements using multiple adjacent OSLs (7.5 cm extent), with single tube rotation, multiple contiguous axial scans, and helical scans (pitch of 1.375). Measurements were made in air and on the scan table at 80 and 120 kV. Averaged single point measurements were consistent, with a mean coefficient of variation of 2.5%. For extended measurements with a single tube rotation, the mean value was equivalent to the single point measurements. For multiple contiguous axial scans, the in-air KCT values were higher than the single rotation mean value and single point measurements by 13% and 10.3% at 120 and 80 kV, respectively, and for the on-table measurements the values were 14.9% and 8.1% higher at 120 and 80 kV, respectively. The increase is due to beam overlap caused by z-axis over-beaming. Extended measurements using helical scanning were equivalent to the multiple rotation axial measurements when corrected for the helical pitch. For all methodologies, the in-air values exceeded the on-table measurements by an average of 23% and 19.4% at 80 and 120 kV, respectively. Scanner KCT values must be measured to allow organ dose estimation using published RK values. It is imperative that the KCT measurement methodology is the same as for the published values, or large errors may be introduced into the resulting organ dose estimates. © 2012 American Association of Physicists in Medicine.

  9. Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.

    PubMed

    Haber, Aleksandar; Verhaegen, Michel

    2016-11-15

    We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
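
    The computational payoff claimed here is that, once the localized weights are assembled into a sparse reconstructor, a wavefront estimate costs one sparse matrix-vector product. A sketch of that final step, with a random sparse matrix standing in for the actual localized slope weights:

    ```python
    import numpy as np
    from scipy import sparse

    rng = np.random.default_rng(3)

    n_phase, n_slopes = 1024, 2048

    # Placeholder sparse reconstructor: each phase point depends on ~16
    # neighboring slope measurements (in practice, the localized weights).
    R = sparse.random(n_phase, n_slopes, density=16 / n_slopes,
                      random_state=3, format="csr")

    slopes = rng.normal(size=n_slopes)   # wavefront-sensor slope vector
    phase_estimate = R @ slopes          # the single sparse mat-vec product
    print(phase_estimate.shape)          # (1024,)
    ```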

  10. Non-destructive lichen biomass estimation in northwestern Alaska: a comparison of methods.

    PubMed

    Rosso, Abbey; Neitlich, Peter; Smith, Robert J

    2014-01-01

    Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa "community" samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m-2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska.

  11. Non-Destructive Lichen Biomass Estimation in Northwestern Alaska: A Comparison of Methods

    PubMed Central

    Rosso, Abbey; Neitlich, Peter; Smith, Robert J.

    2014-01-01

    Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa “community” samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m−2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska. PMID:25079228
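
    The core model in these two records, a zero-intercept (through-origin) regression of harvested biomass on a cover-weighted volume, has the closed-form slope b = Σxy/Σx². A sketch with placeholder plot data:

    ```python
    import numpy as np

    # Placeholder plot data: volume = cover fraction x height per plot,
    # biomass harvested from the same plots (g m-2).
    volume = np.array([1.2, 3.4, 5.1, 7.8, 9.0, 11.3])
    biomass = np.array([150.0, 420.0, 610.0, 980.0, 1100.0, 1390.0])

    # Zero-intercept least squares: biomass = b * volume.
    b = (volume * biomass).sum() / (volume ** 2).sum()
    print(f"bulk-density slope b = {b:.1f} g per unit volume")

    # Predict stand-level biomass from non-destructive measurements alone.
    print("predicted:", np.round(b * volume, 0))
    ```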

  12. Health insurance tax credits, the earned income tax credit, and health insurance coverage of single mothers.

    PubMed

    Cebi, Merve; Woodbury, Stephen A

    2014-05-01

    The Omnibus Budget Reconciliation Act of 1990 enacted a refundable tax credit for low-income working families who purchased health insurance coverage for their children. This health insurance tax credit (HITC) existed during tax years 1991, 1992, and 1993, and was then rescinded. A difference-in-differences estimator applied to Current Population Survey data suggests that adoption of the HITC, along with accompanying increases in the Earned Income Tax Credit (EITC), was associated with a relative increase of about 4.7 percentage points in the private health insurance coverage of working single mothers with high school or less education. Also, a difference-in-difference-in-differences estimator, which attempts to net out the possible influence of the EITC increases but which requires strong assumptions, suggests that the HITC was responsible for about three-quarters (3.6 percentage points) of the total increase. The latter estimate implies a price elasticity of health insurance take-up of -0.42. Copyright © 2013 John Wiley & Sons, Ltd.
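
    A difference-in-differences estimate of the kind reported here reduces to four group means: (treated post − treated pre) − (control post − control pre). A toy sketch with fabricated coverage rates, not the CPS values from the study:

    ```python
    # Toy coverage rates (fractions insured); purely illustrative numbers.
    treated_pre, treated_post = 0.30, 0.36   # single mothers, low education
    control_pre, control_post = 0.50, 0.52   # comparison group

    did = (treated_post - treated_pre) - (control_post - control_pre)
    print(f"DiD estimate: {did * 100:.1f} percentage points")  # 4.0 here
    ```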

  13. Single-lens stereovision system using a prism: position estimation of a multi-ocular prism.

    PubMed

    Cui, Xiaoyu; Lim, Kah Bin; Zhao, Yue; Kee, Wei Loon

    2014-05-01

    In this paper, a position estimation method using a prism-based single-lens stereovision system is proposed. A multifaced prism was considered as a single optical system composed of a few refractive planes. A transformation matrix which relates the coordinates of an object point to its coordinates on the image plane through the refraction of the prism was derived based on geometrical optics. A mathematical model which is able to describe the position of a prism with an arbitrary number of faces using only seven parameters is introduced. This model further extends the application of the single-lens stereovision system using a prism to other areas. Experimental results are presented to prove the effectiveness and robustness of our proposed model.

  14. Single-Specimen Technique to Establish the J-Resistance of Linear Viscoelastic Solids with Constant Poisson's Ratio

    NASA Technical Reports Server (NTRS)

    Gutierrez-Lemini, Danton; McCool, Alex (Technical Monitor)

    2001-01-01

    A method is developed to establish the J-resistance function for an isotropic linear viscoelastic solid of constant Poisson's ratio using the single-specimen technique with constant-rate test data. The method is based on the fact that, for a test specimen of fixed crack size under constant rate, the initiation J-integral may be established from the crack size itself, the actual external load and load-point displacement at growth initiation, and the relaxation modulus of the viscoelastic solid, without knowledge of the complete test record. Since crack size alone, of the required data, would be unknown at each point of the load-vs-load-point displacement curve of a single-specimen test, an expression is derived to estimate it. With it, the physical J-integral at each point of the test record may be established. Because of its basis on single-specimen testing, not only does the method not require the use of multiple specimens with differing initial crack sizes, but avoids the need for tracking crack growth as well.

  15. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Jihui; Zakhor, Avideh

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud and an orientation based filter to segment out and refine individual stems for width estimation. Individually detected stems which are split due to occlusions are merged and then registered with previously found stems in previous camera frames in order to track temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.

  16. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE PAGES

    Jin, Jihui; Zakhor, Avideh

    2017-01-29

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud and an orientation based filter to segment out and refine individual stems for width estimation. Individually detected stems which are split due to occlusions are merged and then registered with previously found stems in previous camera frames in order to track temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.
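
    The Frangi step in this pipeline can be illustrated with scikit-image on a 2D stand-in for the projected point cloud; this shows only the ridge-detection and width-counting idea, not the full registration and tracking pipeline described in the abstract:

    ```python
    import numpy as np
    from skimage.filters import frangi

    rng = np.random.default_rng(7)

    # Placeholder depth-derived intensity image with one vertical,
    # stem-like bright ridge; in practice, a projected cloud slice.
    img = rng.normal(size=(128, 128))
    img[:, 60:64] += 5.0

    vesselness = frangi(img, sigmas=range(1, 6), black_ridges=False)
    mask = vesselness > vesselness.mean() + 3 * vesselness.std()

    # Crude width estimate per row: count of mask pixels across the stem.
    widths = mask[:, 55:70].sum(axis=1)
    print(np.median(widths[widths > 0]), "pixels")
    ```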

  17. Effect of rifampicin on the pharmacokinetics and pharmacodynamics of saxagliptin, a dipeptidyl peptidase-4 inhibitor, in healthy subjects

    PubMed Central

    Upreti, Vijay V; Boulton, David W; Li, Li; Ching, Agatha; Su, Hong; LaCreta, Frank P; Patel, Chirag G

    2011-01-01

    AIM To investigate the effect of co-administration of rifampicin, a potent inducer of cytochrome P450 (CYP) 3A4 enzymes, on the pharmacokinetics (PK) and pharmacodynamics (PD) of saxagliptin and 5-hydroxy saxagliptin in healthy subjects. Saxagliptin is metabolized by CYP3A4/3A5 to 5-hydroxy saxagliptin, its major pharmacologically active metabolite. METHODS In a non-randomized, open label, single sequence design, 14 healthy subjects received single oral doses of saxagliptin 5 mg with and without steady-state rifampicin (600 mg once daily for 6 days). PK (saxagliptin and 5-hydroxy saxagliptin) and PD (plasma DPP-4 activity) were measured for up to 24 h on days 1 and 7. RESULTS Concomitant administration with rifampicin resulted in 53% (point estimate 0.47, 90% CI 0.38, 0.57) and 76% (point estimate 0.24, 90% CI 0.21, 0.27) decreases in the geometric mean Cmax and AUC values of saxagliptin, respectively, with a 39% (point estimate 1.39, 90% CI 1.23, 1.56) increase in the geometric mean Cmax and no change (point estimate 1.03, 90% CI 0.97, 1.09) in the AUC of 5-hydroxy saxagliptin. Similar maximum % inhibition and area under the % inhibition−time effect curve over 24 h for DPP-4 activity were observed when saxagliptin was administered alone or with rifampicin. The saxagliptin total active moieties exposure (AUC) decreased by 27% (point estimate 0.73, 90% CI 0.66, 0.81). Saxagliptin with or without rifampicin in this study was generally well tolerated. CONCLUSIONS Lack of change of PD effect of saxagliptin is consistent with the observed 27% reduction in systemic exposure to the total active moieties, which is not considered clinically meaningful. Based on these findings, it is not necessary to adjust the saxagliptin dose when co-administered with rifampicin. PMID:21651615
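
    The point estimates and 90% CIs quoted above are geometric mean ratios: compute within-subject differences of log-transformed PK parameters, build a t-based interval, and exponentiate. A paired-design sketch with synthetic AUC values, not the study's data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Synthetic paired AUCs for 14 subjects: alone vs with rifampicin.
    auc_alone = rng.lognormal(mean=3.0, sigma=0.3, size=14)
    auc_with = auc_alone * 0.24 * rng.lognormal(0.0, 0.1, 14)

    d = np.log(auc_with) - np.log(auc_alone)       # within-subject log ratios
    gmr = np.exp(d.mean())                         # geometric mean ratio
    t_crit = stats.t.ppf(0.95, df=len(d) - 1)      # two-sided 90% CI
    half = t_crit * d.std(ddof=1) / np.sqrt(len(d))
    lo, hi = np.exp(d.mean() - half), np.exp(d.mean() + half)
    print(f"point estimate {gmr:.2f}, 90% CI ({lo:.2f}, {hi:.2f})")
    ```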

  18. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a stability framework for data-driven PP-GLMs and shed new light on the stochastic dynamics of state-of-the-art statistical models of neuronal spiking activity. PMID:28234899

  19. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs.

    PubMed

    Gerhard, Felipe; Deger, Moritz; Truccolo, Wilson

    2017-02-01

    Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a stability framework for data-driven PP-GLMs and shed new light on the stochastic dynamics of state-of-the-art statistical models of neuronal spiking activity.
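
    The fixed-point condition at the heart of these two records can be illustrated in stripped-down form: for a nonlinear Hawkes model with exponential link, a candidate stationary rate solves λ = exp(b + Jλ), with J the integrated self-excitation kernel. This sketch ignores the quasi-renewal refinement and refractoriness discussed in the abstract:

    ```python
    import math

    def stationary_rate(b: float, J: float, lam0: float = 1.0,
                        tol: float = 1e-10, max_iter: int = 10_000):
        """Fixed-point iteration for lambda = exp(b + J * lambda).
        Returns None if the iteration diverges (runaway firing rate)."""
        lam = lam0
        for _ in range(max_iter):
            nxt = math.exp(b + J * lam)
            if nxt > 1e6:          # unphysiologically high: divergent regime
                return None
            if abs(nxt - lam) < tol:
                return nxt
            lam = nxt
        return lam

    print(stationary_rate(b=0.0, J=-0.5))  # inhibition-dominated: stable rate
    print(stationary_rate(b=1.0, J=0.8))   # strong excitation: divergent (None)
    ```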

  20. Identification of Intensity Ratio Break Points from Photon Arrival Trajectories in Ratiometric Single Molecule Spectroscopy

    PubMed Central

    Bingemann, Dieter; Allen, Rachel M.

    2012-01-01

    We describe a model-free statistical method for analyzing dual-channel photon arrival trajectories from single-molecule spectroscopy to identify break points in the intensity ratio. Photons are binned with a short bin size to calculate the logarithm of the intensity ratio for each bin. Stochastic photon counting noise leads to a near-normal distribution of this logarithm, and the standard Student's t-test is used to find statistically significant changes in this quantity. In stochastic simulations we determine the significance threshold for the t-test's p-value at a given level of confidence. We test the method's sensitivity and accuracy, finding that the analysis reliably locates break points with significant changes in the intensity ratio, with little or no error, in realistic trajectories containing large numbers of change points, while still identifying a large fraction of the frequent break points with small intensity changes. Based on these results we present an approach to estimate confidence intervals for the identified break point locations and recommend a bin size to choose for the analysis. The method proves powerful and reliable in the analysis of simulated and actual data of single-molecule reorientation in a glassy matrix. PMID:22837704
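
    A minimal sketch of the described test, assuming the two channels arrive as NumPy arrays of per-sample photon counts (the bin size and significance threshold below are illustrative, not the paper's calibrated values): bin the counts, form the log intensity ratio per bin, and scan candidate break points with Student's t-test.

      import numpy as np
      from scipy import stats

      def find_break_point(ch1, ch2, bin_size=100, p_threshold=1e-4):
          ch1, ch2 = np.asarray(ch1), np.asarray(ch2)
          n_bins = min(len(ch1), len(ch2)) // bin_size
          c1 = ch1[:n_bins * bin_size].reshape(n_bins, bin_size).sum(axis=1)
          c2 = ch2[:n_bins * bin_size].reshape(n_bins, bin_size).sum(axis=1)
          log_ratio = np.log((c1 + 0.5) / (c2 + 0.5))  # +0.5 avoids log(0)

          best_p, best_k = 1.0, None
          for k in range(2, n_bins - 2):               # >= 2 bins on each side
              _, p = stats.ttest_ind(log_ratio[:k], log_ratio[k:], equal_var=False)
              if p < best_p:
                  best_p, best_k = p, k
          return (best_k, best_p) if best_p < p_threshold else (None, best_p)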

  1. Statistical properties of the anomalous scaling exponent estimator based on time-averaged mean-square displacement

    NASA Astrophysics Data System (ADS)

    Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis

    2017-08-01

    The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
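
    The estimator under study is compact in code; in the sketch below, ordinary Brownian motion stands in for FBM with Hurst exponent 0.5, so the fitted scaling exponent should be close to 1.

      import numpy as np

      def ta_msd(x, lags):
          # Time-averaged MSD of one trajectory at the given integer lags.
          return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

      def estimate_alpha(x, max_lag=20):
          lags = np.arange(1, max_lag + 1)
          # Slope of log TA-MSD vs. log lag is the scaling exponent estimate.
          alpha, _ = np.polyfit(np.log(lags), np.log(ta_msd(x, lags)), 1)
          return alpha

      rng = np.random.default_rng(0)
      trajectory = np.cumsum(rng.normal(size=100_000))
      print(estimate_alpha(trajectory))  # close to 1.0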

  2. Experimental measure of arm stiffness during single reaching movements with a time-frequency analysis

    PubMed Central

    Pierobon, Alberto; DiZio, Paul; Lackner, James R.

    2013-01-01

    We tested an innovative method to estimate joint stiffness and damping during multijoint unfettered arm movements. The technique employs impulsive perturbations and a time-frequency analysis to estimate the arm's mechanical properties along a reaching trajectory. Each single impulsive perturbation provides a continuous estimation on a single-reach basis, making our method ideal to investigate motor adaptation in the presence of force fields and to study the control of movement in impaired individuals with limited kinematic repeatability. In contrast with previous dynamic stiffness studies, we found that stiffness varies during movement, achieving levels higher than during static postural control. High stiffness was associated with elevated reflexive activity. We observed a decrease in stiffness and a marked reduction in long-latency reflexes around the reaching movement velocity peak. This pattern could partly explain the difference between the high stiffness reported in postural studies and the low stiffness measured in dynamic estimation studies, where perturbations are typically applied near the peak velocity point. PMID:23945781

  3. Investigating the use of multi-point coupling for single-sensor bearing estimation in one direction

    NASA Astrophysics Data System (ADS)

    Woolard, Americo G.; Phoenix, Austin A.; Tarazaga, Pablo A.

    2018-04-01

    Bearing estimation of radially propagating symmetric waves in solid structures typically requires a minimum of two sensors. This research investigates the use of multi-point coupling to provide directional inference from a single sensor, using a beam as a test specimen. In this way, the number of sensors required for localization can be reduced. A finite-element model of a beam is constructed with a symmetrically placed bipod that has asymmetric joint-stiffness properties. Impulse loading is applied at different points along the beam, and measurements are taken from the apex of the bipod. A technique is developed to determine the direction-of-arrival of the propagating wave. The accuracy achieved when using the bipod with the developed technique is compared against results gathered without the bipod, measuring from an asymmetric location along the beam. The results show 92% accuracy when the bipod is used, compared to 75% when measuring without the bipod from an asymmetric location. A geometry investigation finds the best accuracy when one leg of the bipod has a low stiffness and a large diameter relative to the other leg.

  4. Improved Time-Lapsed Angular Scattering Microscopy of Single Cells

    NASA Astrophysics Data System (ADS)

    Cannaday, Ashley E.

    By measuring angular scattering patterns from biological samples and fitting them with a Mie theory model, one can estimate the organelle size distribution within many cells. Quantitative organelle sizing of ensembles of cells using this method has been well established. Our goal is to develop the methodology to extend this approach to the single cell level, measuring the angular scattering at multiple time points and estimating the non-nuclear organelle size distribution parameters. The diameters of individual organelle-size beads were successfully extracted using scattering measurements with a minimum deflection angle of 20 degrees. However, the accuracy of size estimates can be limited by the angular range detected. In particular, simulations by our group suggest that, for cell organelle populations with a broader size distribution, the accuracy of size prediction improves substantially if the minimum detection angle is 15 degrees or less. The system was therefore modified to collect scattering angles down to 10 degrees. To confirm experimentally that size predictions become more stable when lower scattering angles are detected, initial validations were performed on individual polystyrene beads ranging in diameter from 1 to 5 microns. We found that the lower minimum angle enabled the width of this delta-function size distribution to be predicted more accurately. Scattering patterns were then acquired and analyzed from single mouse squamous cell carcinoma cells at multiple time points. The scattering patterns exhibit angular dependencies that look unlike those of any single sphere size, but are well fit by a broad distribution of sizes, as expected. To determine the fluctuation level in the estimated size distribution due to measurement imperfections alone, formaldehyde-fixed cells were measured. Subsequent measurements on live (non-fixed) cells revealed an order of magnitude greater fluctuation in the estimated sizes compared to fixed cells. With our improved and better-understood approach to single cell angular scattering, we are now capable of reliably detecting changes in organelle size predictions due to biological causes above our measurement error of 20 nm, which enables us to apply our system to future investigations of various single cell biological processes.

  5. A double-observer approach for estimating detection probability and abundance from point counts

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.

    2000-01-01

    Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
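
    For intuition, one simple moment-style estimator consistent with the alternating double-observer design is sketched below (the study itself fits a family of models in program SURVIV; the closed form and the counts here are illustrative). Let x11 be birds detected by observer 1 while primary, x12 the birds the secondary added while observer 1 was primary, and x21, x22 the analogous counts with observer 2 as primary.

      def double_observer(x11, x12, x21, x22):
          # Per-observer miss ratios: E[xj2 / xj1] = (1 - pj) * p_other / pj.
          r1, r2 = x12 / x11, x22 / x21
          p1 = (1 - r1 * r2) / (1 + r1)    # detection probability, observer 1
          p2 = (1 - r1 * r2) / (1 + r2)    # detection probability, observer 2
          p_any = 1 - (1 - p1) * (1 - p2)  # detected by at least one observer
          n_detected = x11 + x12 + x21 + x22
          return p1, p2, p_any, n_detected / p_any  # last entry: abundance

      print(double_observer(80, 20, 90, 15))  # p1, p2, p_any, N-hat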

  6. Estimating brain connectivity when few data points are available: Perspectives and limitations.

    PubMed

    Antonacci, Yuri; Toppi, Jlenia; Caschera, Stefano; Anzolin, Alessandra; Mattia, Donatella; Astolfi, Laura

    2017-07-01

    Methods based on the use of multivariate autoregressive modeling (MVAR) have proved to be an accurate and flexible tool for the estimation of brain functional connectivity. The multivariate approach, however, implies the use of a model whose complexity (in terms of number of parameters) increases quadratically with the number of signals included in the problem. This can often lead to an underdetermined problem and to the condition of multicollinearity. The aim of this paper is to introduce and test an approach based on Ridge Regression combined with a modified version of the statistics usually adopted for these methods, to broaden the estimation of brain connectivity to those conditions in which current methods fail due to the lack of sufficient data points. We tested the performance of this new approach, in comparison with the classical approach based on ordinary least squares (OLS), by means of a simulation study implementing different ground-truth networks, under different network sizes and different numbers of data points. Simulation results showed that the new approach provides better performance, in terms of accuracy of parameter estimation and false-positive/false-negative rates, in all conditions with a low ratio of data points to model dimension, and may thus be exploited to estimate and validate connectivity patterns at the single-trial level or when only short data segments are available.
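
    A minimal sketch of the core idea, assuming the signals arrive as a (time x channels) array: the MVAR coefficients are fit with a ridge penalty so the least-squares problem stays well-posed when the number of data points is small relative to the number of parameters.

      import numpy as np

      def ridge_mvar(Y, p=2, lam=1.0):
          """Y: (T, n) array. Returns the (n, n*p) MVAR coefficient matrix."""
          T, n = Y.shape
          # Row t of X holds the lagged values [Y[t-1], ..., Y[t-p]].
          X = np.hstack([Y[p - k - 1:T - k - 1] for k in range(p)])
          # Ridge solution (X'X + lam*I)^{-1} X'Y, one regression per channel.
          A = np.linalg.solve(X.T @ X + lam * np.eye(n * p), X.T @ Y[p:])
          return A.T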

  7. The effect of vapor polarity and boiling point on breakthrough for binary mixtures on respirator carbon.

    PubMed

    Robbins, C A; Breysse, P N

    1996-08-01

    This research evaluated the effect of the polarity of a second vapor on the adsorption of a polar and a nonpolar vapor using the Wheeler model. To examine the effect of polarity, it was also necessary to observe the effect of component boiling point. The 1% breakthrough time (1% tb), kinetic adsorption capacity (W(e)), and rate constant (kv) of the Wheeler model were determined for vapor challenges on carbon beds for both p-xylene and pyrrole (referred to as test vapors) individually, and in equimolar binary mixtures with the polar and nonpolar vapors toluene, p-fluorotoluene, o-dichlorobenzene, and p-dichlorobenzene (referred to as probe vapors). Probe vapor polarity (0 to 2.5 Debye) did not systematically alter the 1% tb, W(e), or kv of the test vapors. The 1% tb and W(e) for test vapors in binary mixtures can be estimated reasonably well, using the Wheeler model, from single-vapor data (1% tb ± 30%, W(e) ± 20%). The test vapor 1% tb depended mainly on total vapor concentration in both single and binary systems. W(e) was proportional to test vapor fractional molar concentration (mole fraction) in mixtures. The kv for p-xylene was significantly different (p ≤ 0.001) when compared according to probe boiling point; however, these differences were apparently of limited importance in estimating 1% tb for the range of boiling points tested (111 to 180 degrees C). Although the polarity and boiling point of chemicals in the range tested are not practically important in predicting 1% tb with the Wheeler model, an effect due to probe boiling point is suggested, and tests with chemicals of more widely ranging boiling point are warranted. Since the 1% tb, and thus respirator service life, depends mainly on total vapor concentration, these data underscore the importance of taking into account the presence of other vapors when estimating respirator service life for a vapor in a mixture.
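
    For reference, the single-vapor breakthrough estimate referred to here is commonly written in the Wheeler (Wheeler-Jonas) form; the sketch below uses that textbook form with illustrative parameter values, not data from this study.

      import math

      def wheeler_breakthrough(We, W, C0, Q, rho_b, kv, frac=0.01):
          """We: capacity (g/g); W: carbon mass (g); C0: inlet conc. (g/cm^3);
          Q: flow (cm^3/min); rho_b: bed density (g/cm^3); kv: rate (1/min)."""
          Cx = frac * C0  # exit concentration at, e.g., 1% breakthrough
          return (We * W) / (C0 * Q) \
              - (We * rho_b) / (kv * C0) * math.log((C0 - Cx) / Cx)

      # Illustrative numbers only:
      print(wheeler_breakthrough(We=0.35, W=40.0, C0=4e-6, Q=1000.0,
                                 rho_b=0.45, kv=3000.0), "minutes")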

  8. Groundwater flux estimation in streams: A thermal equilibrium approach

    USGS Publications Warehouse

    Zhou, Yan; Fox, Garey A.; Miller, Ron B.; Mollenhauer, Robert; Brewer, Shannon K.

    2018-01-01

    Stream and groundwater interactions play an essential role in regulating flow, temperature, and water quality for stream ecosystems. Temperature gradients have been used to quantify vertical water movement in the streambed since the 1960s, but advancements in thermal methods are still possible. Seepage runs are a method commonly used to quantify exchange rates through a series of streamflow measurements but can be labor and time intensive. The objective of this study was to develop and evaluate a thermal equilibrium method as a technique for quantifying groundwater flux using monitored stream water temperature at a single point and readily available hydrological and atmospheric data. Our primary assumption was that stream water temperature at the monitored point was at thermal equilibrium with the combination of all heat transfer processes, including mixing with groundwater. By expanding the monitored stream point into a hypothetical, horizontal one-dimensional thermal modeling domain, we were able to simulate the thermal equilibrium achieved with known atmospheric variables at the point and quantify unknown groundwater flux by calibrating the model to the resulting temperature signature. Stream water temperatures were monitored at single points at nine streams in the Ozark Highland ecoregion and five reaches of the Kiamichi River to estimate groundwater fluxes using the thermal equilibrium method. When validated by comparison with seepage runs performed at the same time and reach, estimates from the two methods agreed with each other with an R2 of 0.94, a root mean squared error (RMSE) of 0.08 (m/d) and a Nash–Sutcliffe efficiency (NSE) of 0.93. In conclusion, the thermal equilibrium method was a suitable technique for quantifying groundwater flux with minimal cost and simple field installation given that suitable atmospheric and hydrological data were readily available.

  9. Groundwater flux estimation in streams: A thermal equilibrium approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Fox, Garey A.; Miller, Ron B.; Mollenhauer, Robert; Brewer, Shannon

    2018-06-01

    Stream and groundwater interactions play an essential role in regulating flow, temperature, and water quality for stream ecosystems. Temperature gradients have been used to quantify vertical water movement in the streambed since the 1960s, but advancements in thermal methods are still possible. Seepage runs are a method commonly used to quantify exchange rates through a series of streamflow measurements but can be labor and time intensive. The objective of this study was to develop and evaluate a thermal equilibrium method as a technique for quantifying groundwater flux using monitored stream water temperature at a single point and readily available hydrological and atmospheric data. Our primary assumption was that stream water temperature at the monitored point was at thermal equilibrium with the combination of all heat transfer processes, including mixing with groundwater. By expanding the monitored stream point into a hypothetical, horizontal one-dimensional thermal modeling domain, we were able to simulate the thermal equilibrium achieved with known atmospheric variables at the point and quantify unknown groundwater flux by calibrating the model to the resulting temperature signature. Stream water temperatures were monitored at single points at nine streams in the Ozark Highland ecoregion and five reaches of the Kiamichi River to estimate groundwater fluxes using the thermal equilibrium method. When validated by comparison with seepage runs performed at the same time and reach, estimates from the two methods agreed with each other with an R2 of 0.94, a root mean squared error (RMSE) of 0.08 (m/d) and a Nash-Sutcliffe efficiency (NSE) of 0.93. In conclusion, the thermal equilibrium method was a suitable technique for quantifying groundwater flux with minimal cost and simple field installation given that suitable atmospheric and hydrological data were readily available.

  10. Bottom Pressure Tides Along a Line in the Southeast Atlantic Ocean and Comparisons with Satellite Altimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Byrne, Deidre A.

    2010-01-01

    Seafloor pressure records, collected at 11 stations aligned along a single ground track of the Topex/Poseidon and Jason satellites, are analyzed for their tidal content. With very low background noise levels and approximately 27 months of high-quality records, tidal constituents can be estimated with unusually high precision. This includes many high-frequency lines up through the seventh-diurnal band. The station deployment provides a unique opportunity to compare with tides estimated from satellite altimetry, point by point along the satellite track, in a region of moderately high mesoscale variability. That variability can significantly corrupt altimeter-based tide estimates, even with 17 years of data. A method to improve the along-track altimeter estimates by correcting the data for nontidal variability is found to yield much better agreement with the bottom-pressure data. The technique should prove useful in certain demanding applications, such as altimetric studies of internal tides.
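
    At its core, the constituent estimation described here is a linear least-squares fit of cosine/sine pairs at known tidal frequencies; a minimal sketch with only the M2 and S2 constituents and synthetic hourly data:

      import numpy as np

      def tidal_amplitudes(t_hours, h, periods=(12.4206, 12.0)):  # M2, S2
          cols = [np.ones_like(t_hours)]
          for T in periods:
              w = 2 * np.pi * t_hours / T
              cols += [np.cos(w), np.sin(w)]
          coef, *_ = np.linalg.lstsq(np.column_stack(cols), h, rcond=None)
          # Amplitude of each constituent from its cosine/sine coefficients.
          return [np.hypot(coef[1 + 2 * i], coef[2 + 2 * i])
                  for i in range(len(periods))]

      t = np.arange(24.0 * 365)                       # one year, hourly
      h = 0.5 * np.cos(2 * np.pi * t / 12.4206 + 0.3) \
          + 0.01 * np.random.default_rng(1).normal(size=t.size)
      print(tidal_amplitudes(t, h))                   # ~[0.5, ~0.0]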

  11. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.

  12. Improving credibility and transparency of conservation impact evaluations through the partial identification approach.

    PubMed

    McConnachie, Matthew M; Romero, Claudia; Ferraro, Paul J; van Wilgen, Brian W

    2016-04-01

    The fundamental challenge of evaluating the impact of conservation interventions is that researchers must estimate the difference between the outcome after an intervention occurred and what the outcome would have been without it (counterfactual). Because the counterfactual is unobservable, researchers must make an untestable assumption that some units (e.g., organisms or sites) that were not exposed to the intervention can be used as a surrogate for the counterfactual (control). The conventional approach is to make a point estimate (i.e., single number along with a confidence interval) of impact, using, for example, regression. Point estimates provide powerful conclusions, but in nonexperimental contexts they depend on strong assumptions about the counterfactual that often lack transparency and credibility. An alternative approach, called partial identification (PI), is to first estimate what the counterfactual bounds would be if the weakest possible assumptions were made. Then, one narrows the bounds by using stronger but credible assumptions based on an understanding of why units were selected for the intervention and how they might respond to it. We applied this approach and compared it with conventional approaches by estimating the impact of a conservation program that removed invasive trees in part of the Cape Floristic Region. Even when we used our largest PI impact estimate, the program's control costs were 1.4 times higher than previously estimated. PI holds promise for applications in conservation science because it encourages researchers to better understand and account for treatment selection biases; can offer insights into the plausibility of conventional point-estimate approaches; could reduce the problem of advocacy in science; might be easier for stakeholders to agree on a bounded estimate than a point estimate where impacts are contentious; and requires only basic arithmetic skills. © 2015 Society for Conservation Biology.
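
    For a binary outcome, the first PI step is simple to compute, because each unobserved counterfactual mean can only lie in [0, 1]; the sketch below gives the no-assumption (worst-case) bounds on the average treatment effect, before any credible assumptions narrow them.

      import numpy as np

      def no_assumption_bounds(y_treated, y_control, share_treated):
          q = share_treated  # fraction of units exposed to the intervention
          m1, m0 = np.mean(y_treated), np.mean(y_control)
          # E[Y(1)] lies in [q*m1 + 0, q*m1 + (1-q)]; analogously for E[Y(0)].
          lower = q * m1 - ((1 - q) * m0 + q)
          upper = (q * m1 + (1 - q)) - (1 - q) * m0
          return lower, upper  # width is always 1 without assumptions

      print(no_assumption_bounds(np.array([1, 1, 0, 1]),
                                 np.array([0, 1, 0, 0]), 0.5))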

  13. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the feature-point information (before dimension reduction) of each cluster center is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimation results; thus the computational complexity is greatly reduced. Compared with the traditional traversal search estimation method, the error rate of the proposed method is reduced by 0.49, while the number of searches decreases with the change of the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.

  14. Automated estimation of leaf distribution for individual trees based on TLS point clouds

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Rutzinger, Martin; Bremer, Magnus

    2017-04-01

    Light Detection and Ranging (LiDAR), especially ground-based LiDAR (Terrestrial Laser Scanning, TLS), is an operationally used and widely available measurement tool supporting forest inventory updating and research in forest ecology. High-resolution point clouds from TLS already represent single leaves, which can be used for more precise estimation of Leaf Area Index (LAI) and for more accurate biomass estimation. However, a methodology for extracting single leaves from the unclassified point clouds of individual trees is still missing. The aim of this study is to present a novel segmentation approach to extract single leaves and derive features related to leaf morphology (such as area, slope, length, and width) for each single leaf from TLS point cloud data. For the study, two exemplary single trees were scanned in leaf-on condition on the university campus of Innsbruck during calm wind conditions. A northern red oak (Quercus rubra) was scanned by a discrete-return Optech ILRIS-3D TLS scanner and a tulip tree (Liriodendron tulipifera) with a Riegl VZ-6000 scanner. During the scanning campaign, a reference dataset was measured in parallel: 230 leaves were randomly collected around the lower branches of the trees and photographed. The developed workflow is as follows: in the first step, normal vectors and eigenvalues are calculated based on a user-specified neighborhood. Then, using the direction of the largest eigenvalue, outliers (i.e., ghost points) are removed. After that, region-growing segmentation based on the curvature and the angles between normal vectors is applied to the filtered point cloud. A RANSAC plane-fitting algorithm is applied to each segment to extract segment-based normal vectors. Using the resulting segment features, the stem and branches are labeled as non-leaf and the remaining segments are classified as leaf. The segmentation parameters were validated as follows: (i) the total area of the collected leaves was compared with that of the point cloud, (ii) the segmented leaf length-width ratios were checked, and (iii) the distributions of leaf area for the segmented and reference leaves were compared, from which the ideal parameter set was found. The results show that the leaves can be captured with the developed workflow and that slope can be determined robustly for the segmented leaves. However, area, length, and width values depend systematically on the angle and the distance from the scanner. To correct this systematic underestimation, more systematic measurements or LiDAR simulation are required for further detailed analysis. The results of the leaf segmentation algorithm show high potential for generating more precise tree models with correctly located leaves, providing more precise input models for biological modeling of LAI or for atmospheric correction studies. The presented workflow can also be used to monitor changes in leaf angle due to sun irradiation, water balance, and day-night rhythm.

  15. ASSESSMENT of POTENTIAL CARBON DIOXIDE-BASED DEMAND CONTROL VENTILATION SYSTEM PERFORMANCE in SINGLE ZONE SYSTEMS

    DTIC Science & Technology

    2013-03-21

    and timers use a time-based estimate to predict how many people are in a facility at a given point in the day. CO2-based DCV systems measure CO2...energy and latent energy from the outside air when the coils’ surface temperature is below the dew point of the air passing over the coils (ASHRAE...model assumes that the dew point water saturation pressure is the same as the dry-bulb water vapor pressure, consistent with a typical ASHRAE

  16. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    NASA Astrophysics Data System (ADS)

    Li, C.

    2012-07-01

    POS, integrating GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in special land-based scenes, such as ground façades, where usually only two vanishing points can be detected; thus, the traditional calibration approach based on three orthogonal vanishing points is challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of the two vanishing points (VX, VY); how to set initial weights for the adjustment solution of single-image vanishing points is presented, and the vanishing points and their error distributions are solved and estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion; the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.

  17. Method for measuring thermal properties using a long-wavelength infrared thermal image

    DOEpatents

    Walker, Charles L [Albuquerque, NM; Costin, Laurence S [Albuquerque, NM; Smith, Jody L [Albuquerque, NM; Moya, Mary M [Albuquerque, NM; Mercier, Jeffrey A [Albuquerque, NM

    2007-01-30

    A method for estimating the thermal properties of surface materials using long-wavelength thermal imagery by exploiting the differential heating histories of ground points in the vicinity of shadows. The use of differential heating histories of different ground points of the same surface material allows the use of a single image acquisition step to provide the necessary variation in measured parameters for calculation of the thermal properties of surface materials.

  18. Design with limited anthropometric data: A method of interpreting sums of percentiles in anthropometric design.

    PubMed

    Albin, Thomas J

    2017-07-01

    Occasionally practitioners must work with single dimensions defined as combinations (sums or differences) of percentile values, but lack information (e.g. variances) to estimate the accommodation achieved. This paper describes methods to predict accommodation proportions for such combinations of percentile values, e.g. two 90th percentile values. Kreifeldt and Nah z-score multipliers were used to estimate the proportions accommodated by combinations of percentile values of 2-15 variables; two simplified versions required less information about variance and/or correlation. The estimates were compared to actual observed proportions; for combinations of 2-15 percentile values the average absolute differences ranged between 0.5 and 1.5 percentage points. The multipliers were also used to estimate adjusted percentile values, that, when combined, estimate a desired proportion of the combined measurements. For combinations of two and three adjusted variables, the average absolute difference between predicted and observed proportions ranged between 0.5 and 3.0 percentage points. Copyright © 2017 Elsevier Ltd. All rights reserved.
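
    The z-score multiplier logic is easy to sketch: the sum of two pth-percentile values sits z_p(sigma1 + sigma2)/sigma_sum standard deviations above the mean of the summed dimension, so the accommodated proportion always exceeds p. The sketch below assumes the variances and correlation are known (illustrative numbers, not values from the paper).

      from math import sqrt
      from statistics import NormalDist

      def accommodated(p, sigma1, sigma2, rho=0.0):
          z_p = NormalDist().inv_cdf(p)
          sigma_sum = sqrt(sigma1**2 + sigma2**2 + 2 * rho * sigma1 * sigma2)
          # Proportion of the population below the sum of two p-th percentiles.
          return NormalDist().cdf(z_p * (sigma1 + sigma2) / sigma_sum)

      print(accommodated(0.90, 30.0, 20.0, rho=0.4))  # ~0.94, not 0.90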

  19. Modification and Validation of the Triglyceride-to-HDL Cholesterol Ratio as a Surrogate of Insulin Sensitivity in White Juveniles and Adults without Diabetes Mellitus: The Single Point Insulin Sensitivity Estimator (SPISE).

    PubMed

    Paulmichl, Katharina; Hatunic, Mensud; Højlund, Kurt; Jotic, Aleksandra; Krebs, Michael; Mitrakou, Asimina; Porcellati, Francesca; Tura, Andrea; Bergsten, Peter; Forslund, Anders; Manell, Hannes; Widhalm, Kurt; Weghuber, Daniel; Anderwald, Christian-Heinz

    2016-09-01

    The triglyceride-to-HDL cholesterol (TG/HDL-C) ratio was introduced as a tool to estimate insulin resistance, because circulating lipid measurements are available in routine settings. Insulin, C-peptide, and free fatty acids are components of other insulin-sensitivity indices, but their measurement is expensive. Easier and more affordable tools are of interest for both pediatric and adult patients. Study participants from the Relationship Between Insulin Sensitivity and Cardiovascular Disease [43.9 (8.3) years, n = 1260] as well as the Beta-Cell Function in Juvenile Diabetes and Obesity study cohorts [15 (1.9) years, n = 29] underwent oral-glucose-tolerance tests and euglycemic clamp tests for estimation of whole-body insulin sensitivity and calculation of insulin sensitivity indices. To refine the TG/HDL ratio, mathematical modeling was applied including body mass index (BMI), fasting TG, and HDL cholesterol and compared to the clamp-derived M-value as an estimate of insulin sensitivity. Each modeling result was scored by identifying insulin resistance and correlation coefficient. The Single Point Insulin Sensitivity Estimator (SPISE) was compared to traditional insulin sensitivity indices using area under the ROC curve (aROC) analysis and the χ² test. The novel formula for SPISE was computed as follows: SPISE = 600 × HDL-C^0.185/(TG^0.2 × BMI^1.338), with fasting HDL-C (mg/dL), fasting TG concentrations (mg/dL), and BMI (kg/m^2). A cutoff value of 6.61 corresponds to an M-value smaller than 4.7 mg·kg^-1·min^-1 (aROC, M: 0.797). SPISE showed a significantly better aROC than the TG/HDL-C ratio. SPISE aROC was comparable to the Matsuda ISI (insulin sensitivity index) and equal to the QUICKI (quantitative insulin sensitivity check index) and HOMA-IR (homeostasis model assessment-insulin resistance) when calculated with M-values. The SPISE seems well suited to surrogate whole-body insulin sensitivity from an inexpensive fasting single-point blood draw and BMI in white adolescents and adults. © 2016 American Association for Clinical Chemistry.
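
    The published formula is a one-liner; a direct implementation with the stated cutoff (inputs in mg/dL for HDL-C and TG and kg/m^2 for BMI; the example values are illustrative):

      def spise(hdl_c, tg, bmi):
          return 600.0 * hdl_c**0.185 / (tg**0.2 * bmi**1.338)

      value = spise(hdl_c=55.0, tg=100.0, bmi=24.0)
      # Values below 6.61 correspond to M < 4.7 mg/kg/min (insulin resistance).
      print(round(value, 2), "resistant" if value < 6.61 else "sensitive")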

  20. Water resources management: Hydrologic characterization through hydrograph simulation may bias streamflow statistics

    NASA Astrophysics Data System (ADS)

    Farmer, W. H.; Kiang, J. E.

    2017-12-01

    The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regard to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, a problem that complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
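
    The appeal of the simulated-hydrograph route is that all of the statistics named above derive from one series; a short sketch with pandas on synthetic daily flows:

      import numpy as np
      import pandas as pd

      def streamflow_stats(q):
          """q: daily flows as a pandas Series with a DatetimeIndex."""
          by_year = q.groupby(q.index.year)
          return {
              "mean_annual": q.mean(),
              # annual maximum exceeded in 10% of years
              "max_p10": by_year.max().quantile(0.90),
              # minimum 7-day average flow exceeded in 90% of years
              "min7day_p90": q.rolling(7).mean()
                              .groupby(q.index.year).min().quantile(0.10),
          }

      idx = pd.date_range("1990-01-01", "2019-12-31", freq="D")
      rng = np.random.default_rng(2)
      flows = pd.Series(np.exp(rng.normal(2.0, 0.8, idx.size)), index=idx)
      print(streamflow_stats(flows))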

  1. Statistical properties of four effect-size measures for mediation models.

    PubMed

    Miočević, Milica; O'Rourke, Holly P; MacKinnon, David P; Brown, Hendricks C

    2018-02-01

    This project examined the performance of classical and Bayesian estimators of four effect-size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of ab/s_Y and ab(s_X)/s_Y in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.

  2. Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.

    PubMed

    Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon

    2017-05-01

    Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.

  3. Bayesian Framework Approach for Prognostic Studies in Electrolytic Capacitor under Thermal Overstress Conditions

    DTIC Science & Technology

    2012-09-01

    make end-of-life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first... [workflow diagram residue: degradation modeling and parameter estimation under thermal/electrical stress; experimental data; state-space model; RUL/EOL prediction] ...distribution at a given single time point kP, and use this for multi-step predictions to EOL. There are several methods which exist for selecting the sigma

  4. Filter Function for Wavefront Sensing Over a Field of View

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A filter function has been derived as a means of optimally weighting the wavefront estimates obtained in image-based phase retrieval performed at multiple points distributed over the field of view of a telescope or other optical system. When the data obtained in wavefront sensing and, more specifically, image-based phase retrieval, are used for controlling the shape of a deformable mirror or other optic used to correct the wavefront, the control law obtained by use of the filter function gives a more balanced optical performance over the field of view than does a wavefront-control law obtained by use of a wavefront estimate obtained from a single point in the field of view.

  5. Parallel, stochastic measurement of molecular surface area.

    PubMed

    Juba, Derek; Varshney, Amitabh

    2008-08-01

    Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
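
    The stochastic estimator is straightforward to sketch: sample points uniformly on each atomic sphere, keep the fraction not buried inside any other sphere, and scale by that sphere's area. For two overlapping unit spheres with centers one radius apart, the exact boundary area is 6*pi (about 18.85), which the estimate approaches as more samples are drawn, the progressive behavior noted above.

      import numpy as np

      def union_sphere_area(centers, radii, n_samples=20_000, seed=0):
          rng = np.random.default_rng(seed)
          centers, radii = np.asarray(centers, float), np.asarray(radii, float)
          total = 0.0
          for i, (c, r) in enumerate(zip(centers, radii)):
              # Uniform points on sphere i via normalized Gaussian directions.
              v = rng.normal(size=(n_samples, 3))
              pts = c + r * v / np.linalg.norm(v, axis=1, keepdims=True)
              others = np.delete(np.arange(len(radii)), i)
              d2 = ((pts[:, None, :] - centers[others]) ** 2).sum(axis=2)
              exposed = (d2 >= radii[others] ** 2).all(axis=1)
              total += 4.0 * np.pi * r * r * exposed.mean()
          return total

      print(union_sphere_area([[0, 0, 0], [1, 0, 0]], [1.0, 1.0]))  # ~18.85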

  6. Single toxin dose-response models revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demidenko, Eugene, E-mail: eugened@dartmouth.edu

    The goal of this paper is to offer a rigorous analysis of the sigmoid-shape single-toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and the second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis for each of the four models, Hill, logit, probit, and Weibull, are provided. Two general model extensions are introduced: (1) the multi-target hit model, which accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO4 toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
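
    The advocated estimation setup, a binomial GLM on mortality counts rather than nonlinear regression on rates, looks like this in practice (statsmodels with a logit link; the dose-mortality counts are made up):

      import numpy as np
      import statsmodels.api as sm

      conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # toxin concentration
      dead = np.array([2, 5, 12, 18, 20])          # deaths per experiment
      n = np.full_like(dead, 20)                   # animals per experiment

      X = sm.add_constant(np.log(conc))            # log-dose predictor
      res = sm.GLM(np.column_stack([dead, n - dead]), X,
                   family=sm.families.Binomial()).fit()
      print(res.params)                                       # intercept, slope
      print(res.predict(sm.add_constant(np.log([1.0, 3.0]))))  # fitted mortality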

  7. Robust nonparametric quantification of clustering density of molecules in single-molecule localization microscopy

    PubMed Central

    Jiang, Shenghang; Park, Seongjin; Challapalli, Sai Divya; Fei, Jingyi; Wang, Yong

    2017-01-01

    We report a robust nonparametric descriptor, J′(r), for quantifying the density of clustering molecules in single-molecule localization microscopy. J′(r), based on nearest neighbor distribution functions, does not require any parameter as an input for analyzing point patterns. We show that J′(r) displays a valley shape in the presence of clusters of molecules, and the characteristics of the valley reliably report the clustering features in the data. Most importantly, the position of the J′(r) valley (rJm′) depends exclusively on the density of clustering molecules (ρc). Therefore, it is ideal for direct estimation of the clustering density of molecules in single-molecule localization microscopy. As an example, this descriptor was applied to estimate the clustering density of ptsG mRNA in E. coli bacteria. PMID:28636661
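
    J′(r) itself is the authors' descriptor, but its ingredients are classical: G(r), the nearest-neighbor distance distribution of the localizations, and F(r), the empty-space function from random test points. A sketch of the classical J(r) = (1 - G(r))/(1 - F(r)), whose valley below 1 likewise signals clustering (uniform observation box, edge effects ignored):

      import numpy as np
      from scipy.spatial import cKDTree

      def j_function(points, r_grid, box, n_test=10_000, seed=0):
          rng = np.random.default_rng(seed)
          tree = cKDTree(points)
          d_nn = tree.query(points, k=2)[0][:, 1]   # nearest neighbor, not self
          test = rng.uniform(0, box, size=(n_test, points.shape[1]))
          d_empty = tree.query(test, k=1)[0]        # empty-space distances
          G = np.array([(d_nn <= r).mean() for r in r_grid])
          F = np.array([(d_empty <= r).mean() for r in r_grid])
          return (1 - G) / (1 - F)                  # < 1 indicates clustering

      pts = np.random.default_rng(3).uniform(0, 10, size=(500, 2))
      print(j_function(pts, np.linspace(0.05, 0.5, 5), box=10.0))  # ~1 for CSR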

  8. Analysis of the phase transition in the two-dimensional Ising ferromagnet using a Lempel-Ziv string-parsing scheme and black-box data-compression utilities

    NASA Astrophysics Data System (ADS)

    Melchert, O.; Hartmann, A. K.

    2015-02-01

    In this work we consider information-theoretic observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a two-dimensional (2D) Ising ferromagnet on a square lattice of size L^2 = 128^2 for different system temperatures T. The latter were chosen from an interval enclosing the critical point Tc of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here we implement estimators for the entropy rate, excess entropy (i.e., "complexity"), and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data-compression utilities, providing approximate estimators only. For comparison and to yield results for benchmarking purposes, we implement the information-theoretic observables also based on the well-established M-block Shannon entropy, which is more tedious to apply compared to the first two "algorithmic" entropy estimation procedures. To test how well one can exploit the potential of such data-compression techniques, we aim at detecting the critical point of the 2D Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how good the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.
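
    The black-box compression route mentioned above takes only a few lines: pack the spin sequence into bytes and take compressed size per symbol as an approximate (upper-bound-flavored) entropy rate estimate.

      import zlib
      import numpy as np

      def compression_entropy(bits):
          # Approximate entropy rate in bits/symbol from zlib's output size.
          raw = np.packbits(np.asarray(bits, dtype=np.uint8)).tobytes()
          return len(zlib.compress(raw, 9)) * 8 / len(bits)

      rng = np.random.default_rng(4)
      ordered = np.zeros(100_000, dtype=np.uint8)               # T << Tc analogue
      disordered = rng.integers(0, 2, 100_000, dtype=np.uint8)  # T >> Tc analogue
      print(compression_entropy(ordered), compression_entropy(disordered))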

  9. Proof of concept Laplacian estimate derived for noninvasive tripolar concentric ring electrode with incorporated radius of the central disc and the widths of the concentric rings.

    PubMed

    Makeyev, Oleksandr; Lee, Colin; Besio, Walter G

    2017-07-01

    Tripolar concentric ring electrodes are showing great promise in a range of applications including brain-computer interfaces and seizure onset detection due to their superiority to conventional disc electrodes, in particular in accuracy of surface Laplacian estimation. Recently, we proposed a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2 that allows cancellation of all the truncation terms up to the order of 2n. This approach has been used to introduce novel multipolar and variable inter-ring distance concentric ring electrode configurations verified using the finite element method. The obtained results suggest their potential to improve Laplacian estimation compared to currently used constant inter-ring distance tripolar concentric ring electrodes. One of the main limitations of the proposed (4n + 1)-point method is that the radius of the central disc and the widths of the concentric rings are not included and therefore cannot be optimized. This study incorporates these two parameters by representing the central disc and both concentric rings as clusters of points with specific radius and widths, respectively, as opposed to the currently used single point and concentric circles. A proof-of-concept Laplacian estimate is derived for a tripolar concentric ring electrode with non-negligible radius of the central disc and non-negligible widths of the concentric rings, clearly demonstrating how both of these parameters can be incorporated into the (4n + 1)-point method.

  10. A Preliminary Work on Layout SLAM for Reconstruction of Indoor Corridor Environments

    NASA Astrophysics Data System (ADS)

    Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.

    2017-09-01

    We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real-time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner-point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching the vanishing directions of consecutive video frames on the Gaussian sphere. Using single-image indoor layout features to initialize the system permits the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features that are invariant under scale, translation, and rotation. We propose a new feature-matching cost function that considers both local and global context information; it consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes at York University campus buildings and on the available RAWSEEDS dataset. The results show that the proposed method performs robustly, producing very limited position and orientation errors.

  11. 32 CFR 525.4 - Entry authorization (policy).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... single or multiple entries. (4) Captains of ships and/or marine vessels planning to enter Kwajalein... of passengers (include list when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and...

  12. 32 CFR 525.4 - Entry authorization (policy).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... single or multiple entries. (4) Captains of ships and/or marine vessels planning to enter Kwajalein... of passengers (include list when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and...

  13. A Comparison of Quantum and Molecular Mechanical Methods to Estimate Strain Energy in Druglike Fragments.

    PubMed

    Sellers, Benjamin D; James, Natalie C; Gobbi, Alberto

    2017-06-26

    Reducing internal strain energy in small molecules is critical for designing potent drugs. Quantum mechanical (QM) and molecular mechanical (MM) methods are often used to estimate these energies. In an effort to determine which methods offer an optimal balance in accuracy and performance, we have carried out torsion scan analyses on 62 fragments. We compared nine QM and four MM methods to reference energies calculated at a higher level of theory: CCSD(T)/CBS single point energies (coupled cluster with single, double, and perturbative triple excitations at the complete basis set limit) calculated on optimized geometries using MP2/6-311+G**. The results show that both the more recent MP2.X perturbation method as well as MP2/CBS perform quite well. In addition, combining a Hartree-Fock geometry optimization with a MP2/CBS single point energy calculation offers a fast and accurate compromise when dispersion is not a key energy component. Among MM methods, the OPLS3 force field accurately reproduces CCSD(T)/CBS torsion energies on more test cases than the MMFF94s or Amber12:EHT force fields, which struggle with aryl-amide and aryl-aryl torsions. Using experimental conformations from the Cambridge Structural Database, we highlight three example structures for which OPLS3 significantly overestimates the strain. The energies and conformations presented should enable scientists to estimate the expected error for the methods described and we hope will spur further research into QM and MM methods.

  14. Reliability of infrared thermometric measurements of skin temperature in the hand.

    PubMed

    Packham, Tara L; Fok, Diana; Frederiksen, Karen; Thabane, Lehana; Buckley, Norman

    2012-01-01

    Clinical measurement study. Skin temperature asymmetries (STAs) are used in the diagnosis of complex regional pain syndrome (CRPS), but little evidence exists for reliability of the equipment and methods. This study examined the reliability of an inexpensive infrared (IR) thermometer and measurement points in the hand for the study of STA. ST was measured three times at five points on both hands with an IR thermometer by two raters in 20 volunteers (12 normals and 8 CRPS). ST measurement results using IR thermometers support inter-rater reliability: intraclass correlation coefficient (ICC) estimate for single measures 0.80; all ST measurement points were also highly reliable (ICC single measures, 0.83-0.91). The equipment demonstrated excellent reliability, with little difference in the reliability of the five measurement sites. These preliminary findings support their use in future CRPS research. Not applicable. Copyright © 2012 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.

  15. Mild cognitive impairment: baseline and longitudinal structural MR imaging measures improve predictive prognosis.

    PubMed

    McEvoy, Linda K; Holland, Dominic; Hagler, Donald J; Fennema-Notestine, Christine; Brewer, James B; Dale, Anders M

    2011-06-01

    To assess whether single-time-point and longitudinal volumetric magnetic resonance (MR) imaging measures provide predictive prognostic information in patients with amnestic mild cognitive impairment (MCI). This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Written informed consent was obtained from all participants or the participants' legal guardians. Cross-validated discriminant analyses of MR imaging measures were performed to differentiate 164 Alzheimer disease (AD) cases from 203 healthy control cases. Separate analyses were performed by using data from MR images obtained at one time point or by combining single-time-point measures with 1-year change measures. Resulting discriminant functions were applied to 317 MCI cases to derive individual patient risk scores. Risk of conversion to AD was estimated as a continuous function of risk score percentile. Kaplan-Meier survival curves were computed for risk score quartiles. Odds ratios (ORs) for conversion to AD were computed between the highest and lowest quartile scores. Individualized risk estimates from baseline MR examinations indicated that the 1-year risk of conversion to AD ranged from 3% to 40% (average group risk, 17%; OR, 7.2 for highest vs lowest score quartiles). Including measures of 1-year change in global and regional volumes significantly improved risk estimates (P = .001), with the risk of conversion to AD in the subsequent year ranging from 3% to 69% (average group risk, 27%; OR, 12.0 for highest vs lowest score quartiles). Relative to the risk of conversion to AD conferred by the clinical diagnosis of MCI alone, MR imaging measures yield substantially more informative patient-specific risk estimates. Such predictive prognostic information will be critical if disease-modifying therapies become available. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101975/-/DC1. RSNA, 2011

  16. A multi-calibrated mitochondrial phylogeny of extant Bovidae (Artiodactyla, Ruminantia) and the importance of the fossil record to systematics.

    PubMed

    Bibi, Faysal

    2013-08-08

    Molecular phylogenetics has provided unprecedented resolution in the ruminant evolutionary tree. However, molecular age estimates using only one or a few (often misapplied) fossil calibration points have produced a diversity of conflicting ages for important evolutionary events within this clade. I here identify 16 fossil calibration points of relevance to the phylogeny of Bovidae and Ruminantia and use these, individually and together, to construct a dated molecular phylogeny through a reanalysis of the full mitochondrial genome of over 100 ruminant species. The new multi-calibrated tree provides ages that are younger overall than found in previous studies. Among these are young ages for the origin of crown Ruminantia (39.3-28.8 Ma), and crown Bovidae (17.3-15.1 Ma). These are argued to be reasonable hypotheses given that many basal fossils assigned to these taxa may in fact lie on the stem groups leading to the crown clades, thus inflating previous age estimates. Areas of conflict between molecular and fossil dates do persist, however, especially with regard to the base of the rapid Pecoran radiation and the sister relationship of Moschidae to Bovidae. Results of the single-calibrated analyses also show that a very wide range of molecular age estimates are obtainable using different calibration points, and that the choice of calibration point can influence the topology of the resulting tree. Compared to the single-calibrated trees, the multi-calibrated tree exhibits smaller variance in estimated ages and better reflects the fossil record. The use of a large number of vetted fossil calibration points with soft bounds is promoted as a better approach than using just one or a few calibrations, or relying on internal-congruency metrics to discard good fossil data. This study also highlights the importance of considering morphological and ecological characteristics of clades when delimiting higher taxa. I also illustrate how phylogeographic and paleoenvironmental hypotheses inferred from a tree containing only extant taxa can be problematic without consideration of the fossil record. Incorporating the fossil record of Ruminantia is a necessary step for future analyses aiming to reconstruct the evolutionary history of this clade.

  17. Multi-Axis Identifiability Using Single-Surface Parameter Estimation Maneuvers on the X-48B Blended Wing Body

    NASA Technical Reports Server (NTRS)

    Ratnayake, Nalin A.; Koshimoto, Ed T.; Taylor, Brian R.

    2011-01-01

    The problem of parameter estimation on hybrid-wing-body type aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This fact adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of system inputs must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, asymmetric, single-surface maneuvers are used to excite multiple axes of aircraft motion simultaneously. Time history reconstructions of the moment coefficients computed by the solved regression models are then compared to each other in order to assess relative model accuracy. The reduced flight-test time required for inner-surface parameter estimation using multi-axis methods was found to come at the cost of slightly reduced accuracy and statistical confidence for linear regression methods. Since the multi-axis maneuvers captured parameter estimates similar to both longitudinal and lateral-directional maneuvers combined, the number of test points required for the inner, aileron-like surfaces could in theory have been reduced by 50%. While trends were similar, however, individual parameters estimated by a multi-axis model typically differed from those estimated by a single-axis model by an average absolute difference of roughly 15-20%, with decreased statistical significance. The multi-axis model exhibited an increase in overall fit error of roughly 1-5% for the linear regression estimates with respect to the single-axis model, when each was applied to the flight data designed for it.

  18. Presymptomatic atrophy in autosomal dominant Alzheimer's disease: A serial magnetic resonance imaging study.

    PubMed

    Kinnunen, Kirsi M; Cash, David M; Poole, Teresa; Frost, Chris; Benzinger, Tammie L S; Ahsan, R Laila; Leung, Kelvin K; Cardoso, M Jorge; Modat, Marc; Malone, Ian B; Morris, John C; Bateman, Randall J; Marcus, Daniel S; Goate, Alison; Salloway, Stephen P; Correia, Stephen; Sperling, Reisa A; Chhatwal, Jasmeer P; Mayeux, Richard P; Brickman, Adam M; Martins, Ralph N; Farlow, Martin R; Ghetti, Bernardino; Saykin, Andrew J; Jack, Clifford R; Schofield, Peter R; McDade, Eric; Weiner, Michael W; Ringman, John M; Thompson, Paul M; Masters, Colin L; Rowe, Christopher C; Rossor, Martin N; Ourselin, Sebastien; Fox, Nick C

    2018-01-01

    Identifying at what point atrophy rates first change in Alzheimer's disease is important for informing the design of presymptomatic trials. Serial T1-weighted magnetic resonance imaging scans of 94 participants (28 noncarriers, 66 carriers) from the Dominantly Inherited Alzheimer Network were used to measure brain, ventricular, and hippocampal atrophy rates. For each structure, nonlinear mixed-effects models estimated the change-point when atrophy rates deviate from normal and the rates of change before and after this point. Atrophy increased after the change-point, which occurred 1-1.5 years (assuming a single step change in atrophy rate) or 3-8 years (assuming gradual acceleration of atrophy) before expected symptom onset. At expected symptom onset, estimated atrophy rates were at least 3.6 times those before the change-point. Atrophy rates are pathologically increased up to seven years before "expected onset". During this period, atrophy rates may be useful for inclusion and tracking of disease progression. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
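
    For intuition, the single-step change-point model described above can be sketched as a piecewise-linear fit to one structure's longitudinal volumes. This is a minimal sketch assuming Python with NumPy/SciPy; the simulated volumes and the plain least-squares fit stand in for the study's nonlinear mixed-effects analysis:

      # Single-step change-point model: atrophy rate r0 before tau, r0 + r1 after
      import numpy as np
      from scipy.optimize import least_squares

      t = np.linspace(-10.0, 2.0, 25)        # years from expected symptom onset
      rng = np.random.default_rng(6)
      vol = (100 - 0.2 * t - np.where(t > -4, 1.0 * (t + 4), 0.0)
             + rng.normal(0, 0.3, t.size))   # simulated truth: tau = -4

      def resid(p):
          v0, r0, r1, tau = p
          pred = v0 - r0 * t - np.where(t > tau, r1 * (t - tau), 0.0)
          return pred - vol

      fit = least_squares(resid, x0=(100.0, 0.1, 0.5, -5.0))
      print(dict(zip(("v0", "r0", "r1", "tau"), fit.x.round(2))))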

  19. A preliminary report on the genetic variation in pointed gourd (Trichosanthes dioica Roxb.) as assessed by random amplified polymorphic DNA.

    PubMed

    Adhikari, S; Biswas, A; Bandyopadhyay, T K; Ghosh, P D

    2014-06-01

    Pointed gourd (Trichosanthes dioica Roxb.) is an economically important cucurbit that is extensively propagated through vegetative means, viz vine and root cuttings. Because the accessions are poorly characterized, it is important at the beginning of a breeding programme to discriminate among available genotypes and establish the level of genetic diversity. The genetic diversity of 10 pointed gourd races, referred to as accessions, was evaluated. DNA profiles were generated using 10 sequence-independent RAPD markers. A total of 58 scorable loci were observed, of which 18 (31.03%) were considered polymorphic. Genetic diversity parameters [average and effective number of alleles, Shannon's index, percent polymorphism, Nei's gene diversity, polymorphic information content (PIC)] for RAPD, along with UPGMA clustering based on Jaccard's coefficient, were estimated. The UPGMA dendrogram constructed from the RAPD analysis grouped the 10 pointed gourd accessions into a single cluster, which may represent members of one heterotic group. RAPD analysis showed promise as an effective tool for estimating genetic polymorphism in different accessions of pointed gourd.
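
    As an illustration of the clustering step reported above, the following minimal Python sketch builds a UPGMA dendrogram from Jaccard distances on binary band scores (SciPy's 'average' linkage is UPGMA); the band matrix is a random placeholder for the 10 accessions x 58 loci data:

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.cluster.hierarchy import linkage, dendrogram

      rng = np.random.default_rng(0)
      bands = rng.integers(0, 2, size=(10, 58))  # rows: accessions, cols: RAPD loci

      # Jaccard distance on presence/absence data (1 - Jaccard similarity)
      dist = pdist(bands.astype(bool), metric="jaccard")

      tree = linkage(dist, method="average")        # 'average' linkage = UPGMA
      print(dendrogram(tree, no_plot=True)["ivl"])  # leaf order of accessions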

  20. Anatomy guided automated SPECT renal seed point estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar; Kumar, Sailendra

    2010-04-01

    Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, and the challenge involved is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating seed point locations for both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is the premise that the anatomical location of the bladder relative to the kidneys does not differ much across patients. A model is generated based on manual segmentation of the bladder and both kidneys in 10 patient datasets (including sum and max images). Centroids are estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is performed first, and the bladder centroid coordinates are then fed into the model to generate seed points for the kidneys. The percentage errors observed in organ centroid coordinates, from ground truth to the values estimated by our approach, are acceptable: approximately 1%, 6% and 2% in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.
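
    The premise above — that kidney seed points can be regressed from the bladder centroid — can be sketched in a few lines. A minimal sketch assuming Python/NumPy; the training centroids are synthetic stand-ins for the 10 manually segmented datasets:

      import numpy as np

      # Training data: bladder centroids (x, y) and the corresponding
      # left/right kidney centroids, flattened to 4 target coordinates.
      bladder = np.array([[64, 90], [66, 92], [63, 88], [65, 91], [64, 89],
                          [67, 93], [62, 87], [65, 90], [66, 91], [63, 89]], float)
      kidneys = bladder[:, None, :] + np.array([[-20.0, -35.0], [20.0, -35.0]])
      kidneys = (kidneys + np.random.default_rng(1).normal(0, 1, kidneys.shape)
                 ).reshape(10, 4)

      # Least-squares fit of [x_b, y_b, 1] -> kidney coordinates
      A = np.hstack([bladder, np.ones((10, 1))])
      coef, *_ = np.linalg.lstsq(A, kidneys, rcond=None)

      new_bladder = np.array([65.0, 90.5])
      seed = np.append(new_bladder, 1.0) @ coef
      print(seed.reshape(2, 2))  # predicted (x, y) seed points, left and right kidney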

  1. Ensemble Space-Time Correlation of Plasma Turbulence in the Solar Wind.

    PubMed

    Matthaeus, W H; Weygand, J M; Dasso, S

    2016-06-17

    Single-point measurements of turbulence cannot distinguish variations in space and time. We employ an ensemble of one- and two-point measurements in the solar wind to estimate the space-time correlation function in the comoving plasma frame. The method is illustrated using near-Earth spacecraft observations, employing ACE, Geotail, IMP-8, and Wind data sets. New results include an evaluation of both correlation time and correlation length from a single method, and a new assessment of the accuracy of the familiar frozen-in flow approximation. This novel view of the space-time structure of turbulence may prove essential in exploratory space missions such as Solar Probe Plus and Solar Orbiter, for which the frozen-in flow hypothesis may not be a useful approximation.

  2. One Small Step for a Man: Estimation of Gender, Age and Height from Recordings of One Step by a Single Inertial Sensor

    PubMed Central

    Riaz, Qaiser; Vögele, Anna; Krüger, Björn; Weber, Andreas

    2015-01-01

    A number of previous works have shown that information about a subject is encoded in sparse kinematic information, such as that revealed by so-called point-light walkers. With the work at hand, we extend these results to classification of soft biometrics from inertial sensor recordings at a single body location from a single step. We recorded accelerations and angular velocities of 26 subjects using inertial measurement units (IMUs) attached at four locations (chest, lower back, right wrist and left ankle) while they performed standardized gait tasks. The collected data were segmented into individual walking steps. We trained random forest classifiers to estimate soft biometrics (gender, age and height). We applied two different validation methods to the process, 10-fold cross-validation and subject-wise cross-validation. For all three classification tasks, we achieve high accuracy values for all four sensor locations. From these results, we conclude that the data of a single walking step (6D: accelerations and angular velocities) allow for a robust estimation of the gender, height and age of a person. PMID:26703601
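
    A minimal Python sketch of the pipeline described above — per-step features feeding a random forest, validated subject-wise. The feature matrix and labels are random placeholders, and scikit-learn's GroupKFold keeps all steps of a subject in one fold:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import GroupKFold, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(260, 24))           # e.g., per-axis statistics of one step
      gender = rng.integers(0, 2, 260)         # hypothetical labels
      subjects = np.repeat(np.arange(26), 10)  # 26 subjects, 10 steps each

      clf = RandomForestClassifier(n_estimators=200, random_state=0)

      # Subject-wise cross-validation, the stricter of the two schemes above
      scores = cross_val_score(clf, X, gender, cv=GroupKFold(n_splits=5),
                               groups=subjects)
      print(scores.mean())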

  3. Distributed processing of a GPS receiver network for a regional ionosphere map

    NASA Astrophysics Data System (ADS)

    Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun

    2018-01-01

    This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method is operated by multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver's differential code bias and vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field-collected measurements were performed.
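
    To make the data flow concrete: each grid point's VID is a weighted combination of nearby pierce-point estimates. A minimal sketch in Python; inverse-distance weighting stands in here for the paper's Kriging estimator, and all numbers are illustrative:

      import numpy as np

      def combine_vids(grid_pt, pierce_pts, vid_estimates, power=2.0):
          """Weight each local Kalman filter's VID estimate by the proximity
          of its ionospheric pierce point to the grid point."""
          d = np.linalg.norm(pierce_pts - grid_pt, axis=1)
          w = 1.0 / np.maximum(d, 1e-9) ** power
          return np.sum(w * vid_estimates) / np.sum(w)

      pierce_pts = np.array([[1.0, 0.5], [0.2, 1.1], [0.8, 0.9]])  # lat/lon offsets (deg)
      vids = np.array([4.2, 3.9, 4.0])                             # delays (m)
      print(combine_vids(np.array([0.5, 0.8]), pierce_pts, vids))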

  4. [Estimation of urban non-point source pollution loading and its factor analysis in the Pearl River Delta].

    PubMed

    Liao, Yi-Shan; Zhuo, Mu-Ning; Li, Ding-Qiang; Guo, Tai-Long

    2013-08-01

    In the Pearl River Delta region, urban rivers have been seriously polluted, and the input of non-point source pollutants, such as chemical oxygen demand (COD), into the rivers cannot be neglected. During 2009-2010, water quality at eight different catchments in the Fenjiang River of Foshan city was monitored, and the COD loads for eight rivulet sewages were calculated under different rainfall conditions. Rainfall and land-use type both played important roles in the COD loading, with rainfall having the greater influence. Consequently, a COD loading formula was constructed, defined as a function of runoff and land-use type derived from the SCS model and a land-use map, with which COD loading could be evaluated and predicted. The mean simulation accuracy for a single rainfall event was 75.51%; long-term simulation accuracy was better than that for single rainfall events. In 2009, the estimated COD loading and its loading intensity were 8053 t and 339 kg/(hm2·a), respectively, and industrial land was regarded as the main source area of COD pollution. Severe non-point source pollution, such as COD in the Fenjiang River, must receive more attention in the future.

  5. Order Under Uncertainty: Robust Differential Expression Analysis Using Probabilistic Models for Pseudotime Inference

    PubMed Central

    Campbell, Kieran R.

    2016-01-01

    Single-cell gene expression profiling can be used to quantify transcriptional dynamics in temporal processes, such as cell differentiation, using computational methods to label each cell with a ‘pseudotime’ where true time-series experimentation is too difficult to perform. However, owing to the high variability in gene expression between individual cells, there is an inherent uncertainty in the precise temporal ordering of the cells. Pre-existing methods for pseudotime estimation have predominantly given point estimates, precluding a rigorous analysis of the implications of uncertainty. We use probabilistic modelling techniques to quantify pseudotime uncertainty and propagate this into downstream differential expression analysis. We demonstrate that reliance on a point estimate of pseudotime can lead to inflated false discovery rates and that probabilistic approaches provide greater robustness and measures of the temporal resolution that can be obtained from pseudotime inference. PMID:27870852

  6. An Integrated Optimal Estimation Approach to Spitzer Space Telescope Focal Plane Survey

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.; Brugarolas, Paul B.; Boussalis, D.

    2004-01-01

    This paper discusses an accurate and efficient method for the focal plane survey that was used for the Spitzer Space Telescope. The approach is based on a high-order 37-state Instrument Pointing Frame (IPF) Kalman filter that combines both engineering parameters and science parameters into a single filter formulation. In this approach, engineering parameters such as pointing alignments, thermomechanical drift and gyro drifts are estimated along with science parameters such as plate scales and optical distortions. This integrated approach has many advantages compared to estimating the engineering and science parameters separately. The resulting focal plane survey approach is applicable to a diverse range of science instruments, such as imaging cameras, spectroscopy slits, and scanning-type arrays alike. The paper summarizes results from applying the IPF Kalman filter to calibrating the Spitzer Space Telescope focal plane, containing the MIPS, IRAC, and IRS science instrument arrays.

  7. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  8. Methods for Assessment of Species Richness and Occupancy Across Space, Time, Taxonomic Groups, and Ecoregions

    DTIC Science & Technology

    2017-03-26

    logistic constraints and associated travel time between points in the central and western Great Basin. The geographic and temporal breadth of our...surveys (MacKenzie and Royle 2005). In most cases, less time is spent traveling between sites on a given day when the single-day design is implemented...with the single-day design (110 hr). These estimates did not include return-travel time, which did not limit sampling effort. As a result, we could

  9. Reproducibility of preclinical animal research improves with heterogeneity of study samples

    PubMed Central

    Vogt, Lucile; Sena, Emily S.; Würbel, Hanno

    2018-01-01

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495

  10. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
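
    The core idea — one event scoring at many tally points — can be sketched in a few lines. This is a simplified 1-D collision-type KDE tally in Python with an Epanechnikov kernel and a fixed bandwidth; the paper's MFP KDE instead scales distances by mean free paths:

      import numpy as np

      def kde_score(tally_pts, event_x, weight, h=0.5):
          """Deposit one event's score at every tally point within bandwidth h."""
          u = (tally_pts - event_x) / h
          k = np.where(np.abs(u) < 1.0, 0.75 * (1.0 - u**2), 0.0) / h  # Epanechnikov
          return weight * k

      tally_pts = np.linspace(0.0, 10.0, 101)
      scores = np.zeros_like(tally_pts)
      rng = np.random.default_rng(2)
      for _ in range(1000):                  # hypothetical collision sites
          scores += kde_score(tally_pts, rng.uniform(0, 10), weight=1.0)
      scores /= 1000
      print(scores[:5])                      # estimated collision density near x = 0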

  11. Estimation of Drug Effectiveness by Modeling Three Time-dependent Covariates: An Application to Data on Cardioprotective Medications in the Chronic Dialysis Population

    PubMed Central

    Phadnis, Milind A.; Shireman, Theresa I.; Wetmore, James B.; Rigler, Sally K.; Zhou, Xinhua; Spertus, John A.; Ellerbeck, Edward F.; Mahnken, Jonathan D.

    2014-01-01

    In a population of chronic dialysis patients with an extensive burden of cardiovascular disease, estimation of the effectiveness of cardioprotective medication in literature is based on calculation of a hazard ratio comparing hazard of mortality for two groups (with or without drug exposure) measured at a single point in time or through the cumulative metric of proportion of days covered (PDC) on medication. Though both approaches can be modeled in a time-dependent manner using a Cox regression model, we propose a more complete time-dependent metric for evaluating cardioprotective medication efficacy. We consider that drug effectiveness is potentially the result of interactions between three time-dependent covariate measures, current drug usage status (ON versus OFF), proportion of cumulative exposure to drug at a given point in time, and the patient’s switching behavior between taking and not taking the medication. We show that modeling of all three of these time-dependent measures illustrates more clearly how varying patterns of drug exposure affect drug effectiveness, which could remain obscured when modeled by the more standard single time-dependent covariate approaches. We propose that understanding the nature and directionality of these interactions will help the biopharmaceutical industry in better estimating drug efficacy. PMID:25343005

  12. Estimation of Drug Effectiveness by Modeling Three Time-dependent Covariates: An Application to Data on Cardioprotective Medications in the Chronic Dialysis Population.

    PubMed

    Phadnis, Milind A; Shireman, Theresa I; Wetmore, James B; Rigler, Sally K; Zhou, Xinhua; Spertus, John A; Ellerbeck, Edward F; Mahnken, Jonathan D

    2014-01-01

    In a population of chronic dialysis patients with an extensive burden of cardiovascular disease, estimation of the effectiveness of cardioprotective medication in literature is based on calculation of a hazard ratio comparing hazard of mortality for two groups (with or without drug exposure) measured at a single point in time or through the cumulative metric of proportion of days covered (PDC) on medication. Though both approaches can be modeled in a time-dependent manner using a Cox regression model, we propose a more complete time-dependent metric for evaluating cardioprotective medication efficacy. We consider that drug effectiveness is potentially the result of interactions between three time-dependent covariate measures, current drug usage status (ON versus OFF), proportion of cumulative exposure to drug at a given point in time, and the patient's switching behavior between taking and not taking the medication. We show that modeling of all three of these time-dependent measures illustrates more clearly how varying patterns of drug exposure affect drug effectiveness, which could remain obscured when modeled by the more standard single time-dependent covariate approaches. We propose that understanding the nature and directionality of these interactions will help the biopharmaceutical industry in better estimating drug efficacy.

  13. Uncertainty Propagation Methods for High-Dimensional Complex Systems

    NASA Astrophysics Data System (ADS)

    Mukherjee, Arpan

    Researchers are developing ever smaller aircraft called Micro Aerial Vehicles (MAVs). The Space Robotics Group has joined the field by developing a dragonfly-inspired MAV. This thesis presents two contributions to this project. The first is the development of a dynamical model of the internal MAV components to be used for tuning design parameters and as a future plant model. This model is derived using the Lagrangian method and differs from others because it accounts for the internal dynamics of the system. The second contribution of this thesis is an estimation algorithm that can be used to determine prototype performance and verify the dynamical model from the first part. Based on the Gauss-Newton Batch Estimator, this algorithm uses a single camera and known points of interest on the wing to estimate the wing kinematic angles. Unlike other single-camera methods, this method is probabilistically based rather than being geometric.
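
    To illustrate the estimator named above, here is a Gauss-Newton iteration reduced to its simplest camera-like case: recovering a single rotation angle from known points of interest and their observed positions. The geometry and noise are illustrative, not from the thesis:

      import numpy as np

      def rot(theta):
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s], [s, c]])

      pts = np.array([[1.0, 0.0], [0.8, 0.4], [0.5, 0.9]])  # wing markers (body frame)
      theta_true = 0.6
      obs = pts @ rot(theta_true).T + np.random.default_rng(7).normal(0, 0.005, (3, 2))

      theta = 0.0
      for _ in range(10):                         # Gauss-Newton iterations
          r = (pts @ rot(theta).T - obs).ravel()  # stacked residuals
          dR = np.array([[-np.sin(theta), -np.cos(theta)],
                         [np.cos(theta), -np.sin(theta)]])  # dR/dtheta
          J = (pts @ dR.T).ravel()[:, None]       # Jacobian of residuals
          theta -= np.linalg.lstsq(J, r, rcond=None)[0].item()
      print(theta)                                # converges toward theta_true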

  14. Single point estimation of phenytoin dosing: a reappraisal.

    PubMed

    Koup, J R; Gibaldi, M; Godolphin, W

    1981-11-01

    A previously proposed method for estimating phenytoin dosing requirements using a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24-hour interval following the IV loading dose could be used to more reliably predict the phenytoin dose requirement. Because of the nonlinear relationship that exists between the phenytoin dose administration rate (RO) and the mean steady-state serum concentration (CSS), small errors in predicting the required RO result in much larger errors in CSS.
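
    The closing caution — that small errors in RO produce much larger errors in CSS — follows from phenytoin's Michaelis-Menten elimination, where at steady state RO = Vmax*CSS/(Km + CSS), so CSS = Km*RO/(Vmax - RO). A minimal Python sketch with illustrative (not patient-specific) Vmax and Km values:

      Vmax = 500.0  # mg/day (illustrative)
      Km = 4.0      # mg/L (illustrative)

      def css(ro):
          # Steady-state concentration for dosing rate ro (Michaelis-Menten)
          return Km * ro / (Vmax - ro)

      ro_true = 350.0                   # mg/day
      for err in (0.0, 0.05, 0.10):     # 0, 5, 10% overestimate of RO
          ro = ro_true * (1 + err)
          print(f"RO +{err:4.0%} -> CSS = {css(ro):5.1f} mg/L "
                f"({css(ro)/css(ro_true) - 1:+.0%})")
      # A 10% error in RO yields roughly a 43% error in CSS at these values.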

  15. The Application of Function Points to Predict Source Lines of Code for Software Development

    DTIC Science & Technology

    1992-09-01

    there are some disadvantages. Software estimating tools are expensive. A single tool may cost more than $15,000 due to the high market value of the...term and Lang variables simultaneously only added marginal improvements over models with these terms included singly. Using all the available

  16. A comparison of capillary and rotational viscometry of aqueous solutions of hypromellose.

    PubMed

    Sklubalová, Z; Zatloukal, Z

    2007-10-01

    Comparing capillary and rotational viscometry of mildly pseudoplastic solutions of hypromellose (HPMC 4000) using only a single-point value of viscosity is difficult. Single-point comparison has become topical as a consequence of the pharmacopoeial requirement that the apparent viscosity of a 2% hypromellose solution should be read at a shear rate of approximately 10 s(-1). This communication focuses on estimating the suitable shear rate, Dη, at which the apparent viscosity read using a rotational viscometer is numerically equal to the dynamic viscosity read using a capillary viscometer. For solutions of HPMC in concentrations up to 2% w/v, the non-linear regression equations generated showed that the Dη value is influenced by the dynamic viscosity and/or by the originally derived linear velocity of the solution flowing through the capillary viscometer tube. To compare the apparent viscosity read using a rotational viscometer with the dynamic viscosity read using a capillary viscometer, exact estimation of the shear rate Dη at which both viscosities are numerically equal is essential, since it is markedly affected by the concentration of the HPMC solution.

  17. Bird biodiversity assessments in temperate forest: the value of point count versus acoustic monitoring protocols.

    PubMed

    Klingbeil, Brian T; Willig, Michael R

    2015-01-01

    Effective monitoring programs for biodiversity are needed to assess trends in biodiversity and evaluate the consequences of management. This is particularly true for birds and faunas that occupy interior forest and other areas of low human population density, as these are frequently under-sampled compared to other habitats. For birds, Autonomous Recording Units (ARUs) have been proposed as a supplement or alternative to point counts made by human observers to enhance monitoring efforts. We employed two strategies (i.e., simultaneous-collection and same-season) to compare point count and ARU methods for quantifying species richness and composition of birds in temperate interior forests. The simultaneous-collection strategy compares surveys by ARUs and point counts, with methods matched in time, location, and survey duration such that the person and machine simultaneously collect data. The same-season strategy compares surveys from ARUs and point counts conducted at the same locations throughout the breeding season, but methods differ in the number, duration, and frequency of surveys. This second strategy more closely follows the ways in which monitoring programs are likely to be implemented. Site-specific estimates of richness (but not species composition) differed between methods; however, the nature of the relationship was dependent on the assessment strategy. Estimates of richness from point counts were greater than estimates from ARUs in the simultaneous-collection strategy. Woodpeckers in particular, were less frequently identified from ARUs than point counts with this strategy. Conversely, estimates of richness were lower from point counts than ARUs in the same-season strategy. Moreover, in the same-season strategy, ARUs detected the occurrence of passerines at a higher frequency than did point counts. Differences between ARU and point count methods were only detected in site-level comparisons. Importantly, both methods provide similar estimates of species richness and composition for the region. Consequently, if single visits to sites or short-term monitoring are the goal, point counts will likely perform better than ARUs, especially if species are rare or vocalize infrequently. However, if seasonal or annual monitoring of sites is the goal, ARUs offer a viable alternative to standard point-count methods, especially in the context of large-scale or long-term monitoring of temperate forest birds.

  18. An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses

    NASA Technical Reports Server (NTRS)

    Lee, Man Hoi; Spergel, David N.

    1990-01-01

    The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.

  19. A spectral geometric model for Compton single scatter in PET based on the single scatter simulation approximation

    NASA Astrophysics Data System (ADS)

    Kazantsev, I. G.; Olsen, U. L.; Poulsen, H. F.; Hansen, P. C.

    2018-02-01

    We investigate the idealized mathematical model of single scatter in PET for a detector system possessing excellent energy resolution. The model has the form of integral transforms estimating the distribution of photons undergoing a single Compton scattering at a certain angle. The total single scatter is interpreted as the volume integral over scatter points that constitute a body of rotation with a football-like shape, while single scattering at a certain angle is evaluated as the surface integral over the boundary of that body of rotation. The equations for total and sample single scatter calculations are derived using the single scatter simulation approximation. We show that the three-dimensional slice-by-slice filtered backprojection algorithm is applicable for scatter data inversion provided that the attenuation map is assumed to be constant. The results of the numerical experiments are presented.

  20. Detection of single nano-defects in photonic crystals between crossed polarizers.

    PubMed

    Grepstad, Jon Olav; Kaspar, Peter; Johansen, Ib-Rune; Solgaard, Olav; Sudbø, Aasmund

    2013-12-16

    We investigate, by simulations and experiments, the light scattering of small particles trapped in photonic crystal membranes supporting guided resonance modes. Our results show that, due to amplified Rayleigh small-particle scattering, such membranes can be utilized to make a sensor that can detect single nano-particles. We have designed a biomolecule sensor that uses cross-polarized excitation and detection for increased sensitivity. Based on Rayleigh scattering theory and simulation results, the fabricated sensor is estimated to have a detection limit of 26 nm, corresponding to the size of a single virus. The sensor can potentially be made both cheap and compact, to facilitate use at the point of care.

  1. Estimating IMU heading error from SAR images.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin Walter

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but is done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

  2. The development and implementation of a method using blue mussels (Mytilus spp.) as biosentinels of Cryptosporidium spp. and Toxoplasma gondii contamination in marine aquatic environments

    EPA Science Inventory

    It is estimated that protozoan parasites still account for greater than one third of waterborne disease outbreaks reported. Methods used to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminan...

  3. Multi-Gaussian fitting for pulse waveform using Weighted Least Squares and multi-criteria decision making method.

    PubMed

    Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan

    2013-11-01

    Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All these methods decompose a single-period pulse waveform into a fixed number (such as 3, 4 or 5) of individual waves. Furthermore, those methods do not pay much attention to the estimation error of the key points in the pulse waveform, even though the assessment of human vascular condition depends on the positions of these key points. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
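
    A minimal sketch of the fitting machinery described above — weighted least squares on a sum of Gaussians — in Python with SciPy. The synthetic pulse, the per-sample weights, and the use of three Gaussians are illustrative; the paper selects weights by MCDM and an adaptive number (4 or 5) of waves:

      import numpy as np
      from scipy.optimize import least_squares

      t = np.linspace(0.0, 1.0, 200)

      def mg_model(p, t):
          """Sum of Gaussians; p = [amp, mu, sigma] repeated per wave."""
          y = np.zeros_like(t)
          for a, mu, s in p.reshape(-1, 3):
              y += a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
          return y

      # Synthetic "pulse waveform": three underlying waves plus noise
      true_p = np.array([1.0, 0.20, 0.05, 0.5, 0.45, 0.08, 0.3, 0.70, 0.10])
      y = mg_model(true_p, t) + np.random.default_rng(3).normal(0, 0.01, t.size)

      w = np.ones_like(t)
      w[(t > 0.1) & (t < 0.5)] = 3.0   # emphasize samples near key points

      p0 = np.array([0.8, 0.2, 0.1, 0.4, 0.5, 0.1, 0.2, 0.7, 0.1])
      fit = least_squares(lambda p: np.sqrt(w) * (mg_model(p, t) - y), p0)
      print(fit.x.reshape(-1, 3))      # fitted (amplitude, position, width) triples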

  4. A Statistical Guide to the Design of Deep Mutational Scanning Experiments

    PubMed Central

    Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia

    2016-01-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
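
    The core estimator behind such time-sampled bulk competitions can be shown in a few lines: the selection coefficient of a mutant is the slope of the log ratio of mutant to wild-type counts across sampled time points. A minimal Python sketch with hypothetical read counts:

      import numpy as np

      times = np.array([0.0, 10.0, 20.0, 30.0, 40.0])  # generations sampled
      mut_reads = np.array([1000, 820, 700, 560, 470])
      wt_reads = np.array([9000, 9180, 9300, 9440, 9530])

      log_ratio = np.log(mut_reads / wt_reads)
      s, intercept = np.polyfit(times, log_ratio, 1)   # slope = s per generation
      print(f"s = {s:.4f} per generation")             # negative: deleterious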

  5. Threat perception while viewing single intruder conflicts on a cockpit display of traffic information

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Palmer, E.

    1982-01-01

    Subjective estimates of the threat posed by a single intruder aircraft were determined by showing pilots photographs of a cockpit display of traffic information. The time the intruder was away from the point of minimum separation was found to be the major determinant of the perception of threat. When asked to choose a maneuver to reduce the conflict, pilots selected maneuvers with a bias toward those that would have kept the intruders in sight had they been visible out the cockpit window.

  6. HIITE: HIV-1 incidence and infection time estimator.

    PubMed

    Park, Sung Yong; Love, Tanzy M T; Kapoor, Shivankur; Lee, Ha Youn

    2018-06-15

    Around 2.1 million new HIV-1 infections were reported in 2015, a warning that the HIV-1 epidemic remains a significant global health challenge. Precise incidence assessment strengthens epidemic monitoring efforts and guides strategy optimization for prevention programs. Estimating the onset time of HIV-1 infection can facilitate optimal clinical management and identify key populations largely responsible for epidemic spread, and thereby infer HIV-1 transmission chains. Our goal is to develop a genomic assay estimating the incidence and infection time in a single cross-sectional survey setting. We created a web-based platform, the HIV-1 incidence and infection time estimator (HIITE), which processes envelope gene sequences using hierarchical clustering algorithms and reports the stage of infection, along with time since infection for incident cases. HIITE's performance was evaluated using 585 incident and 305 chronic specimens' envelope gene sequences collected from global cohorts, including HIV-1 vaccine trial participants. HIITE precisely identified chronically infected individuals as being chronic with an error of less than 1% and correctly classified 94% of recently infected individuals as being incident. Using a mixed-effect model, an incident specimen's time since infection was estimated from its single-lineage diversity, showing a 14% prediction error for time since infection. HIITE is the first algorithm to inform two key metrics from a single-time-point sequence sample. HIITE has the capacity to assess not only population-level epidemic spread but also individual-level transmission events from a single survey, advancing HIV prevention and intervention programs. Web-based HIITE and the source code of HIITE are available at http://www.hayounlee.org/software.html. Supplementary data are available at Bioinformatics online.

  7. Conceptual Model Evaluation using Advanced Parameter Estimation Techniques with Heat as a Tracer

    NASA Astrophysics Data System (ADS)

    Naranjo, R. C.; Morway, E. D.; Healy, R. W.

    2016-12-01

    Temperature measurements made at multiple depths beneath the sediment-water interface have proven useful for estimating seepage rates from surface-water channels and the corresponding subsurface flow direction. Commonly, parsimonious zonal representations of the subsurface structure are defined a priori by interpretation of temperature envelopes, slug tests, or analysis of soil cores. However, combining multiple observations into a single zone may limit the inverse model solution and does not take full advantage of the information content of the measured data. Further, simulating the correct thermal gradient, flow paths, and transient behavior of solutes may be biased by inadequacies in the spatial description of subsurface hydraulic properties. The use of pilot points in PEST offers a more sophisticated approach to estimating the structure of subsurface heterogeneity. This presentation evaluates seepage estimation in a cross-sectional model of a trapezoidal canal with intermittent flow, representing four typical sedimentary environments. Recent improvements in heat-as-a-tracer measurement techniques (e.g., multi-depth temperature probes), along with the use of modern calibration techniques (i.e., pilot points), provide opportunities for improved calibration of flow models and, subsequently, improved model predictions.

  8. Ruminal bacteria and protozoa composition, digestibility, and amino acid profile determined by multiple hydrolysis times.

    PubMed

    Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E

    2017-09-01

    Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in the postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when formalin is used as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  9. Estimating needle tip deflection in biological tissue from a single transverse ultrasound image: application to brachytherapy.

    PubMed

    Rossa, Carlos; Sloboda, Ron; Usmani, Nawaid; Tavakoli, Mahdi

    2016-07-01

    This paper proposes a method to predict the deflection of a flexible needle inserted into soft tissue based on the observation of deflection at a single point along the needle shaft. We model the needle-tissue system as a discretized structure composed of several virtual, weightless, rigid links connected by virtual helical springs, whose stiffness coefficient is found using a pattern search algorithm that requires only the force applied at the needle tip during insertion and the needle deflection measured at an arbitrary insertion depth. Needle tip deflections can then be predicted for different insertion depths. Verification of the proposed method in synthetic and biological tissue shows a deflection estimation error within 2 mm for images acquired at 35% or more of the maximum insertion depth, decreasing to 1 mm for images acquired closer to the final insertion depth. We also demonstrate the utility of the model for prostate brachytherapy, where in vivo needle deflection measurements obtained during early stages of insertion are used to predict the needle deflection further along the insertion process. The method can predict needle deflection based on the observation of deflection at a single point. The ultrasound probe can be maintained at the same position during insertion of the needle, which avoids complications of tissue deformation caused by motion of the ultrasound probe.

  10. The ground-truth problem for satellite estimates of rain rate

    NASA Technical Reports Server (NTRS)

    North, Gerald R.; Valdes, Juan B.; Ha, Eunho; Shen, Samuel S. P.

    1994-01-01

    In this paper a scheme is proposed for using a point raingage to compare contemporaneous measurements of rain rate with single-field-of-view (FOV) estimates from a satellite remote sensor such as a microwave radiometer. Even in the ideal case the measurements differ because one is made at a point and the other is an area average over the field of view. Also, the point gage will be located randomly inside the field of view on different overpasses. A space-time spectral formalism is combined with a simple stochastic rain field model to find the mean-square deviations between the two systems. It is found that by combining about 60 visits of the satellite to the ground-truth site, the expected error can be reduced to about 10% of the standard deviation of the fluctuations of the systems alone. This seems to be a useful level of tolerance in terms of isolating and evaluating typical biases that might be contaminating retrieval algorithms.
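
    The roughly tenfold error reduction from about 60 visits matches the familiar 1/sqrt(N) scaling of an average of independent comparisons; a short Python check (ignoring the space-time correlation structure that the paper's spectral formalism accounts for):

      import math

      for n in (1, 10, 30, 60):
          print(f"N = {n:3d} visits -> residual error ~ {1/math.sqrt(n):.0%} "
                "of the single-visit standard deviation")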

  11. A multi-calibrated mitochondrial phylogeny of extant Bovidae (Artiodactyla, Ruminantia) and the importance of the fossil record to systematics

    PubMed Central

    2013-01-01

    Background: Molecular phylogenetics has provided unprecedented resolution in the ruminant evolutionary tree. However, molecular age estimates using only one or a few (often misapplied) fossil calibration points have produced a diversity of conflicting ages for important evolutionary events within this clade. I here identify 16 fossil calibration points of relevance to the phylogeny of Bovidae and Ruminantia and use these, individually and together, to construct a dated molecular phylogeny through a reanalysis of the full mitochondrial genome of over 100 ruminant species. Results: The new multi-calibrated tree provides ages that are younger overall than found in previous studies. Among these are young ages for the origin of crown Ruminantia (39.3–28.8 Ma), and crown Bovidae (17.3–15.1 Ma). These are argued to be reasonable hypotheses given that many basal fossils assigned to these taxa may in fact lie on the stem groups leading to the crown clades, thus inflating previous age estimates. Areas of conflict between molecular and fossil dates do persist, however, especially with regard to the base of the rapid Pecoran radiation and the sister relationship of Moschidae to Bovidae. Results of the single-calibrated analyses also show that a very wide range of molecular age estimates is obtainable using different calibration points, and that the choice of calibration point can influence the topology of the resulting tree. Compared to the single-calibrated trees, the multi-calibrated tree exhibits smaller variance in estimated ages and better reflects the fossil record. Conclusions: The use of a large number of vetted fossil calibration points with soft bounds is promoted as a better approach than using just one or a few calibrations, or relying on internal-congruency metrics to discard good fossil data. This study also highlights the importance of considering morphological and ecological characteristics of clades when delimiting higher taxa. I also illustrate how phylogeographic and paleoenvironmental hypotheses inferred from a tree containing only extant taxa can be problematic without consideration of the fossil record. Incorporating the fossil record of Ruminantia is a necessary step for future analyses aiming to reconstruct the evolutionary history of this clade. PMID:23927069

  12. A revised timescale for human evolution based on ancient mitochondrial genomes

    PubMed Central

    Fu, Qiaomei; Mittnik, Alissa; Johnson, Philip L.F.; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G.; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes

    2016-01-01

    Background: Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Results: Here we use mitochondrial genome sequences from 10 securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) less than 62,000-95,000 years ago. Conclusions: Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population split times, they can provide valid upper bounds; our results exclude most of the older dates for African and non-African split times recently suggested by de novo mutation rate estimates in the nuclear genome. PMID:23523248

  13. A revised timescale for human evolution based on ancient mitochondrial genomes.

    PubMed

    Fu, Qiaomei; Mittnik, Alissa; Johnson, Philip L F; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes

    2013-04-08

    Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Here, we use mitochondrial genome sequences from ten securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) that occurred less than 62-95 kya. Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population divergence times, they can provide valid upper bounds. Our results exclude most of the older dates for African and non-African population divergences recently suggested by de novo mutation rate estimates in the nuclear genome. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Calibration of a polarimetric imaging SAR

    NASA Technical Reports Server (NTRS)

    Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.

    1991-01-01

    Calibration of polarimetric imaging Synthetic Aperture Radars (SARs) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.

  15. Limited sampling strategy for determining metformin area under the plasma concentration–time curve

    PubMed Central

    Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José; Christensen, Mette Marie Hougaard; Brosen, Kim

    2016-01-01

    Aim: The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. Methods: Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all-subsets linear regression analysis. The LSS-derived AUC(0,24 h) was compared with the parameter 'best estimate' obtained by non-compartmental analysis using all plasma concentration data points. The correlation between the LSS-derived and best-estimated AUC(0,24 h) (r^2), and the bias and precision of the LSS estimates, were quantified. The LSS models were validated in independent cohorts. Results: A two-point (3 h and 10 h) regression equation with no intercept accurately estimated the individual AUC(0,24 h) in the development cohort: r^2 = 0.927, bias -0.5% (95% CI: -2.7 to 1.8%) and precision 6.3% (4.9 to 7.7%). The accuracy of the two-point LSS model was verified in study cohorts of individuals receiving single 500 or 1000 mg doses (r^2 = 0.933-0.934) or seven 1000 mg daily doses (r^2 = 0.918), as well as using data from 16 published studies covering a wide range of metformin doses, demographics, and clinical and experimental conditions (r^2 = 0.976). The LSS model reproduced previously reported results for the effects of polymorphisms in the OCT2 and MATE1 genes on AUC(0,24 h) and renal clearance of metformin. Conclusions: The two-point LSS algorithm may be used to assess systemic exposure to metformin under diverse conditions, with reduced costs of sampling and analysis, saving time for both subjects and investigators. PMID:27324407
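
    Developing a two-point LSS of this kind amounts to a no-intercept regression of the full-data AUC on the two sampled concentrations. A minimal Python sketch; the concentrations and coefficients are simulated placeholders, not the study's values:

      import numpy as np

      rng = np.random.default_rng(4)
      n = 50
      c3 = rng.uniform(0.5, 2.5, n)    # plasma concentration at 3 h (mg/L)
      c10 = rng.uniform(0.2, 1.5, n)   # plasma concentration at 10 h (mg/L)
      auc = 6.0 * c3 + 10.0 * c10 + rng.normal(0, 0.5, n)  # "best estimate" AUC

      X = np.column_stack([c3, c10])
      coef, *_ = np.linalg.lstsq(X, auc, rcond=None)       # no-intercept fit
      pred = X @ coef

      r2 = np.corrcoef(pred, auc)[0, 1] ** 2
      bias = np.mean((pred - auc) / auc)
      print(coef, f"r^2 = {r2:.3f}, bias = {bias:+.1%}")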

  16. densityCut: an efficient and versatile topological approach for automatic clustering of biological data

    PubMed Central

    Ding, Jiarui; Shah, Sohrab; Condon, Anne

    2016-01-01

    Motivation: Many biological data processing problems can be formalized as clustering problems to partition data points into sensible and biologically interpretable groups. Results: This article introduces densityCut, a novel density-based clustering algorithm, which is both time- and space-efficient and proceeds as follows: densityCut first roughly estimates the densities of data points from a K-nearest neighbour graph and then refines the densities via a random walk. A cluster consists of points falling into the basin of attraction of an estimated mode of the underlying density function. A post-processing step merges clusters and generates a hierarchical cluster tree. The number of clusters is selected from the most stable clustering in the hierarchical cluster tree. Experimental results on ten synthetic benchmark datasets and two microarray gene expression datasets demonstrate that densityCut performs better than state-of-the-art algorithms for clustering biological datasets. For applications, we focus on recent cancer mutation clustering and single-cell data analyses, namely clustering variant allele frequencies of somatic mutations to reveal the clonal architectures of individual tumours, clustering single-cell gene expression data to uncover cell population compositions, and clustering single-cell mass cytometry data to detect communities of cells of the same functional states or types. densityCut performs better than competing algorithms and is scalable to large datasets. Availability and Implementation: Data and the densityCut R package are available from https://bitbucket.org/jerry00/densitycut_dev. Contact: condon@cs.ubc.ca or sshah@bccrc.ca or jiaruid@cs.ubc.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153661
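
    The first stage of a densityCut-style pipeline — a rough density for each point from K-nearest-neighbour distances, before the random-walk refinement — can be sketched in Python with SciPy; the two-cluster data are synthetic:

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(5)
      pts = np.vstack([rng.normal(0, 0.3, (100, 2)),
                       rng.normal(2, 0.3, (100, 2))])  # two synthetic clusters

      k = 10
      tree = cKDTree(pts)
      dist, _ = tree.query(pts, k=k + 1)           # first neighbour is the point itself
      r = dist[:, -1]                              # distance to kth neighbour
      density = k / (pts.shape[0] * np.pi * r**2)  # kNN density, 2-D ball volume
      print(density.max() / density.min())         # cluster cores vs tails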

  17. The behavioral economics of drug self-administration: A review and new analytical approach for within-session procedures

    PubMed Central

    Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary

    2012-01-01

    Rationale Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical approach of curve fitting has not been reported for the within-session threshold procedure. Objectives We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement with graphical Pmax than conventional methods. Conclusion The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
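
    A minimal fitting sketch, assuming the Hursh-Silberberg form of the exponential demand equation the abstract refers to, with invented price and consumption values:

      import numpy as np
      from scipy.optimize import curve_fit

      # Exponential demand: log10 Q = log10 Q0 + k*(exp(-alpha*Q0*C) - 1),
      # where Q is consumption, C unit price, Q0 consumption at zero price,
      # alpha the elasticity parameter; k (span of log10 consumption) is
      # commonly fixed. All numbers below are illustrative only.
      K = 3.0

      def demand(C, Q0, alpha):
          return np.log10(Q0) + K * (np.exp(-alpha * Q0 * C) - 1.0)

      price = np.array([3, 10, 32, 100, 320, 1000], float)       # responses/mg
      consumption = np.array([1.1, 1.0, 0.9, 0.6, 0.25, 0.08])   # mg/session

      (q0, alpha), _ = curve_fit(demand, price, np.log10(consumption),
                                 p0=(1.0, 1e-4), maxfev=10000)
      print(f"Q0 = {q0:.3f} mg, alpha = {alpha:.2e}")
      # Pmax (the price of maximal responding) is then located where the
      # fitted curve has slope -1 in log-log coordinates.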

  18. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surfzone to the shoreline undergoes frequent, active change due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar to inversion techniques based on standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and differs in spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can have temporal and spatial resolutions that do not match those desired by the model and therefore could reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements from sonic altimeters.
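
    For orientation, the analysis step of a stochastic Ensemble Kalman Filter has a compact generic form; this is a textbook sketch with an assumed linear observation operator, not the authors' framework:

      import numpy as np

      def enkf_update(X, H, y, R, rng=None):
          """One stochastic EnKF analysis step.
          X: (n_state, n_ens) ensemble of bathymetry states
          H: (n_obs, n_state) linear map from depth to gridded wave observations
          y: (n_obs,) observed wave parameters;  R: observation-error covariance
          """
          rng = np.random.default_rng(0) if rng is None else rng
          n_ens = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
          P = A @ A.T / (n_ens - 1)                      # sample covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
          # Perturbed observations keep the analysis ensemble spread consistent.
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
          return X + K @ (Y - H @ X)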

  19. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    NASA Astrophysics Data System (ADS)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized due to their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally by their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially interpolating 100-year dew point values rather than a more gauge-based approach. Site-specific reviews demonstrated that both issues had potential for lowering the PMP estimate significantly by affecting the in-place and transposed moisture maximization value and, in turn, the final controlling storm for a given basin size and PMP estimate.

  20. Investigating the relationship between neighborhood poverty and mortality risk: A marginal structural modeling approach

    PubMed Central

    Do, D. Phuong; Wang, Lu; Elliott, Michael R.

    2013-01-01

    Extant observational studies generally support the existence of a link between neighborhood context and health. However, estimating the causal impact of neighborhood effects from observational data has proven to be a challenge. Omission of relevant factors may lead to overestimating the effects of neighborhoods on health while inclusion of time-varying confounders that may also be mediators (e.g., income, labor force status) may lead to underestimation. Using longitudinal data from the 1990 to 2007 years of the Panel Study of Income Dynamics, this study investigates the link between neighborhood poverty and overall mortality risk. A marginal structural modeling strategy is employed to appropriately adjust for simultaneous mediating and confounding factors. To address the issue of possible upward bias from the omission of key variables, sensitivity analysis to assess the robustness of results against unobserved confounding is conducted. We examine two continuous measures of neighborhood poverty – single-point and a running average. Both were specified as piece-wise linear splines with a knot at 20 percent. We found no evidence from the traditional naïve strategy that neighborhood context influences mortality risk. In contrast, for both the single-point and running average neighborhood poverty specifications, the marginal structural model estimates indicated a statistically significant increase in mortality risk with increasing neighborhood poverty above the 20 percent threshold. For example, below 20 percent neighborhood poverty, no association was found. However, after the 20 percent poverty threshold is reached, each 10 percentage point increase in running average neighborhood poverty was found to increase the odds for mortality by 89 percent [95% CI = 1.22, 2.91]. Sensitivity analysis indicated that estimates were moderately robust to omitted variable bias. PMID:23849239

  1. Fear Perceptions in Public Parks: Interactions of Environmental Concealment, the Presence of People Recreating, and Gender

    ERIC Educational Resources Information Center

    Jorgensen, Lisa J.; Ellis, Gary D.; Ruddell, Edward

    2013-01-01

    This research examined the effect of concealment (environmental cues), presence or absence of people recreating (social cues), and gender on individuals' fear of crime in a community park setting. Using a 7-point single-item indicator, 732 participants from two samples (540 park visitors and 192 college students) rated their estimates of fear of…

  2. Estimating the spatial position of marine mammals based on digital camera recordings

    PubMed Central

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-01-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ∼3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
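
    A stripped-down version of the underlying geometry, assuming a flat sea surface and a calibrated pixel scale (the published method additionally corrects for frame rotation, uses landscape reference points, and handles frames without a visible horizon):

      import math

      def sighting_position(obs_lat, obs_lon, cam_height_m,
                            pixels_below_horizon, rad_per_pixel, bearing_deg):
          """Geographic position of a sighting from one frame; all
          parameter names here are assumptions for illustration."""
          depression = pixels_below_horizon * rad_per_pixel
          dist_m = cam_height_m / math.tan(depression)   # flat-earth range
          d_north = dist_m * math.cos(math.radians(bearing_deg))
          d_east = dist_m * math.sin(math.radians(bearing_deg))
          lat = obs_lat + d_north / 111_320.0            # ~m per deg latitude
          lon = obs_lon + d_east / (111_320.0 * math.cos(math.radians(obs_lat)))
          return lat, lon, dist_m

      # Camera at 9.59 m; a sighting 30 pixels below the horizon at 0.1 mrad
      # per pixel lands near the ~3 km maximum detection range reported above.
      print(sighting_position(53.0, 4.8, 9.59, 30, 1e-4, 220.0))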

  3. Public safety answering point readiness for wireless E-911 in New York State.

    PubMed

    Bailey, Bob W; Scott, Jay M; Brown, Lawrence H

    2003-01-01

    To determine the level of wireless enhanced 911 readiness among New York's primary public safety answering points. This descriptive study utilized a simple, single-page survey that was distributed in August 2001, with telephone follow-up concluding in January 2002. Surveys were distributed to directors of the primary public safety answering points in each of New York's 62 counties. Information was requested regarding current readiness for providing wireless enhanced 911 service, hardware and software needs for implementing the service, and the estimated costs for obtaining the necessary hardware and software. Two directors did not respond and could not be contacted by telephone; three declined participation; one did not operate an answering point; and seven provided incomplete responses, resulting in usable data from 49 (79%) of the state's public safety answering points. Only 27% of the responding public safety answering points were currently wireless enhanced 911 ready. Specific needs included obtaining or upgrading computer systems (16%), computer-aided dispatch systems (53%), mapping software (71%), telephone systems (27%), and local exchange carrier trunk lines (42%). The total estimated hardware and software cost for achieving wireless enhanced 911 readiness was between $16 million and $20 million. New York's primary public safety answering points are not currently ready to provide wireless enhanced 911 service, and the cost of achieving readiness could be as high as $20 million.

  4. Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor

    NASA Astrophysics Data System (ADS)

    Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.

    2017-05-01

    This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
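
    For readers unfamiliar with the WPA model, the per-axis discretized transition and process-noise matrices are standard (e.g. Bar-Shalom et al.); the frame interval and jerk intensity below are placeholders:

      import numpy as np

      def wpa_matrices(T, q_tilde):
          """Transition and process-noise matrices for the continuous-time
          Wiener process acceleration model, state [position, velocity,
          acceleration]; q_tilde is the white-jerk power spectral density."""
          F = np.array([[1.0, T, T**2 / 2],
                        [0.0, 1.0, T],
                        [0.0, 0.0, 1.0]])
          Q = q_tilde * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                                  [T**4 / 8,  T**3 / 3, T**2 / 2],
                                  [T**3 / 6,  T**2 / 2, T]])
          return F, Q

      F, Q = wpa_matrices(T=0.1, q_tilde=1.0)   # e.g. 10 Hz focal-plane frames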

  5. User's manual for the Graphical Constituent Loading Analysis System (GCLAS)

    USGS Publications Warehouse

    Koltun, G.F.; Eberle, Michael; Gray, J.R.; Glysson, G.D.

    2006-01-01

    This manual describes the Graphical Constituent Loading Analysis System (GCLAS), an interactive cross-platform program for computing the mass (load) and average concentration of a constituent that is transported in stream water over a period of time. GCLAS computes loads as a function of an equal-interval streamflow time series and an equal- or unequal-interval time series of constituent concentrations. The constituent-concentration time series may be composed of measured concentrations or a combination of measured and estimated concentrations. GCLAS is not intended for use in situations where concentration data (or an appropriate surrogate) are collected infrequently or where an appreciable amount of the concentration values are censored. It is assumed that the constituent-concentration time series used by GCLAS adequately represents the true time-varying concentration. Commonly, measured constituent concentrations are collected at a frequency that is less than ideal (from a load-computation standpoint), so estimated concentrations must be inserted in the time series to better approximate the expected chemograph. GCLAS provides tools to facilitate estimation and entry of instantaneous concentrations for that purpose. Water-quality samples collected for load computation frequently are collected in a single vertical or at a single point in a stream cross section. Several factors, some of which may vary as a function of time and (or) streamflow, can affect whether the sample concentrations are representative of the mean concentration in the cross section. GCLAS provides tools to aid the analyst in assessing whether concentrations in samples collected in a single vertical or at a single point in a stream cross section exhibit systematic bias with respect to the mean concentrations. In cases where bias is evident, the analyst can construct coefficient relations in GCLAS to reduce or eliminate the observed bias. GCLAS can export load and concentration data in formats suitable for entry into the U.S. Geological Survey's National Water Information System. GCLAS can also import and export data in formats that are compatible with various commonly used spreadsheet and statistics programs.
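
    The core computation GCLAS automates can be illustrated in a few lines; the series below are invented, and 0.0027 is the customary constant converting (ft3/s)·(mg/L) to tons/day:

      import numpy as np

      def trapz(y, x):
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      t_hours = np.arange(0.0, 25.0, 1.0)                    # equal-interval series
      Q_cfs = 500 + 200 * np.sin(2 * np.pi * t_hours / 24)   # streamflow
      C_mgL = 12 + 4 * np.cos(2 * np.pi * t_hours / 24)      # measured + estimated

      inst_load = 0.0027 * Q_cfs * C_mgL                  # tons/day, instantaneous
      daily_load_tons = trapz(inst_load, t_hours / 24.0)  # integrate over one day
      flow_weighted_conc = trapz(Q_cfs * C_mgL, t_hours) / trapz(Q_cfs, t_hours)
      print(daily_load_tons, flow_weighted_conc)          # load and mean conc.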

  6. Estimation of the laser cutting operating cost by support vector regression methodology

    NASA Astrophysics Data System (ADS)

    Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam

    2016-09-01

    Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position, as well as the workpiece material. In this article, the process factors investigated were laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. The intelligent soft computing scheme support vector regression (SVR) was implemented. The performance of the proposed estimator was confirmed with simulation results. The SVR results were then compared with artificial neural networks and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through SVR than with the other soft computing methodologies. The new optimization methods benefit from soft computing capabilities for global and multiobjective optimization, rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
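
    A minimal SVR estimator over the factors named above, sketched with scikit-learn on random placeholder data (the study's data and tuning are not reproduced):

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      # Columns: laser power (W), cutting speed (m/min), gas pressure (bar),
      # focal point position (mm), material thickness (mm). Invented data.
      rng = np.random.default_rng(42)
      X = rng.uniform([1000, 1.0, 0.5, -2.0, 1.0],
                      [4000, 6.0, 1.5, 2.0, 6.0], size=(60, 5))
      y = 0.02 * X[:, 0] / X[:, 1] + 5 * X[:, 2] + rng.normal(0, 2, 60)  # cost

      model = make_pipeline(StandardScaler(),
                            SVR(kernel="rbf", C=100.0, epsilon=0.5))
      print("CV r2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
      model.fit(X, y)
      print("predicted cost:", model.predict([[2500, 3.0, 1.0, 0.0, 3.0]]))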

  7. A minimalist approach to bias estimation for passive sensor measurements with targets of opportunity

    NASA Astrophysics Data System (ADS)

    Belfadel, Djedjiga; Osborne, Richard W.; Bar-Shalom, Yaakov

    2013-09-01

    In order to carry out data fusion, registration error correction is crucial in multisensor systems. This requires estimation of the sensor measurement biases. It is important to correct for these bias errors so that the multiple sensor measurements and/or tracks can be referenced as accurately as possible to a common tracking coordinate system. This paper provides a solution for bias estimation for the minimum number of passive sensors (two), when only targets of opportunity are available. The sensor measurements are assumed time-coincident (synchronous) and perfectly associated. Since these sensors provide only line of sight (LOS) measurements, the formation of a single composite Cartesian measurement obtained from fusing the LOS measurements from different sensors is needed to avoid the need for nonlinear filtering. We evaluate the Cramer-Rao Lower Bound (CRLB) on the covariance of the bias estimate, i.e., the quantification of the available information about the biases. Statistical tests on the results of simulations show that this method is statistically efficient, even for small sample sizes (as few as two sensors and six points on the trajectory of a single target of opportunity). We also show that the RMS position error is significantly improved with bias estimation compared with the target position estimation using the original biased measurements.

  8. Fusion of Cross-Track TerraSAR-X PS Point Clouds over Las Vegas

    NASA Astrophysics Data System (ADS)

    Wang, Ziyun; Balz, Timo; Wei, Lianhuan; Liao, Mingsheng

    2014-11-01

    Persistent scatterer interferometry (PS-InSAR) is widely used in radar remote sensing. However, because the surface motion is estimated in the line-of-sight (LOS) direction, it is not possible to differentiate between vertical and horizontal surface motions from a single stack. Cross-track data, i.e. the combination of data from ascending and descending orbits, allows us to better analyze the deformation and to obtain 3D motion information. We implemented a cross-track fusion of PS-InSAR point cloud data, making it possible to separate the vertical and horizontal components of the surface motion.
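
    Per fused point, the decomposition reduces to a small linear system; the sketch below is generic (it assumes negligible north-south motion, as is common for near-polar SAR geometries, and look-vector sign conventions vary between processors):

      import numpy as np

      def decompose_los(d_asc, d_desc, inc_asc, inc_desc, head_asc, head_desc):
          """Recover east-west and vertical motion from ascending and
          descending LOS displacements (angles in radians)."""
          # Rows: projection of (east, up) motion onto each LOS direction.
          A = np.array([
              [-np.sin(inc_asc) * np.cos(head_asc),   np.cos(inc_asc)],
              [-np.sin(inc_desc) * np.cos(head_desc), np.cos(inc_desc)],
          ])
          d_east, d_up = np.linalg.solve(A, np.array([d_asc, d_desc]))
          return d_east, d_up

      # e.g. 5 mm/yr ascending and -2 mm/yr descending LOS rates:
      print(decompose_los(5.0, -2.0, np.radians(34), np.radians(41),
                          np.radians(-10), np.radians(190)))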

  9. Resolution Measurement from a Single Reconstructed Cryo-EM Density Map with Multiscale Spectral Analysis.

    PubMed

    Yang, Yu-Jiao; Wang, Shuai; Zhang, Biao; Shen, Hong-Bin

    2018-06-25

    As a relatively new technology for solving the three-dimensional (3D) structure of a protein or protein complex, single-particle reconstruction (SPR) of cryogenic electron microscopy (cryo-EM) images offers clear advantages and is developing rapidly. Resolution measurement in SPR, which evaluates the quality of a reconstructed 3D density map, plays a critical role in promoting methodology development of SPR and structural biology. Because there is no benchmark map in the generation of a new structure, how to estimate the resolution of a new map remains an open problem. Existing approaches try to generate a hypothetical benchmark map by reconstructing two 3D models from two halves of the original 2D images for cross-reference, which may result in a premature estimation with a half-data model. In this paper, we report a new self-reference-based resolution estimation protocol, called SRes, that requires only a single reconstructed 3D map. The core idea of SRes is to perform a multiscale spectral analysis (MSSA) on the map through multiple size-variable masks segmenting the map. The MSSA-derived multiscale spectral signal-to-noise ratios (mSSNRs) reveal that their corresponding estimated resolutions show a cliff-jump phenomenon, indicating a significant change in the SSNR properties. The critical point on the cliff borderline is demonstrated to be the right estimator for the resolution of the map.

  10. A Statistical Guide to the Design of Deep Mutational Scanning Experiments.

    PubMed

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia

    2016-09-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.

  11. Flux estimation of the FIFE planetary boundary layer (PBL) with 10.6 micron Doppler lidar

    NASA Technical Reports Server (NTRS)

    Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn

    1990-01-01

    A method is devised for calculating wind, momentum, and other flux parameters that characterize the planetary boundary layer (PBL) and thereby facilitate the calibration of spaceborne vs. in situ flux estimates. Single Doppler lidar data are used to estimate the variance of the mean wind and the covariance related to the vertically pointing fluxes of horizontal momentum. The skewness of the vertical velocity and the range of kinetic energy dissipation are also estimated, and the surface heat flux is determined by means of a statistical Navier-Stokes equation. The conclusion shows that the PBL structure combines both 'bottom-up' and 'top-down' processes suggesting that the relevant parameters for the atmospheric boundary layer be revised. The conclusions are of significant interest to the modeling techniques used in General Circulation Models as well as to flux estimation.

  12. Scanning tunneling spectroscopy and Dirac point resonances due to a single Co adatom on gated graphene

    NASA Astrophysics Data System (ADS)

    Saffarzadeh, Alireza; Kirczenow, George

    2012-06-01

    Based on the standard tight-binding model of the graphene π-band electronic structure, the extended Hückel model for the adsorbate and graphene carbon atoms, and spin splittings estimated from density functional theory (DFT), the Dirac point resonances due to a single cobalt atom on graphene are studied. The relaxed geometry of the magnetic adsorbate and the graphene is calculated using DFT. The system shows strong spin polarization in the vicinity of the graphene Dirac point energy for all values of the gate voltage, due to the spin splitting of Co 3d orbitals. We also model the differential conductance spectra for this system that have been measured in the scanning tunneling microscopy (STM) experiments of Brar et al. [Nat. Phys. 7, 43 (2011); doi:10.1038/nphys1807]. We interpret the experimentally observed behavior of the S-peak in the STM differential conductance spectrum as evidence of tunneling between the STM tip and a cobalt-induced Dirac point resonant state of the graphene, via a Co 3d orbital. The cobalt ionization state, which is determined by the energy position of the resonance, can be tuned by gate voltage, similar to that seen in the experiment.

  13. Triana Safehold: A New Gyroless, Sun-Pointing Attitude Controller

    NASA Technical Reports Server (NTRS)

    Chen, J.; Morgenstern, Wendy; Garrick, Joseph

    2001-01-01

    Triana is a single-string spacecraft to be placed in a halo orbit about the sun-earth L1 Lagrangian point. The Attitude Control Subsystem (ACS) hardware includes four reaction wheels, ten thrusters, six coarse sun sensors, a star tracker, and a three-axis Inertial Measuring Unit (IMU). The ACS Safehold design features a gyroless sun-pointing control scheme using only sun sensors and wheels. With this minimum hardware approach, Safehold increases mission reliability in the event of a gyroscope anomaly. In place of the gyroscope rate measurements, Triana Safehold uses wheel tachometers to help provide a scaled estimation of the spacecraft body rate about the sun vector. Since Triana nominally performs momentum management every three months, its accumulated system momentum can reach a significant fraction of the wheel capacity. It is therefore a requirement for Safehold to maintain a sun-pointing attitude even when the spacecraft system momentum is reasonably large. The tachometer sun-line rate estimation enables the controller to bring the spacecraft close to its desired sun-pointing attitude even with reasonably high system momentum and wheel drags. This paper presents the design rationale behind this gyroless controller, stability analysis, and some time-domain simulation results showing performances with various initial conditions. Finally, suggestions for future improvements are briefly discussed.

  14. Simultaneous narrowband ultrasonic strain-flow imaging

    NASA Astrophysics Data System (ADS)

    Tsou, Jean K.; Mai, Jerome J.; Lupotti, Fermin A.; Insana, Michael F.

    2004-04-01

    We summarize new research aimed at forming spatially and temporally registered combinations of strain and color-flow images using echo data recorded from a commercial ultrasound system. Applications include diagnosis of vascular diseases and tumor malignancies. The challenge is to meet the diverse needs of each measurement. The approach is to first apply eigenfilters that separate echo components from moving tissues and blood flow, and then estimate blood velocity and tissue displacement from the filtered-IQ-signal phase modulations. At the cost of a lower acquisition frame rate, we find the autocorrelation strain estimator yields a higher-resolution strain estimate than the cross-correlator, since estimates are made from ensembles at a single point in space. The technique is applied to in vivo carotid imaging to demonstrate its sensitivity for strain-flow vascular imaging.

  15. Estimating the Benefits of the Air Force Purchasing and Supply Chain Management Initiative

    DTIC Science & Technology

    2008-01-01

    sector, known as strategic sourcing. The Customer Relationship Management (CRM) initiative provides a single customer point of contact for all... Customer Relationship Management initiative. commodity council: a term used to describe a cross-functional sourcing group charged with formulating a... initiative has four major components, all based on commercial best practices (Gabreski, 2004): commodity councils, customer relationship management

  16. Making a Way to Success: Self-Authorship and Academic Achievement of First-Year African American Students at Historically Black Colleges

    ERIC Educational Resources Information Center

    Strayhorn, Terrell L.

    2014-01-01

    The purpose of the study was to estimate the relationship between academic achievement in college, as defined by first-year grade point average (GPA), and self-authorship among African American first-year students at an HBCU (N = 140), using hierarchical linear regression techniques. A single research question guided this investigation: What is…

  17. Significant Discrepancy Between Estimated and Actual Longevity in St. Jude Medical Implantable Cardioverter-Defibrillators.

    PubMed

    Doppalapudi, Harish; Barrios, James; Cuellar, Jose; Gannon, Melanie; Yamada, Takumi; Kumar, Vineet; Maddox, William R; Plumb, Vance J; Brown, Todd M; McElderry, H Tom

    2017-05-01

    Real-time estimated longevity has been reported in pacemakers for several years, and was recently introduced in implantable cardioverter-defibrillators (ICDs). We sought to evaluate the accuracy of this longevity estimate in St. Jude Medical (SJM) ICDs, especially as the device battery approaches depletion. Among patients with SJM ICDs who underwent generator replacements due to reaching elective replacement indicator (ERI) at our institution, we identified those with devices that provided longevity estimates and reviewed their device interrogations in the 18 months prior to ERI. Significant discrepancy was defined as a difference of more than 12 months between estimated and actual longevity at any point during this period. Forty-six patients with Current/Promote devices formed the study group (40 cardiac resynchronization therapy [CRT] and 6 single/dual chamber). Of these, 34 (74%) had significant discrepancy between estimated and actual longevity (28 CRT and all single/dual). Longevity was significantly overestimated by the device algorithm (mean maximum discrepancy of 18.8 months), more in single/dual than CRT devices (30.5 vs. 17.1 months). Marked discrepancy was seen at voltages ≥2.57 volts, with maximum discrepancy at 2.57 volts (23 months). The overall longevity was higher in the discrepant group of CRT devices than in the nondiscrepant group (67 vs. 61 months, log-rank P = 0.03). There was significant overestimation of longevity in nearly three-fourths of Current/Promote SJM ICDs in the last 18 months prior to ERI. Longevity estimates of SJM ICDs may not be reliable for making clinical decisions on frequency of follow-up, as the battery approaches depletion. © 2017 Wiley Periodicals, Inc.

  18. Identification of bearing faults using time domain zero-crossings

    NASA Astrophysics Data System (ADS)

    William, P. E.; Hoffman, M. W.

    2011-11-01

    In this paper, zero-crossing characteristic features are employed for early detection and identification of single point bearing defects in rotating machinery. As a result of bearing defects, characteristic defect frequencies appear in the machine vibration signal, normally requiring spectral analysis or envelope analysis to identify the defect type. Zero-crossing features are extracted directly from the time domain vibration signal using only the duration between successive zero-crossing intervals and do not require estimation of the rotational frequency. The features are a time domain representation of the composite vibration signature in the spectral domain. Features are normalized by the length of the observation window and classification is performed using a multilayer feedforward neural network. The model was evaluated on vibration data recorded using an accelerometer mounted on an induction motor housing subjected to a number of single point defects with different severity levels.
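
    A sketch of this style of feature extraction (the paper's exact feature definitions may differ):

      import numpy as np

      def zero_crossing_features(x, fs, n_bins=16):
          """Durations between successive zero crossings, normalized by the
          observation window and summarized as a fixed-length histogram
          feature vector for a neural-network classifier."""
          x = x - x.mean()                                # remove DC offset
          zc = np.where(np.diff(np.signbit(x).astype(np.int8)))[0]
          intervals = np.diff(zc) / fs                    # seconds per interval
          norm = intervals / (len(x) / fs)                # normalize by window
          hist, _ = np.histogram(norm, bins=n_bins,
                                 range=(0, norm.max() + 1e-12))
          return hist / max(hist.sum(), 1)

      fs = 12_000                                         # e.g. 12 kHz data
      t = np.arange(0, 1.0, 1 / fs)
      rng = np.random.default_rng(1)
      sig = np.sin(2 * np.pi * 157 * t) + 0.3 * rng.normal(size=t.size)
      print(zero_crossing_features(sig, fs))

    Note that, as the abstract stresses, nothing in this pipeline requires an estimate of the rotational frequency.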

  19. A single 24 h recall overestimates exclusive breastfeeding practices among infants aged less than six months in rural Ethiopia.

    PubMed

    Fenta, Esete Habtemariam; Yirgu, Robel; Shikur, Bilal; Gebreyesus, Seifu Hagos

    2017-01-01

    Exclusive breastfeeding (EBF) to six months is one of the World Health Organization's (WHO's) infant and young child feeding (IYCF) core indicators. The single 24 h recall method is currently used to measure exclusive breastfeeding practice among children of age less than six months. This approach overestimates the prevalence of EBF, especially among small population groups, which justifies the need for alternative measurement techniques that yield a valid estimate regardless of population characteristics. The study involved 422 infants of age less than six months, living in Gurage zone, Southern Ethiopia. The study was conducted from January to February 2016. Child feeding practices were measured for seven consecutive days using the 24 h recall method. Recall since birth was used to measure breastfeeding practices from birth to the day of data collection. Data on EBF obtained by using single 24 h recall were compared with the seven-day repeated 24 h recall method. McNemar's test was done to assess whether a significant difference existed in rates of EBF between measurement methods. The mean age of infants was 3 months (SD 1.43). Exclusive breastfeeding prevalence was highest (76.7%; 95% CI 72.6, 80.8) when EBF was estimated using single 24 h recall. The prevalence of EBF based on seven repeated 24 h recalls was 53.2% (95% CI 48.3, 58.0). The estimated prevalence of EBF since birth based on retrospective data (recall since birth) was 50.2% (95% CI 45.4, 55.1). Compared to the EBF estimates obtained from seven repeated 24 h recalls, single 24 h recall overestimated EBF magnitude by 23 percentage points (95% CI 19.2, 27.8). As the number of days of 24 h recall increased, a significant decrease in overestimation of EBF was observed. A significant overestimation was observed when single 24 h recall was used to estimate the prevalence of EBF compared to seven days of 24 h recall. By increasing the observation days we can significantly decrease the degree of overestimation. Recall since birth presented estimates of EBF that are close to seven repeated 24 h recalls. This suggests that a one-week recall could be an alternative indicator to the single 24 h recall.

  20. RBS, XRR and optical reflectivity measurements of Ti-TiO{sub 2} thin films deposited by magnetron sputtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drogowska, K.; Tarnawski, Z., E-mail: tarnawsk@agh.edu.pl

    2012-02-15

    Highlights: • Single-, bi- and tri-layered films of Ti-TiO2 were deposited onto Si(111) substrates. • Three methods (RBS, XRR and optical reflectometry) were used. • The real thickness of each layer was smaller than 50 nm. • Ti and TiO2 film densities were slightly lower than the corresponding bulk values. -- Abstract: Single-, bi- and tri-layered films of the Ti-TiO2 system were deposited by d.c. pulsed magnetron sputtering from a metallic Ti target in an inert Ar or reactive Ar + O2 atmosphere. The nominal thickness of each layer was 50 nm. The chemical composition and its depth profile were determined by Rutherford backscattering spectroscopy (RBS). Crystallographic structure was analysed by means of X-ray diffraction (XRD) at glancing incidence. X-ray reflectometry (XRR) was used as a complementary method for film thickness and density evaluation. Modelling of the optical reflectivity spectra of Ti-TiO2 thin films deposited onto Si(111) substrates provided an independent estimate of the layer thickness. The combined analysis of RBS, XRR and reflectivity spectra indicated that the real thickness of each layer was less than 50 nm, with TiO2 film density slightly lower than the corresponding bulk value. Scanning Electron Microscopy (SEM) cross-sectional images revealed the columnar growth of TiO2 layers. Thickness estimated directly from SEM studies was found to be in good agreement with the results of RBS, XRR and reflectivity spectra.

  1. The detection of carbon dioxide leaks using quasi-tomographic laser absorption spectroscopy measurements in variable wind

    DOE PAGES

    Levine, Zachary H.; Pintar, Adam L.; Dobler, Jeremy T.; ...

    2016-04-13

    Laser absorption spectroscopy (LAS) has been used over the last several decades for the measurement of trace gases in the atmosphere. For over a decade, LAS measurements from multiple sources and tens of retroreflectors have been combined with sparse-sample tomography methods to estimate the 2-D distribution of trace gas concentrations and underlying fluxes from point-like sources. In this work, we consider the ability of such a system to detect and estimate the position and rate of a single point leak which may arise as a failure mode for carbon dioxide storage. The leak is assumed to be at a constant rate, giving rise to a plume with a concentration and distribution that depend on the wind velocity. Lastly, we demonstrate the ability of our approach to detect a leak using numerical simulation and also present a preliminary measurement.
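
    The constant-rate point leak described above is often idealized as a Gaussian plume; a textbook sketch (not the authors' retrieval model), with crude near-neutral dispersion assumed:

      import numpy as np

      def plume_concentration(x, y, z, Q, u, H=0.0, a=0.08, b=0.06):
          """Gaussian plume from a steady point source: Q in kg/s, wind u in
          m/s along +x, release height H in m; sigma_y = a*x, sigma_z = b*x
          are rough dispersion assumptions."""
          x = np.maximum(x, 1e-3)            # model is valid only downwind
          sy, sz = a * x, b * x
          return (Q / (2 * np.pi * u * sy * sz)
                  * np.exp(-y**2 / (2 * sy**2))
                  * (np.exp(-(z - H)**2 / (2 * sz**2))
                     + np.exp(-(z + H)**2 / (2 * sz**2))))  # ground reflection

      # Concentration 100 m downwind on the centreline at 1.5 m height for a
      # 0.1 kg/s leak in a 3 m/s wind:
      print(plume_concentration(100.0, 0.0, 1.5, Q=0.1, u=3.0))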

  2. Model selection bias and Freedman's paradox

    USGS Publications Warehouse

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
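
    The weighting at the heart of such AIC-based model averaging is compact; the sketch below is the standard Akaike-weight construction (illustrative numbers, not necessarily the paper's exact estimator):

      import numpy as np

      def akaike_weights(aic):
          """w_i = exp(-Delta_i/2), normalized, with Delta_i = AIC_i - min AIC."""
          delta = np.asarray(aic, float) - min(aic)
          w = np.exp(-0.5 * delta)
          return w / w.sum()

      # Model-averaged estimate of one coefficient over a four-model set;
      # beta = 0 where the variable is absent, which shrinks spurious effects
      # toward zero instead of committing to a single selected model.
      aic = [102.3, 100.1, 104.8, 101.0]
      beta = [0.42, 0.38, 0.0, 0.51]
      w = akaike_weights(aic)
      print(w, float(np.dot(w, beta)))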

  3. Electromagnetic wave scattering from rough terrain

    NASA Astrophysics Data System (ADS)

    Papa, R. J.; Lennon, J. F.; Taylor, R. L.

    1980-09-01

    This report presents two aspects of a program designed to calculate electromagnetic scattering from rough terrain: (1) the use of statistical estimation techniques to determine topographic parameters and (2) the results of a single-roughness-scale scattering calculation based on those parameters, including comparison with experimental data. In the statistical part of the present calculation, digitized topographic maps are used to generate data bases for the required scattering cells. The application of estimation theory to the data leads to the specification of statistical parameters for each cell. The estimated parameters are then used in a hypothesis test to decide on a probability density function (PDF) that represents the height distribution in the cell. Initially, the formulation uses a single observation of the multivariate data. A subsequent approach involves multiple observations of the heights on a bivariate basis, and further refinements are being considered. The electromagnetic scattering analysis, the second topic, calculates the amount of specular and diffuse multipath power reaching a monopulse receiver from a pulsed beacon positioned over a rough Earth. The program allows for spatial inhomogeneities and multiple specular reflection points. The analysis of shadowing by the rough surface has been extended to the case where the surface heights are distributed exponentially. The calculated loss of boresight pointing accuracy attributable to diffuse multipath is then compared with the experimental results. The extent of the specular region, the use of localized height variations, and the effect of the azimuthal variation in power pattern are all assessed.

  4. Autonomous Sun-Direction Estimation Using Partially Underdetermined Coarse Sun Sensor Configurations

    NASA Astrophysics Data System (ADS)

    O'Keefe, Stephen A.

    In recent years there has been a significant increase in interest in smaller satellites as lower cost alternatives to traditional satellites, particularly with the rise in popularity of the CubeSat. Due to stringent mass, size, and often budget constraints, these small satellites rely on making the most of inexpensive hardware components and sensors, such as coarse sun sensors (CSS) and magnetometers. More expensive high-accuracy sun sensors often combine multiple measurements, and use specialized electronics, to deterministically solve for the direction of the Sun. Alternatively, cosine-type CSS output a voltage relative to the input light and are attractive due to their very low cost, simplicity to manufacture, small size, and minimal power consumption. This research investigates using coarse sun sensors for performing robust attitude estimation in order to point a spacecraft at the Sun after deployment from a launch vehicle, or following a system fault. As an alternative to using a large number of sensors, this thesis explores sun-direction estimation techniques with low computational costs that function well with underdetermined sets of CSS. Single-point estimators are coupled with simultaneous nonlinear control to achieve sun-pointing within a small percentage of a single orbit despite the partially underdetermined nature of the sensor suite. Leveraging an extensive analysis of the sensor models involved, sequential filtering techniques are shown to be capable of estimating the sun-direction to within a few degrees, with no a priori attitude information and using only CSS, despite the significant noise and biases present in the system. Detailed numerical simulations are used to compare and contrast the performance of the five different estimation techniques, with and without rate gyro measurements, their sensitivity to rate gyro accuracy, and their computation time. One of the key concerns with reducing the number of CSS is sensor degradation and failure. In this thesis, a Modified Rodrigues Parameter based CSS calibration filter suitable for autonomous on-board operation is developed. The sensitivity of this method's accuracy to the available Earth albedo data is evaluated and compared to the required computational effort. The calibration filter is expanded to perform sensor fault detection, and promising results are shown for reduced resolution albedo models. All of the methods discussed provide alternative attitude, determination, and control system algorithms for small satellite missions looking to use inexpensive, small sensors due to size, power, or budget limitations.
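
    A single-point least-squares sun-direction estimate from cosine-type CSS can be sketched as follows; this is a generic formulation rather than the thesis algorithms, and the sensor geometry and threshold are invented:

      import numpy as np

      def css_sun_direction(normals, voltages, v_max=1.0, thresh=0.05):
          """Each lit sensor ideally reads v_i = v_max * max(0, n_i . s), so
          stacking the lit sensors gives the linear system N s = v / v_max."""
          v = np.asarray(voltages, float)
          lit = v > thresh * v_max          # shadowed sensors carry no signal
          if lit.sum() < 3:
              raise ValueError("under-determined: fewer than three lit sensors")
          s, *_ = np.linalg.lstsq(np.asarray(normals, float)[lit],
                                  v[lit] / v_max, rcond=None)
          return s / np.linalg.norm(s)      # unit sun vector, body frame

      # Hypothetical pyramid of four canted heads plus two side-mounted heads:
      n = np.array([[0.5, 0.5, 0.707], [-0.5, 0.5, 0.707],
                    [-0.5, -0.5, 0.707], [0.5, -0.5, 0.707],
                    [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
      s_true = np.array([0.3, 0.2, 0.933]); s_true /= np.linalg.norm(s_true)
      print(css_sun_direction(n, np.clip(n @ s_true, 0, None)))

    With fewer than three lit sensors the problem becomes underdetermined, which is exactly the regime the sequential filtering techniques described above are designed to handle.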

  5. A Preliminary Flight Investigation of Formation Flight for Drag Reduction on the C-17 Aircraft

    NASA Technical Reports Server (NTRS)

    Pahle, Joe; Berger, Dave; Venti, Michael W.; Faber, James J.; Duggan, Chris; Cardinal, Kyle

    2012-01-01

    Many theoretical and experimental studies have shown that aircraft flying in formation could experience significant reductions in fuel use compared to solo flight. To date, formation flight for aerodynamic benefit has not been thoroughly explored in flight for large transport-class vehicles. This paper summarizes flight data gathered during several two-ship C-17 formation flights at a single flight condition of 275 knots at 25,000 ft MSL. Stabilized test points were flown with the trail aircraft at 1,000 and 3,000 ft aft of the lead aircraft at selected crosstrack and vertical offset locations within the estimated area of influence of the vortex generated by the lead aircraft. Flight data recorded at test points within the vortex from the lead aircraft are compared to data recorded at tare flight test points outside of the influence of the vortex. Since drag was not measured directly, reductions in fuel flow and thrust for level flight are used as a proxy for drag reduction. Estimated thrust and measured fuel flow reductions were documented at several trail test point locations within the area of influence of the lead's vortex. The maximum average fuel flow reduction was approximately 7-8%, compared to the tare points flown before and after the test points. Although incomplete, the data suggest that regions with fuel flow and thrust reduction greater than 10% compared to the tare test points exist within the vortex area of influence.

  6. Pharmacokinetics of lacosamide and omeprazole coadministration in healthy volunteers: results from a phase I, randomized, crossover trial.

    PubMed

    Cawello, Willi; Mueller-Voessing, Christa; Fichtner, Andreas

    2014-05-01

    The antiepileptic drug lacosamide has a low potential for drug-drug interactions, but is a substrate and moderate inhibitor of the cytochrome P450 (CYP) enzyme CYP2C19. This phase I, randomized, open-label, two-way crossover trial evaluated the pharmacokinetic effects of lacosamide and omeprazole coadministration. Healthy, White, male volunteers (n = 36) who were not poor metabolizers of CYP2C19 were randomized to treatment A (single-dose 40 mg omeprazole on days 1 and 8 together with 6 days of multiple-dose lacosamide [200-600 mg/day] on days 3-8) and treatment B (single doses of 300 mg lacosamide on days 1 and 8 with 7 days of 40 mg/day omeprazole on days 3-9) in pseudorandom order, separated by a ≥ 7-day washout period. Area under the concentration-time curve (AUC) and peak concentration (C(max)) were the primary pharmacokinetic parameters measured for lacosamide or omeprazole administered alone (reference) or in combination (test). Bioequivalence was concluded if the 90% confidence interval (CI) of the ratio (test/reference) fell within the acceptance range of 0.8-1.25. The point estimates (90% CI) of the ratio of omeprazole + lacosamide coadministered versus omeprazole alone for AUC (1.098 [0.996-1.209]) and C(max) (1.105 [0.979-1.247]) fell within the acceptance range for bioequivalence. The point estimates (90% CI) of the ratio of lacosamide + omeprazole coadministration versus lacosamide alone also fell within the acceptance range for bioequivalence (AUC 1.133 [1.102-1.165]; C(max) 0.996 [0.947-1.047]). Steady-state lacosamide did not influence omeprazole single-dose pharmacokinetics, and multiple-dose omeprazole did not influence lacosamide single-dose pharmacokinetics.
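
    The acceptance test described above reduces to checking whether the 90% CI of the geometric mean ratio lies within [0.8, 1.25]. A simplified paired-data sketch (the trial itself would use an ANOVA appropriate to the crossover design; the AUC values are invented):

      import numpy as np
      from scipy import stats

      def gmr_90ci(test, reference):
          """Point estimate and 90% CI of the geometric mean ratio."""
          d = np.log(np.asarray(test, float)) - np.log(np.asarray(reference, float))
          se = d.std(ddof=1) / np.sqrt(len(d))
          t90 = stats.t.ppf(0.95, len(d) - 1)      # two-sided 90% interval
          lo, mid, hi = np.exp([d.mean() - t90 * se, d.mean(), d.mean() + t90 * se])
          return mid, (lo, hi)

      auc_combo = [101, 93, 120, 88, 107, 99]      # drug + interacting drug
      auc_alone = [95, 90, 104, 85, 98, 92]        # drug alone
      ratio, (lo, hi) = gmr_90ci(auc_combo, auc_alone)
      print(ratio, (lo, hi), "bioequivalent:", 0.8 <= lo and hi <= 1.25)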

  7. An information-based approach to change-point analysis with applications to biophysics and cell biology.

    PubMed

    Wiggins, Paul A

    2015-07-21

    This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
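
    The simplest member of this model family, a single change in an i.i.d. Gaussian signal, is fitted by maximizing the split likelihood; the sketch below substitutes a fixed AIC-style penalty for the paper's frequentist information criterion:

      import numpy as np

      def best_changepoint(x, min_seg=3):
          """Maximum-likelihood single change point in the mean/variance of an
          i.i.d. Gaussian signal; returns None if the no-change model wins."""
          def loglik(seg):                      # maximized Gaussian log-likelihood
              return -0.5 * len(seg) * (np.log(2 * np.pi * seg.var()) + 1)

          full = loglik(x)
          best_k, best_ll = None, -np.inf
          for k in range(min_seg, len(x) - min_seg):
              ll = loglik(x[:k]) + loglik(x[k:])
              if ll > best_ll:
                  best_k, best_ll = k, ll
          # Penalize the extra mean, variance and location parameters.
          return best_k if 2 * (best_ll - full) > 2 * 3 else None

      rng = np.random.default_rng(0)
      x = np.r_[rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)]   # step at 120
      print(best_changepoint(x))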

  8. Quantum criticality of a spin-1 XY model with easy-plane single-ion anisotropy via a two-time Green function approach avoiding the Anderson-Callen decoupling

    NASA Astrophysics Data System (ADS)

    Mercaldo, M. T.; Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.

    2016-04-01

    In this work we study the quantum phase transition, the phase diagram and the quantum criticality induced by the easy-plane single-ion anisotropy in a d-dimensional quantum spin-1 XY model in the absence of an external longitudinal magnetic field. We employ the two-time Green function method, avoiding the Anderson-Callen decoupling of spin operators at the same sites, which is of doubtful accuracy. Following the original Devlin procedure, we treat exactly the higher order single-site anisotropy Green functions and use Tyablikov-like decouplings for the exchange higher order ones. The related self-consistent equations appear suitable for an analysis of the thermodynamic properties at and around second order phase transition points. Remarkably, the equivalence between the microscopic spin model and the continuous O(2)-vector model with transverse-Ising model (TIM)-like dynamics, characterized by a dynamic critical exponent z=1, emerges at low temperatures close to the quantum critical point with the single-ion anisotropy parameter D as the non-thermal control parameter. The zero-temperature critical anisotropy parameter Dc is obtained for dimensionalities d > 1 as a function of the microscopic exchange coupling parameter, and the related numerical data for different lattices are found to be in reasonable agreement with those obtained by means of alternative analytical and numerical methods. For d > 2, and in particular for d=3, we determine the finite-temperature critical line ending in the quantum critical point and the related TIM-like shift exponent, consistently with recent renormalization group predictions. The main crossover lines between different asymptotic regimes around the quantum critical point are also estimated, providing a global phase diagram and a quantum criticality very similar to the conventional ones.

  9. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
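
    For orientation, one widely used algebraic circle fit (the Kasa method; not necessarily identical to any of the five methods compared here) takes only a few lines:

      import numpy as np

      def kasa_circle_fit(points):
          """Least-squares circle through 2D stem cross-section points:
          solve x^2 + y^2 = a*x + b*y + c, giving centre (a/2, b/2) and
          radius sqrt(c + (a^2 + b^2)/4); returns centre and diameter."""
          x, y = points[:, 0], points[:, 1]
          A = np.column_stack([x, y, np.ones_like(x)])
          (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
          cx, cy = a / 2.0, b / 2.0
          return cx, cy, 2.0 * np.sqrt(c + cx**2 + cy**2)

      # Noisy half-circle, mimicking a single-scan slice of a 0.40 m stem:
      rng = np.random.default_rng(3)
      theta = rng.uniform(-np.pi / 2, np.pi / 2, 200)     # facing side only
      pts = np.column_stack([0.20 * np.cos(theta), 0.20 * np.sin(theta)])
      pts += rng.normal(0, 0.004, pts.shape)              # ~4 mm scanner noise
      print(kasa_circle_fit(pts))                         # centre x, y, DBH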

  10. Effective population sizes of a major vector of human diseases, Aedes aegypti.

    PubMed

    Saarman, Norah P; Gloria-Soria, Andrea; Anderson, Eric C; Evans, Benjamin R; Pless, Evlyn; Cosme, Luciano V; Gonzalez-Acosta, Cassandra; Kamgang, Basile; Wesson, Dawn M; Powell, Jeffrey R

    2017-12-01

    The effective population size (Ne) is a fundamental parameter in population genetics that determines the relative strength of selection and random genetic drift, the effect of migration, levels of inbreeding, and linkage disequilibrium. In many cases where it has been estimated in animals, Ne is on the order of 10%-20% of the census size. In this study, we use 12 microsatellite markers and 14,888 single nucleotide polymorphisms (SNPs) to empirically estimate Ne in Aedes aegypti, the major vector of yellow fever, dengue, chikungunya, and Zika viruses. We used the method of temporal sampling to estimate Ne on a global dataset made up of 46 samples of Ae. aegypti that included multiple time points from 17 widely distributed geographic localities. Our Ne estimates for Ae. aegypti fell within a broad range (~25-3,000) and averaged between 400 and 600 across all localities and time points sampled. Adult census size (Nc) estimates for this species range between one and five thousand, so the Ne/Nc ratio is about the same as for most animals. These Ne values are lower than estimates available for other insects and have important implications for the design of genetic control strategies to reduce the impact of this species of mosquito on human health.

  11. Small field depth dose profile of 6 MV photon beam in a simple air-water heterogeneity combination: A comparison between anisotropic analytical algorithm dose estimation with thermoluminescent dosimeter dose measurement.

    PubMed

    Mandal, Abhijit; Ram, Chhape; Mourya, Ankur; Singh, Navin

    2017-01-01

    To establish trends in the estimation error of dose calculated by the anisotropic analytical algorithm (AAA) with respect to dose measured by thermoluminescent dosimeters (TLDs) in air-water heterogeneity for small-field-size photon beams. TLDs were irradiated along the central axis of the photon beam in four different solid water phantom geometries using three small-field-size single beams. The depth dose profiles were estimated using the AAA calculation model for each field size. The estimated and measured depth dose profiles were compared. The overestimation (OE) within the air cavity depends on field size (f) and distance (x) from the solid water-air interface and is formulated as OE = -(0.63f + 9.40)x² + (-2.73f + 58.11)x + (0.06f² - 1.42f + 15.67). At the post-cavity point adjacent to the interface and at points distal to it, the overestimation depends on field size (f): OE = 0.42f² - 8.17f + 71.63 and OE = 0.84f² - 1.56f + 17.57, respectively. The trend of the estimation error of the AAA dose calculation algorithm with respect to measured values has been formulated throughout the radiation path length along the central axis of a 6 MV photon beam in an air-water heterogeneity combination for small-field-size photon beams generated from a 6 MV linear accelerator.
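
    The reported regressions are straightforward to evaluate; a direct transcription (the units of f and x are assumed to be cm, as the abstract does not state them):

      def oe_in_cavity(f, x):
          """AAA overestimation (%) inside the air cavity as a function of
          field size f and distance x from the solid water-air interface."""
          return (-(0.63 * f + 9.40) * x**2 + (-2.73 * f + 58.11) * x
                  + (0.06 * f**2 - 1.42 * f + 15.67))

      def oe_adjacent(f):       # post-cavity point adjacent to the interface
          return 0.42 * f**2 - 8.17 * f + 71.63

      def oe_distal(f):         # post-cavity points distal to the interface
          return 0.84 * f**2 - 1.56 * f + 17.57

      print(oe_in_cavity(2.0, 1.0), oe_adjacent(2.0), oe_distal(2.0))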

  12. The detrimental influence of attention on time-to-contact perception.

    PubMed

    Baurès, Robin; Balestra, Marianne; Rosito, Maxime; VanRullen, Rufin

    2018-04-23

    To what extent is attention necessary to estimate the time-to-contact (TTC) of a moving object, that is, to determine when the object will reach a specific point? While numerous studies have aimed at determining the visual cues and gaze strategies that enable this estimation, little is known about whether and how attention is involved or required in this process. To answer this question, we carried out an experiment in which participants estimated the TTC of a moving ball, either alone (single-task condition) or concurrently with a Rapid Serial Visual Presentation task embedded within the ball (dual-task condition). The results showed that participants produced better estimates when attention was drawn away from the TTC task. This suggests that drawing attention away from the TTC estimation limits the cognitive interference, intrusion of knowledge, or expectations that significantly modify the visually based TTC estimation, and argues that only limited attention is needed to correctly estimate the TTC.

  13. Translational Research for Occupational Therapy: Using SPRE in Hippotherapy for Children with Developmental Disabilities.

    PubMed

    Weissman-Miller, Deborah; Miller, Rosalie J; Shotwell, Mary P

    2017-01-01

    Translational research is redefined in this paper using a combination of methods in statistics and data science to enhance the understanding of outcomes and practice in occupational therapy. These new methods are applied, using larger data and smaller single-subject data, to a study in hippotherapy for children with developmental disabilities (DD). The Centers for Disease Control and Prevention estimates that DD affects nearly 10 million children, aged 2-19, whose diagnoses may be comorbid. Hippotherapy is defined here as a treatment strategy in occupational therapy that uses equine movement to achieve functional outcomes. The semiparametric ratio estimator (SPRE), a single-subject statistical and small data science model, is used to derive a "change point" indicating where the participant adapts to treatment, from which predictions are made. The data analyzed here are from an institutional review board-approved pilot study using the Hippotherapy Evaluation and Assessment Tool (HEAT) measure, where outcomes are given separately for each of four measured domains and the total scores of each participant. Analysis with SPRE, using statistical methods to predict a "change point" and data science graphical interpretations of the data, shows the translational comparisons between results from larger mean values and the very different results from smaller values for each HEAT domain in terms of relationships and statistical probabilities.
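
    SPRE derives its change point from parameters of the initial OLS fit; that derivation is not reproduced here. As a generic stand-in only, the sketch below locates a change point in short-term linear data by minimizing the pooled residual sum of squares of two OLS segments, which is a common, simpler alternative and not the SPRE procedure itself.

```python
import numpy as np

def change_point_ols(t, y, min_seg=3):
    """Pick the change point minimizing the pooled residual sum of
    squares of two ordinary least-squares line fits (generic method,
    shown for illustration alongside the SPRE idea)."""
    best = (np.inf, None)
    for k in range(min_seg, len(t) - min_seg):
        sse = 0.0
        for ts, ys in ((t[:k], y[:k]), (t[k:], y[k:])):
            coef = np.polyfit(ts, ys, 1)
            sse += np.sum((ys - np.polyval(coef, ts)) ** 2)
        best = min(best, (sse, k))
    return best[1]

# Synthetic short-term series whose slope changes at t = 5
t = np.arange(12, dtype=float)
y = np.where(t < 5, 90 - 2.0 * t, 80 - 0.4 * (t - 5))
print(change_point_ols(t, y))   # -> 5
```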

  14. REQUEST: A Recursive QUEST Algorithm for Sequential Attitude Determination

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1996-01-01

    In order to find the attitude of a spacecraft with respect to a reference coordinate system, vector measurements are taken. The vectors are pairs of measurements of the same generalized vector, taken in the spacecraft body coordinates as well as in the reference coordinate system. We are interested in finding the best estimate of the transformation between these coordinate systems. The algorithm called QUEST yields that estimate, where the attitude is expressed by a quaternion. QUEST is an efficient algorithm which provides a least-squares fit of the quaternion of rotation to the vector measurements. QUEST, however, is a single-time-point (single-frame) batch algorithm; thus measurements that were taken at previous time points are discarded. The algorithm presented in this work provides a recursive routine which considers all past measurements. The algorithm is based on the fact that the so-called K matrix, one of whose eigenvectors is the sought quaternion, is linearly related to the measured pairs, and on the ability to propagate K. The extraction of the appropriate eigenvector is done according to the classical QUEST algorithm. This stage, however, can be eliminated, and the computation simplified, if a standard eigenvalue-eigenvector solver algorithm is used. The development of the recursive algorithm is presented and illustrated via a numerical example.
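
    The K matrix referred to is the standard Davenport/QUEST construction from weighted vector pairs. A minimal sketch of the single-frame (batch) step follows, with the optimal quaternion extracted by a standard symmetric eigensolver, exactly as the abstract suggests; the recursive propagation of K, which is the paper's contribution, is not shown, and sign and quaternion-ordering conventions vary across references.

```python
import numpy as np

def davenport_k(body_vecs, ref_vecs, weights):
    """Build Davenport's K matrix from paired unit-vector observations.

    The attitude quaternion (vector part first, scalar last, defined up
    to sign) is the eigenvector of K with the largest eigenvalue."""
    B = sum(w * np.outer(b, r)
            for w, b, r in zip(weights, body_vecs, ref_vecs))
    sigma = np.trace(B)
    S = B + B.T
    z = sum(w * np.cross(b, r)
            for w, b, r in zip(weights, body_vecs, ref_vecs))
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    return K

def quest_estimate(body_vecs, ref_vecs, weights):
    K = davenport_k(body_vecs, ref_vecs, weights)
    vals, vecs = np.linalg.eigh(K)      # standard symmetric eigensolver
    return vecs[:, np.argmax(vals)]     # quaternion [qx, qy, qz, qw]

# Sanity check: identical vector sets give the identity quaternion.
b = r = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0])]
print(quest_estimate(b, r, weights=[0.5, 0.5]))   # ~[0, 0, 0, +/-1]
```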

  15. The effect of social desirability trait on self-reported dietary measures among multi-ethnic female health center employees.

    PubMed

    Hébert, J R; Peterson, K E; Hurley, T G; Stoddard, A M; Cohen, N; Field, A E; Sorensen, G

    2001-08-01

    To evaluate the effect of social desirability trait, the tendency to respond in a manner consistent with societal expectations, on self-reported fruit, vegetable, and macronutrient intake. A 61-item food frequency questionnaire (FFQ), a 7-item fruit and vegetable screener, and a single question on combined fruit and vegetable intake were completed by 132 female employees at five health centers in eastern Massachusetts. Intakes of fruit and vegetables derived from all three methods and of macronutrients from the FFQ were fit as dependent variables in multiple linear regression models (overall and by race/ethnicity and education); independent variables included 3-day mean intakes derived from 24-hour recalls (24HR) and score on the 33-point Marlowe-Crowne Social Desirability scale (the regression coefficient for which reflects its effect on estimates of dietary intake based on the comparison method relative to 24HR). Results are based on the 93 women with complete data and FFQ-derived caloric intake between 450 and 4500 kcal/day. In women with college education, FFQ-derived estimates of total caloric intake were subject to under-reporting associated with social desirability trait (e.g., the regression coefficient for total caloric intake was -23.6 kcal/day/point in that group versus 36.1 kcal/day/point in women with less than college education) (difference = 59.7 kcal/day/point, 95% confidence interval (CI) = 13.2, 106.2). Except for the single question, on which women with college education tended to under-report (difference = 0.103 servings/day/point, 95% CI = 0.003, 0.203), there was no association of social desirability trait with self-reported fruit and vegetable intake. The effect of social desirability trait on FFQ reports of macronutrient intake appeared to differ by education, but not by ethnicity or race. The results of this study may have important implications for epidemiologic studies of diet and health in women.

  16. Identification of boiler inlet transfer functions and estimation of system parameters

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function of the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
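
    A minimal sketch of the general idea: fit a trial rational transfer function to complex frequency-response data by nonlinear least squares, so the model's locus of points tracks the measured locus in the complex plane. The trial structure, the plain sum-of-squares objective (the paper uses a penalized performance measure), and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def tf_response(params, w):
    """Trial transfer function with a first-order numerator over a
    second-order denominator: G(jw) = (b0 + b1*jw) / (1 + a1*jw + a2*(jw)^2)."""
    b0, b1, a1, a2 = params
    jw = 1j * w
    return (b0 + b1 * jw) / (1.0 + a1 * jw + a2 * jw**2)

def residuals(params, w, measured):
    err = tf_response(params, w) - measured
    # penalize real and imaginary misfit jointly so the model locus
    # follows the measured locus, not just the magnitude
    return np.concatenate([err.real, err.imag])

w = np.logspace(-1, 2, 60)                       # rad/s grid
measured = tf_response([1.0, 0.5, 0.8, 0.2], w)  # stand-in for data
fit = least_squares(residuals, x0=[0.5, 0.1, 0.5, 0.1],
                    args=(w, measured))
print(fit.x)   # recovers [1.0, 0.5, 0.8, 0.2]
```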

  17. Optical fiber biocompatible sensors for monitoring selective treatment of tumors via thermal ablation

    NASA Astrophysics Data System (ADS)

    Tosi, Daniele; Poeggel, Sven; Dinesh, Duraibabu B.; Macchi, Edoardo G.; Gallati, Mario; Braschi, Giovanni; Leen, Gabriel; Lewis, Elfed

    2015-09-01

    Thermal ablation (TA) is an interventional procedure for the selective treatment of tumors that results in minimally invasive outpatient care. The lack of real-time control of TA is one of its main weaknesses. Miniature and biocompatible optical fiber sensors are applied to achieve dense, multi-parameter monitoring that can substantially improve the control of TA. Ex vivo measurements performed on porcine liver tissue are reported, reproducing radiofrequency ablation of hepatocellular carcinoma. Our measurement campaign has a two-fold focus: (1) dual pressure-temperature measurement with a single probe; (2) distributed thermal measurement to estimate point-by-point cell mortality.

  18. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    PubMed

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
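
    For the MAP decoder (application 1), a minimal sketch under strong simplifying assumptions: a memoryless Poisson encoding model with exponential nonlinearity and an i.i.d. Gaussian stimulus prior, so the negative log posterior is convex and a generic optimizer suffices. The paper's models use full temporal filters; all numbers below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, N, dt = 60, 10, 0.1                    # time bins, neurons, bin width
k = rng.normal(0, 0.8, N)                 # per-neuron gain (memoryless)
b = np.full(N, 0.5)                       # baseline log-rates
s_true = np.sin(np.linspace(0, 4 * np.pi, T))
lam = np.exp(np.outer(k, s_true) + b[:, None])
spikes = rng.poisson(lam * dt)            # N x T observed spike counts

prior_var = 1.0

def neg_log_posterior(s):
    lin = np.outer(k, s) + b[:, None]
    # Poisson log-likelihood (up to constants) plus Gaussian log-prior;
    # concave in s for the exponential nonlinearity
    ll = np.sum(spikes * lin - np.exp(lin) * dt)
    return -ll + 0.5 * np.sum(s**2) / prior_var

s_map = minimize(neg_log_posterior, np.zeros(T), method="L-BFGS-B").x
print(np.corrcoef(s_map, s_true)[0, 1])   # decoded vs true stimulus
```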

  19. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
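
    A minimal sketch of the null-space decomposition step, assuming a local linearization (Jacobian) of the model at the calibrated parameters: an SVD splits parameter space into solution and null spaces, and random null-space draws perturb the calibrated field without degrading the first-order fit to the data. The full NSMC workflow includes re-calibration steps not shown here.

```python
import numpy as np

def nsmc_fields(jacobian, p_cal, n_real, tol=1e-8, scale=1.0, seed=0):
    """Null-space Monte Carlo sketch: hold the calibrated solution-space
    component fixed and randomize the null-space component, so every
    realization preserves the data fit to first order."""
    rng = np.random.default_rng(seed)
    _, s, vt = np.linalg.svd(jacobian, full_matrices=True)
    rank = np.sum(s > tol * s[0])
    v_null = vt[rank:].T                     # basis for the null space
    draws = rng.normal(0.0, scale, (v_null.shape[1], n_real))
    return p_cal[:, None] + v_null @ draws   # one column per realization

# Toy problem: 3 observations constraining 6 parameters.
J = np.random.default_rng(2).normal(size=(3, 6))
fields = nsmc_fields(J, p_cal=np.zeros(6), n_real=100)
print(np.abs(J @ fields).max())   # ~0: fit preserved to first order
```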

  20. The digital implementation of control compensators: The coefficient wordlength issue

    NASA Technical Reports Server (NTRS)

    Moroney, P.; Willsky, A. S.; Houpt, P. K.

    1979-01-01

    There exist a number of mathematical procedures for designing discrete-time compensators. However, the digital implementation of these designs, with a microprocessor for example, has not received nearly as thorough an investigation. The finite-precision nature of the digital hardware makes it necessary to choose an algorithm (computational structure) that will perform well enough with regard to the initial objectives of the design. This paper describes a procedure for estimating the required fixed-point coefficient wordlength for any given computational structure for the implementation of a single-input single-output LQG design. The results are compared to the actual number of bits necessary to achieve a specified performance index.
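
    A minimal sketch of the kind of experiment the paper formalizes, assuming a simple direct-form filter: coefficients are rounded to a b-bit fixed-point grid and the frequency-response degradation is tabulated against wordlength. The filter and scaling choices are illustrative, not the paper's procedure.

```python
import numpy as np
from scipy.signal import freqz

def quantize(coeffs, bits, scale=1.0):
    """Round coefficients to a signed fixed-point grid with `bits` total
    bits and the given full-scale range (a crude uniform quantizer)."""
    q = scale / 2 ** (bits - 1)
    return np.round(np.asarray(coeffs) / q) * q

# Example 2nd-order Butterworth low-pass in direct form
b = [0.0675, 0.1349, 0.0675]
a = [1.0, -1.1430, 0.4128]

w, h_ref = freqz(b, a)
for bits in (16, 12, 8, 6):
    _, h_q = freqz(quantize(b, bits), quantize(a, bits, scale=2.0))
    print(bits, np.max(np.abs(h_q - h_ref)))   # response error vs bits
```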

  1. The case for 6-component ground motion observations in planetary seismology

    NASA Astrophysics Data System (ADS)

    Joshi, Rakshit; van Driel, Martin; Donner, Stefanie; Nunn, Ceri; Wassermann, Joachim; Igel, Heiner

    2017-04-01

    The imminent InSight mission will place a single seismic station on Mars to learn more about the structure of the Martian interior. Due to cost and difficulty, only single stations are currently feasible for planetary missions. We show that future single-station missions should also measure rotational ground motions, in addition to the classic 3 components of translational motion. The joint, collocated, 6-component (6C) observations offer access to additional information that can otherwise only be obtained through seismic array measurements or is associated with large uncertainties. An example is the access to local phase velocity information from measurements of amplitude ratios of translations and rotations. When surface waves are available, this implies (in principle) that 1D velocity models can be estimated from Love wave dispersion curves. In addition, rotational ground motion observations can distinguish between Love and Rayleigh waves as well as S- and P-type motions. Wave propagation directions can be estimated by maximizing (or minimizing) coherence between translational and rotational motions. In combination with velocity-depth estimates, locations of seismic sources can be determined from a single station with little or no prior knowledge of the velocity structure. We demonstrate these points with both theoretical and real data examples, using the vertical component of motion from ring laser recordings at Wettzell and all components of motion from the ROMY ring near Munich. Finally, we present the current state of technology concerning portable rotation sensors and discuss the relevance to planetary seismology.
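
    For the phase-velocity point: a plane Love wave obeys |a_t| = 2c|Omega_z| between transverse acceleration and vertical rotation rate, so their amplitude ratio yields local phase velocity from a single station. A minimal sketch under that plane-wave assumption (the sign encodes the propagation direction and depends on conventions):

```python
import numpy as np

def love_phase_velocity(accel_t, rot_rate_z):
    """Apparent Love-wave phase velocity from the amplitude ratio of
    transverse acceleration to vertical rotation rate, via a
    least-squares ratio (|a_t| = 2c |Omega_z| for a plane Love wave)."""
    return 0.5 * np.dot(accel_t, rot_rate_z) / np.dot(rot_rate_z, rot_rate_z)

# Synthetic noise-free plane Love wave with c = 3000 m/s
t = np.linspace(0, 10, 2000)
c = 3000.0
rot = 1e-6 * np.sin(2 * np.pi * 0.5 * t)   # rotation rate, rad/s
acc = 2 * c * rot                          # transverse acceleration, m/s^2
print(love_phase_velocity(acc, rot))       # -> ~3000
```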

  2. Evaluation strategies for isotope ratio measurements of single particles by LA-MC-ICPMS.

    PubMed

    Kappel, S; Boulyga, S F; Dorta, L; Günther, D; Hattendorf, B; Koffler, D; Laaha, G; Leisch, F; Prohaska, T

    2013-03-01

    Data evaluation is a crucial step when it comes to the determination of accurate and precise isotope ratios computed from transient signals measured by multi-collector-inductively coupled plasma mass spectrometry (MC-ICPMS) coupled to, for example, laser ablation (LA). In the present study, the applicability of different data evaluation strategies (i.e. 'point-by-point', 'integration' and 'linear regression slope' method) for the computation of (235)U/(238)U isotope ratios measured in single particles by LA-MC-ICPMS was investigated. The analyzed uranium oxide particles (i.e. 9073-01-B, CRM U010 and NUSIMEP-7 test samples), having sizes down to the sub-micrometre range, are certified with respect to their (235)U/(238)U isotopic signature, which enabled evaluation of the applied strategies with respect to precision and accuracy. The different strategies were also compared with respect to their expanded uncertainties. Even though the 'point-by-point' method proved to be superior, the other methods are advantageous, as they take weighted signal intensities into account. For the first time, the use of a 'finite mixture model' is presented for the determination of an unknown number of different U isotopic compositions of single particles present on the same planchet. The model uses an algorithm that determines the number of isotopic signatures by attributing individual data points to computed clusters. The (235)U/(238)U isotope ratios are then determined by means of the slopes of linear regressions estimated for each cluster. The model was successfully applied for the accurate determination of different (235)U/(238)U isotope ratios of particles deposited on the NUSIMEP-7 test samples.
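
    A minimal sketch of the three evaluation strategies on a synthetic transient signal; the weighting, background-correction, and uncertainty treatments of the actual study are omitted, and all signal parameters below are invented.

```python
import numpy as np

def ratio_point_by_point(i235, i238):
    """Mean of per-sweep intensity ratios."""
    return np.mean(i235 / i238)

def ratio_integration(i235, i238):
    """Ratio of summed (integrated) intensities; weights sweeps by signal."""
    return np.sum(i235) / np.sum(i238)

def ratio_regression_slope(i235, i238):
    """Slope of 235U vs 238U intensities through the transient signal."""
    slope, _ = np.polyfit(i238, i235, 1)
    return slope

# Synthetic transient LA signal with noise; true ratio 0.00725 (natural U)
rng = np.random.default_rng(3)
shape = np.exp(-0.5 * ((np.arange(200) - 60) / 25.0) ** 2)
i238 = 1e6 * shape + rng.normal(0, 500, 200) + 1e3
i235 = 0.00725 * 1e6 * shape + rng.normal(0, 50, 200) + 7
for f in (ratio_point_by_point, ratio_integration, ratio_regression_slope):
    print(f.__name__, f(i235=i235, i238=i238))
```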

  3. Real-time estimation of ionospheric delay using GPS measurements

    NASA Astrophysics Data System (ADS)

    Lin, Lao-Sheng

    1997-12-01

    When radio waves such as the GPS signals propagate through the ionosphere, they experience an extra time delay. The ionospheric delay can be eliminated (to the first order) through a linear combination of L1 and L2 observations from dual-frequency GPS receivers. Taking advantage of this dispersive principle, one or more dual- frequency GPS receivers can be used to determine a model of the ionospheric delay across a region of interest and, if implemented in real-time, can support single-frequency GPS positioning and navigation applications. The research objectives of this thesis were: (1) to develop algorithms to obtain accurate absolute Total Electron Content (TEC) estimates from dual-frequency GPS observables, and (2) to develop an algorithm to improve the accuracy of real-time ionosphere modelling. In order to fulfil these objectives, four algorithms have been proposed in this thesis. A 'multi-day multipath template technique' is proposed to mitigate the pseudo-range multipath effects at static GPS reference stations. This technique is based on the assumption that the multipath disturbance at a static station will be constant if the physical environment remains unchanged from day to day. The multipath template, either single-day or multi-day, can be generated from the previous days' GPS data. A 'real-time failure detection and repair algorithm' is proposed to detect and repair the GPS carrier phase 'failures', such as the occurrence of cycle slips. The proposed algorithm uses two procedures: (1) application of a statistical test on the state difference estimated from robust and conventional Kalman filters in order to detect and identify the carrier phase failure, and (2) application of a Kalman filter algorithm to repair the 'identified carrier phase failure'. A 'L1/L2 differential delay estimation algorithm' is proposed to estimate GPS satellite transmitter and receiver L1/L2 differential delays. This algorithm, based on the single-site modelling technique, is able to estimate the sum of the satellite and receiver L1/L2 differential delay for each tracked GPS satellite. A 'UNSW grid-based algorithm' is proposed to improve the accuracy of real-time ionosphere modelling. The proposed algorithm is similar to the conventional grid-based algorithm. However, two modifications were made to the algorithm: (1) an 'exponential function' is adopted as the weighting function, and (2) the 'grid-based ionosphere model' estimated from the previous day is used to predict the ionospheric delay ratios between the grid point and reference points. (Abstract shortened by UMI.)
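
    The dispersive principle the thesis exploits: the first-order ionospheric delay scales as 40.3*TEC/f^2, so the geometry-free combination of dual-frequency pseudoranges isolates TEC. A minimal sketch, under the assumption that differential code biases (the L1/L2 differential delays the thesis estimates separately) have already been removed:

```python
# GPS L1/L2 carrier frequencies, Hz
F1, F2 = 1575.42e6, 1227.60e6

def slant_tec(p1, p2):
    """Slant TEC (in TECU, 1e16 el/m^2) from dual-frequency pseudoranges
    in metres; assumes satellite/receiver differential delays removed."""
    factor = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return factor * (p2 - p1) / 1e16

# ~8.1 m extra L2 delay corresponds to roughly 77 TECU
print(slant_tec(p1=22_000_000.0, p2=22_000_008.1))
```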

  4. Single-case experimental design yielded an effect estimate corresponding to a randomized controlled trial.

    PubMed

    Shadish, William R; Rindskopf, David M; Boyajian, Jonathan G

    2016-08-01

    We reanalyzed data from a previous randomized crossover design that administered high or low doses of intravenous immunoglobulin (IgG) to 12 patients with hypogammaglobulinaemia over 12 time points, with crossover after time 6. The objective was to see if results corresponded when analyzed as a set of single-case experimental designs vs. as a usual randomized controlled trial (RCT). Two blinded statisticians independently analyzed results. One analyzed the RCT comparing mean outcomes of group A (high dose IgG) to group B (low dose IgG) at the usual trial end point (time 6 in this case). The other analyzed all 12 time points for the group B patients as six single-case experimental designs analyzed together in a Bayesian nonlinear framework. In the randomized trial, group A [M = 794.93; standard deviation (SD) = 90.48] had significantly higher serum IgG levels at time six than group B (M = 283.89; SD = 71.10) (t = 10.88; df = 10; P < 0.001), yielding a mean difference of MD = 511.05 [standard error (SE) = 46.98]. For the single-case experimental designs, the effect from an intrinsically nonlinear regression was also significant and comparable in size with overlapping confidence intervals: MD = 495.00, SE = 54.41, and t = 495.00/54.41 = 9.10. Subsequent exploratory analyses indicated that how trend was modeled made a difference to these conclusions. The results of single-case experimental designs accurately approximated results from an RCT, although more work is needed to understand the conditions under which this holds. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. A benchmark theoretical study of the electronic ground state and of the singlet-triplet split of benzene and linear acenes

    NASA Astrophysics Data System (ADS)

    Hajgató, B.; Szieberth, D.; Geerlings, P.; De Proft, F.; Deleuze, M. S.

    2009-12-01

    A benchmark theoretical study of the electronic ground state and of the vertical and adiabatic singlet-triplet (ST) excitation energies of benzene (n = 1) and n-acenes (C(4n+2)H(2n+4)) ranging from naphthalene (n = 2) to heptacene (n = 7) is presented, on the grounds of single- and multireference calculations based on restricted or unrestricted zero-order wave functions. High-level and large-scale treatments of electronic correlation in the ground state are found to be necessary for compensating giant but unphysical symmetry-breaking effects in unrestricted single-reference treatments. The composition of multiconfigurational wave functions, the topologies of natural orbitals in symmetry-unrestricted CASSCF calculations, the T1 diagnostics of coupled cluster theory, and further energy-based criteria demonstrate that all investigated systems exhibit an A1g singlet closed-shell electronic ground state. Singlet-triplet (S0-T1) energy gaps can therefore be very accurately determined by applying the principles of a focal point analysis to the results of a series of single-point and symmetry-restricted calculations employing correlation consistent cc-pVXZ basis sets (X = D, T, Q, 5) and single-reference methods [HF, MP2, MP3, MP4SDQ, CCSD, CCSD(T)] of improving quality. According to our best estimates, which amount to a dual extrapolation of energy differences to the level of coupled cluster theory including single, double, and perturbative estimates of connected triple excitations [CCSD(T)] in the limit of an asymptotically complete basis set (cc-pV∞Z), the S0-T1 vertical excitation energies of benzene (n = 1) and n-acenes (n = 2-7) amount to 100.79, 76.28, 56.97, 40.69, 31.51, 22.96, and 18.16 kcal/mol, respectively. Values of 87.02, 62.87, 46.22, 32.23, 24.19, 16.79, and 12.56 kcal/mol are correspondingly obtained at the CCSD(T)/cc-pV∞Z level for the S0-T1 adiabatic excitation energies, upon including B3LYP/cc-pVTZ corrections for zero-point vibrational energies. In line with the absence of Peierls distortions, extrapolations of results indicate a vanishingly small S0-T1 energy gap of 0 to ~4 kcal/mol (~0.17 eV) in the limit of an infinitely large polyacene.
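
    One standard ingredient of such a focal point analysis is the two-point X^-3 extrapolation of correlation energies to the complete-basis-set limit. A minimal sketch with invented numbers (the paper's full scheme layers method corrections on top of this):

```python
def cbs_two_point(e_x, x, e_y, y):
    """Helgaker-style two-point X**-3 extrapolation of correlation
    energies (or correlation-energy differences) to the CBS limit:
    E(X) = E(inf) + A * X**-3, solved from two cardinal numbers."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# e.g. a CCSD(T) correlation contribution to an S0-T1 gap (kcal/mol)
# from cc-pVQZ (X=4) and cc-pV5Z (X=5) values; the numbers are made up.
print(cbs_two_point(e_x=100.95, x=5, e_y=101.20, y=4))  # -> ~100.69
```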

  6. Integrating health profile with survival for quality of life assessment.

    PubMed

    Hwang, Jing-Shiang; Wang, Jung-Der

    2004-02-01

    In cohort studies or clinical trials, measurements of quality of life (QoL) are averaged across available individuals for each group at given points in time to produce single measures for comparison. However, estimates of these single measures may be severely biased if substantial mortality occurs over time. The objective of this study is to develop a method that integrates QoL measurement and survival for long-term evaluation of health services. We defined a mean QoL score function over time for an index population as the average QoL score of all individuals, both alive and dead, at each time point in the population. While a living subject's QoL can be assessed by asking for his or her subjective preference, a decedent can be assigned a fixed score depending on the specific facet of the health profile. The mean QoL score function over time is reduced to a single measure of expected cumulative QoL score, which is the area under the curve of the mean QoL score function over a given time interval and can be estimated by taking a random sample from a cross-sectional survey. For the QoL score function to be extrapolated to life-long, it requires the assumption that the disease causes premature death or a long-term moderate impairment of QoL. We provide methods and computer programs for estimating mean QoL score functions and the reduced single measures for use in comparisons. A cohort of 779 breast cancer patients from Chiangmai, Thailand, was followed for 12 years to demonstrate the proposed methods. The data included the 12-year complete survival records and QoL scores on 233 patients collected from a cross-sectional survey using the WHOQOL questionnaire and the standard gamble method. The expected cumulative QoL scores using utility and psychometric scales were compared among patients in four groups of clinical stages in this cohort for time from onset up to 12 years and life-long. We conclude that such an integration of QoL measurement with survival can be useful for the evaluation of health services and clinical decisions.
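
    A minimal sketch of the reduced single measure, assuming discrete assessment times and the trapezoidal rule: decedents stay in the average at a fixed, facet-dependent score, and the resulting mean QoL curve is integrated over the interval. The survival and QoL inputs below are invented.

```python
import numpy as np

def expected_cumulative_qol(times, qol_alive, surv, qol_dead=0.0):
    """Area under the mean-QoL curve where decedents remain in the
    average at a fixed, facet-dependent score (trapezoidal rule).

    times     : assessment time points
    qol_alive : mean QoL score among survivors at each time point
    surv      : survival proportion at each time point
    """
    times = np.asarray(times, float)
    mean_qol = (np.asarray(surv) * np.asarray(qol_alive)
                + (1.0 - np.asarray(surv)) * qol_dead)
    return np.sum((mean_qol[1:] + mean_qol[:-1]) / 2.0 * np.diff(times))

t = np.arange(0, 13)                       # yearly assessments, 12 years
print(expected_cumulative_qol(t, qol_alive=np.full(13, 0.8),
                              surv=np.exp(-0.1 * t)))
```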

  7. New Primary Dew-Point Generators at HMI/FSB-LPM in the Range from -70 °C to +60 °C

    NASA Astrophysics Data System (ADS)

    Zvizdic, Davor; Heinonen, Martti; Sestan, Danijel

    2012-09-01

    To extend the dew-point range and to improve the uncertainties of the humidity scale realization at HMI/FSB-LPM, new primary low- and high-range dew-point generators were developed and implemented in cooperation with MIKES in 2009, through EUROMET Project No. 912. The low-range saturator is designed for primary realization of the dew-point temperature scale from -70 °C to +5 °C, while the high-range saturator covers the range from 1 °C to 60 °C. The system is designed as a single-pressure, single-pass dew-point generator. MIKES designed and constructed both saturators to be implemented in dew-point calibration systems at LPM. The LPM took care of purchasing and adapting the liquid baths, implementing the temperature and pressure measurement equipment appropriate for use in the systems, and developing the gas preparation and flow control systems as well as the computer-based automated data acquisition. The principle and the design of the generator are described in detail and schematically depicted. Tests were performed at MIKES to investigate how close both saturators are to an ideal saturator. Results of the tests show that both saturators are efficient enough for a primary realization of the dew-point temperature scale from -70 °C to +60 °C in the specified flow-rate ranges. The estimated standard uncertainties due to non-ideal saturation efficiency are between 0.02 °C and 0.05 °C.

  8. The hepatitis C cascade of care: identifying priorities to improve clinical outcomes.

    PubMed

    Linas, Benjamin P; Barter, Devra M; Leff, Jared A; Assoumou, Sabrina A; Salomon, Joshua A; Weinstein, Milton C; Kim, Arthur Y; Schackman, Bruce R

    2014-01-01

    As highly effective hepatitis C virus (HCV) therapies emerge, data are needed to inform the development of interventions to improve HCV treatment rates. We used simulation modeling to estimate the impact of loss to follow-up on HCV treatment outcomes and to identify intervention strategies likely to provide good value for the resources invested in them. We used a Monte Carlo state-transition model to simulate a hypothetical cohort of chronically HCV-infected individuals recently screened positive for serum HCV antibody. We simulated four hypothetical intervention strategies (linkage to care; treatment initiation; integrated case management; peer navigator) to improve HCV treatment rates, varying efficacies and costs, and identified strategies that would most likely result in the best value for the resources required for implementation. Sustained virologic responses (SVRs), life expectancy, quality-adjusted life expectancy (QALE), costs from health system and program implementation perspectives, and incremental cost-effectiveness ratios (ICERs). We estimate that imperfect follow-up reduces the real-world effectiveness of HCV therapies by approximately 75%. In the base case, a modestly effective hypothetical peer navigator program maximized the number of SVRs and QALE, with an ICER compared to the next best intervention of $48,700/quality-adjusted life year. Hypothetical interventions that simultaneously addressed multiple points along the cascade provided better outcomes and more value for money than less costly interventions targeting single steps. The 5-year program cost of the hypothetical peer navigator intervention was $14.5 million per 10,000 newly diagnosed individuals. We estimate that imperfect follow-up during the HCV cascade of care greatly reduces the real-world effectiveness of HCV therapy. Our mathematical model shows that modestly effective interventions to improve follow-up would likely be cost-effective. Priority should be given to developing and evaluating interventions addressing multiple points along the cascade rather than options focusing solely on single points.

  9. The Hepatitis C Cascade of Care: Identifying Priorities to Improve Clinical Outcomes

    PubMed Central

    Linas, Benjamin P.; Barter, Devra M.; Leff, Jared A.; Assoumou, Sabrina A.; Salomon, Joshua A.; Weinstein, Milton C.; Kim, Arthur Y.; Schackman, Bruce R.

    2014-01-01

    Background As highly effective hepatitis C virus (HCV) therapies emerge, data are needed to inform the development of interventions to improve HCV treatment rates. We used simulation modeling to estimate the impact of loss to follow-up on HCV treatment outcomes and to identify intervention strategies likely to provide good value for the resources invested in them. Methods We used a Monte Carlo state-transition model to simulate a hypothetical cohort of chronically HCV-infected individuals recently screened positive for serum HCV antibody. We simulated four hypothetical intervention strategies (linkage to care; treatment initiation; integrated case management; peer navigator) to improve HCV treatment rates, varying efficacies and costs, and identified strategies that would most likely result in the best value for the resources required for implementation. Main measures Sustained virologic responses (SVRs), life expectancy, quality-adjusted life expectancy (QALE), costs from health system and program implementation perspectives, and incremental cost-effectiveness ratios (ICERs). Results We estimate that imperfect follow-up reduces the real-world effectiveness of HCV therapies by approximately 75%. In the base case, a modestly effective hypothetical peer navigator program maximized the number of SVRs and QALE, with an ICER compared to the next best intervention of $48,700/quality-adjusted life year. Hypothetical interventions that simultaneously addressed multiple points along the cascade provided better outcomes and more value for money than less costly interventions targeting single steps. The 5-year program cost of the hypothetical peer navigator intervention was $14.5 million per 10,000 newly diagnosed individuals. Conclusions We estimate that imperfect follow-up during the HCV cascade of care greatly reduces the real-world effectiveness of HCV therapy. Our mathematical model shows that modestly effective interventions to improve follow-up would likely be cost-effective. Priority should be given to developing and evaluating interventions addressing multiple points along the cascade rather than options focusing solely on single points. PMID:24842841

  10. Electrical level of defects in single-layer two-dimensional TiO2

    NASA Astrophysics Data System (ADS)

    Song, X. F.; Hu, L. F.; Li, D. H.; Chen, L.; Sun, Q. Q.; Zhou, P.; Zhang, D. W.

    2015-11-01

    The remarkable properties of graphene and transition metal dichalcogenides (TMDCs) have attracted increasing attention to two-dimensional materials, but the gate oxide, one of the key components of two-dimensional electronic devices, has rarely been reported. We found that a single-layer oxide such as TiO2 can be used as a two-dimensional gate oxide in 2D electronic structures. However, the electrical performance is seriously influenced by the defects existing in the single-layer oxide. In this paper, a nondestructive and noncontact solution based on spectroscopic ellipsometry has been used to detect the defect states and energy levels of single-layer TiO2 films. By fitting a Lorentz oscillator model, the results indicate that the exact position of the defect energy levels depends on the estimated band gap and the charge state of the point defects of TiO2.
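
    A minimal sketch of the Lorentz oscillator dielectric model such fits are based on, in one common parameterization (amplitude, center energy, broadening); the oscillator values below, including the sub-gap "defect" term, are invented for illustration.

```python
import numpy as np

def lorentz_epsilon(E, eps_inf, oscillators):
    """Complex dielectric function as a sum of Lorentz oscillators;
    `oscillators` is a list of (amplitude, center_energy, broadening)
    in eV. Defect states appear as sub-gap oscillators in the SE fit."""
    E = np.asarray(E, dtype=complex)
    eps = np.full_like(E, eps_inf)
    for A, E0, G in oscillators:
        eps += A * E0**2 / (E0**2 - E**2 - 1j * G * E)
    return eps

E = np.linspace(0.5, 6.0, 500)   # photon energy, eV
# main absorption near the band edge plus a hypothetical sub-gap
# defect level; all parameters are illustrative only
eps = lorentz_epsilon(E, 2.1, [(3.0, 4.2, 0.5), (0.2, 2.4, 0.3)])
```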

  11. A Self-Affine Multi-Fractal Wave/Turbulence Discrimination Method Using Data from Single Point Fast Response Sensors in a Nocturnal Atmospheric Boundary Layer

    DTIC Science & Technology

    1992-04-10

    ...and passive tracer concentrations, and their cross correlations have generally been used to estimate the magnitude of dispersive atmospheric transport... of gravity waves and turbulence. ...unstable, i.e., strange. For waves or even limit cycle motion about fixed attractors, self-similarity does not occur. Pertinent to time series analysis, this...

  12. Improved Methodology for Developing Cost Uncertainty Models for Naval Vessels

    DTIC Science & Technology

    2009-04-22

    ...Deegan, 2007). Risk cannot be assessed with a point estimate, as it represents a single value that serves as a best guess for the parameter to be... or stakeholders (Deegan & Fields, 2007). This paper analyzes the current NAVSEA 05C Cruiser (CG(X)) probabilistic cost model including data... provided by Mr. Chris Deegan and his CG(X) analysts. The CG(X) model encompasses all factors considered for cost of the entire program, including...

  13. Range-Depth Tracking of Sounds from a Single-Point Deployment by Exploiting the Deep-Water Sound Speed Minimum

    DTIC Science & Technology

    2014-09-30

    ...beaked whales, and shallow-diving mysticetes, with a focus on humpback whales. ...obtained via large-aperture vertical array techniques (for humpback whales). APPROACH: The experimental approach used by this project uses data... m depth. The motivation behind these multiple deployments is that multiple techniques can be used to estimate humpback whale call position, and...

  14. Fracture characterization and fracture-permeability estimation at the underground research laboratory in southeastern Manitoba, Canada

    USGS Publications Warehouse

    Paillet, Frederick L.

    1988-01-01

    Various conventional geophysical well logs were obtained in conjunction with acoustic tube-wave amplitude and experimental heat-pulse flowmeter measurements in two deep boreholes in granitic rocks on the Canadian Shield in southeastern Manitoba. The objective of this study is the development of measurement techniques and data processing methods for characterization of rock volumes that might be suitable for hosting a nuclear waste repository. One borehole, WRA1, intersected several major fracture zones and was suitable for testing quantitative permeability estimation methods. The other borehole, URL13, appeared to intersect almost no permeable fractures; it was suitable for testing methods for the characterization of rocks of very small permeability and uniform thermo-mechanical properties in a potential repository horizon. Epithermal neutron, acoustic transit time, and single-point resistance logs provided useful, qualitative indications of fractures in the extensively fractured borehole, WRA1. A single-point log indicates both weathering and the degree of opening of a fracture-borehole intersection. All logs indicate the large intervals of mechanically and geochemically uniform, unfractured granite below depths of 300 m in the relatively unfractured borehole, URL13. Some indications of minor fracturing were identified in that borehole, with one possible fracture at a depth of about 914 m producing a major acoustic waveform anomaly. Comparison of acoustic tube-wave attenuation with models of tube-wave attenuation in infinite fractures of given aperture provides permeability estimates ranging from equivalent single-fracture apertures of less than 0.01 mm to apertures of >0.5 mm. One possible fracture anomaly in borehole URL13 at a depth of about 914 m corresponds with a thin mafic dike on the core, where unusually large acoustic contrast may have produced the observed waveform anomaly. No indications of naturally occurring flow existed in borehole URL13; however, flowmeter measurements indicated flow at <0.05 L/min from the upper fracture zones in borehole WRA1 to deeper fractures at depths below 800 m. (Author's abstract)

  15. A comparison of fisheries biological reference points estimated from temperature-specific multi-species and single-species climate-enhanced stock assessment models

    NASA Astrophysics Data System (ADS)

    Holsman, Kirstin K.; Ianelli, James; Aydin, Kerim; Punt, André E.; Moffitt, Elizabeth A.

    2016-12-01

    Multi-species statistical catch at age models (MSCAA) can quantify interacting effects of climate and fisheries harvest on species populations, and evaluate management trade-offs for fisheries that target several species in a food web. We modified an existing MSCAA model to include temperature-specific growth and predation rates and applied the modified model to three fish species, walleye pollock (Gadus chalcogrammus), Pacific cod (Gadus macrocephalus) and arrowtooth flounder (Atheresthes stomias), from the eastern Bering Sea (USA). We fit the model to data from 1979 through 2012, with and without trophic interactions and temperature effects, and use projections to derive single- and multi-species biological reference points (BRP and MBRP, respectively) for fisheries management. The multi-species model achieved a higher over-all goodness of fit to the data (i.e. lower negative log-likelihood) for pollock and Pacific cod. Variability from water temperature typically resulted in 5-15% changes in spawning, survey, and total biomasses, but did not strongly impact recruitment estimates or mortality. Despite this, inclusion of temperature in projections did have a strong effect on BRPs, including recommended yield, which were higher in single-species models for Pacific cod and arrowtooth flounder that included temperature compared to the same models without temperature effects. While the temperature-driven multi-species model resulted in higher yield MBRPs for arrowtooth flounder than the same model without temperature, we did not observe the same patterns in multi-species models for pollock and Pacific cod, where variability between harvest scenarios and predation greatly exceeded temperature-driven variability in yield MBRPs. Annual predation on juvenile pollock (primarily cannibalism) in the multi-species model was 2-5 times the annual harvest of adult fish in the system, thus predation represents a strong control on population dynamics that exceeds temperature-driven changes to growth and is attenuated through harvest-driven reductions in predator populations. Additionally, although we observed differences in spawning biomasses at the accepted biological catch (ABC) proxy between harvest scenarios and single- and multi-species models, discrepancies in spawning stock biomass estimates did not translate to large differences in yield. We found that multi-species models produced higher estimates of combined yield for aggregate maximum sustainable yield (MSY) targets than single species models, but were more conservative than single-species models when individual MSY targets were used, with the exception of scenarios where minimum biomass thresholds were imposed. Collectively our results suggest that climate and trophic drivers can interact to affect MBRPs, but for prey species with high predation rates, trophic- and management-driven changes may exceed direct effects of temperature on growth and predation. Additionally, MBRPs are not inherently more conservative than single-species BRPs. This framework provides a basis for the application of MSCAA models for tactical ecosystem-based fisheries management decisions under changing climate conditions.

  16. Ex Post Facto Monte Carlo Variance Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, Thomas E.

    The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.

  17. Generation of brain pseudo-CTs using an undersampled, single-acquisition UTE-mDixon pulse sequence and unsupervised clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Kuan-Hao; Hu, Lingzhi; Traughber, Melanie

    Purpose: MR-based pseudo-CT has an important role in MR-based radiation therapy planning and PET attenuation correction. The purpose of this study is to establish a clinically feasible approach, including image acquisition, correction, and CT formation, for pseudo-CT generation of the brain using a single-acquisition, undersampled ultrashort echo time (UTE)-mDixon pulse sequence. Methods: Nine patients were recruited for this study. For each patient, a 190-s, undersampled, single-acquisition UTE-mDixon sequence of the brain was acquired (TE = 0.1, 1.5, and 2.8 ms). A novel method of retrospective trajectory correction of the free induction decay (FID) signal was performed based on point-spread functions of three external MR markers. Two-point Dixon images were reconstructed using the first and second echo data (TE = 1.5 and 2.8 ms). R2* images (1/T2*) were then estimated and were used to provide bone information. Three image features, i.e., Dixon-fat, Dixon-water, and R2*, were used for unsupervised clustering. Five tissue clusters, i.e., air, brain, fat, fluid, and bone, were estimated using the fuzzy c-means (FCM) algorithm. A two-step, automatic tissue-assignment approach was proposed and designed according to the prior information of the given feature space. Pseudo-CTs were generated by a voxelwise linear combination of the membership functions of the FCM. A low-dose CT was acquired for each patient and was used as the gold standard for comparison. Results: The contrast and sharpness of the FID images were improved after trajectory correction was applied. The mean of the estimated trajectory delay was 0.774 μs (max: 1.350 μs; min: 0.180 μs). The FCM-estimated centroids of different tissue types showed a distinguishable pattern for different tissues, and significant differences were found between the centroid locations of different tissue types. Pseudo-CT can provide additional skull detail and has low bias and absolute error of estimated CT numbers of voxels (−22 ± 29 HU and 130 ± 16 HU) when compared to low-dose CT. Conclusions: The MR features generated by the proposed acquisition, correction, and processing methods may provide representative clustering information and could thus be used for clinical pseudo-CT generation.
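
    A minimal numpy sketch of the fuzzy c-means step and the voxelwise membership combination; the feature values, HU assignments, and cluster-to-tissue mapping below are invented (the study assigns tissues with a two-step rule driven by prior knowledge of the feature space, which is not reproduced here).

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: X is (n_voxels, n_features); returns the
    cluster centroids and the (n_voxels, c) membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U

# Features per voxel: (Dixon-fat, Dixon-water, R2*); five clusters as in
# the study (air, brain, fat, fluid, bone). All values are synthetic.
X = np.random.default_rng(1).random((1000, 3))
centroids, U = fuzzy_cmeans(X, c=5)
hu = np.array([-1000.0, 40.0, -90.0, 15.0, 700.0])  # illustrative HU only
pseudo_ct = U @ hu   # voxelwise linear combination of memberships
```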

  18. An analysis of particle track effects on solid mammalian tissues

    NASA Technical Reports Server (NTRS)

    Todd, P.; Clarkson, T. W. (Principal Investigator)

    1992-01-01

    Relative biological effectiveness (RBE) and quality factor (Q) at extreme values of linear energy transfer (LET) have been determined on the basis of experiments with single-cell systems and specific tissue responses. In typical single-cell systems, each heavy particle (Ar or Fe) passes through a single cell or no cell. In experiments on animal tissues, however, each heavy particle passes through several cells, and the LET can exceed 200 keV μm⁻¹ in every cell. In most laboratory animal tissue systems, however, only a small portion of the hit cells are capable of expressing the end-point being measured, such as cell killing, mutation or carcinogenesis. The following question was therefore addressed: do RBEs and Q factors derived from single-cell experiments properly account for the damage at high LET when multiple cells are hit by HZE tracks? A review is offered in which measured radiation effects and known tissue properties are combined to estimate, on the one hand, the number of cells at risk, p3n, per track, where n is the number of cells per track based on tissue and organ geometry, and p3 is the probability that a cell in the track is capable of expressing the experimental end-point. On the other hand, the tissue and single-cell responses are compared by determining the ratio RBE in tissue/RBE in corresponding single cells. Experimental data from the literature indicate that tissue RBEs at very high LET (Fe and Ar ions) are higher than corresponding single-cell RBEs, especially in tissues in which p3n is high.

  19. Identifying, Assessing, and Mitigating Risk of Single-Point Inspections on the Space Shuttle Reusable Solid Rocket Motor

    NASA Technical Reports Server (NTRS)

    Greenhalgh, Phillip O.

    2004-01-01

    In the production of each Space Shuttle Reusable Solid Rocket Motor (RSRM), over 100,000 inspections are performed. ATK Thiokol Inc. reviewed these inspections to ensure a robust inspection system is maintained. The principal effort within this endeavor was the systematic identification and evaluation of inspections considered to be single-point. Single-point inspections are those accomplished on components, materials, and tooling by only one person, involving no other check. The purpose was to more accurately characterize risk and ultimately address and/or mitigate risk associated with single-point inspections. After the initial review of all inspections and identification/assessment of single-point inspections, review teams applied risk prioritization methodology similar to that used in a Process Failure Modes Effects Analysis to derive a Risk Prioritization Number for each single-point inspection. After the prioritization of risk, all single-point inspection points determined to have significant risk were provided either with risk-mitigating actions or rationale for acceptance. This effort gave confidence to the RSRM program that the correct inspections are being accomplished, that there is appropriate justification for those that remain as single-point inspections, and that risk mitigation was applied to further reduce risk of higher risk single-point inspections. This paper examines the process, results, and lessons learned in identifying, assessing, and mitigating risk associated with single-point inspections accomplished in the production of the Space Shuttle RSRM.

  20. Variation of ultrasound image lateral spectrum with assumed speed of sound and true scatterer density.

    PubMed

    Gyöngy, Miklós; Kollár, Sára

    2015-02-01

    One method of estimating sound speed in diagnostic ultrasound imaging consists of choosing the speed of sound that generates the sharpest image, as evaluated by the lateral frequency spectrum of the squared B-mode image. In the current work, simulated and experimental data on a typical (47 mm aperture, 3.3-10.0 MHz response) linear array transducer are used to investigate the accuracy of this method. A range of candidate speeds of sound (1240-1740 m/s) was used, with a true speed of sound of 1490 m/s in simulations and 1488 m/s in experiments. Simulations of single point scatterers and two interfering point scatterers at various locations with respect to each other gave estimate errors of 0.0-2.0%. Simulations and experiments of scatterer distributions with a mean scatterer spacing of at least 0.5 mm gave estimate errors of 0.1-4.0%. In the case of lower scatterer spacing, the speed of sound estimates become unreliable due to a decrease in contrast of the sharpness measure between different candidate speeds of sound. This suggests that in estimating speed of sound in tissue, the region of interest should be dominated by a few, sparsely spaced scatterers. Conversely, the decreasing sensitivity of the sharpness measure to speed of sound errors for higher scatterer concentrations suggests a potential method for estimating mean scatterer spacing. Copyright © 2014 Elsevier B.V. All rights reserved.
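
    A minimal sketch of the selection loop, assuming a `beamform(c)` routine (a hypothetical placeholder, not implemented here) that reconstructs a B-mode image for a candidate sound speed; sharpness is scored by the lateral spectral centroid of the squared image, one simple proxy for the paper's lateral-spectrum measure. The demo fakes beamforming with speed-error-dependent blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lateral_sharpness(bmode_sq):
    """Sharpness of a squared B-mode image: spectral centroid of the
    lateral (along-array, axis=1) spatial power spectrum."""
    spec = np.abs(np.fft.rfft(bmode_sq, axis=1)) ** 2
    spec = spec.sum(axis=0)
    k = np.arange(spec.size)
    return np.sum(k * spec) / np.sum(spec)

def estimate_sound_speed(candidate_speeds, beamform):
    """Pick the candidate speed whose reconstruction is laterally
    sharpest; `beamform(c)` must return the image reconstructed
    assuming sound speed c (delay-and-sum or similar)."""
    scores = [lateral_sharpness(beamform(c) ** 2) for c in candidate_speeds]
    return candidate_speeds[int(np.argmax(scores))]

# Toy demo: "beamforming" that blurs in proportion to the speed error,
# with a few sparsely spaced scatterers (the favorable case reported).
truth = np.zeros((64, 64))
truth[32, ::8] = 1.0
demo = lambda c: gaussian_filter1d(truth, sigma=0.5 + abs(c - 1490) / 50,
                                   axis=1)
print(estimate_sound_speed(np.arange(1240, 1741, 50), demo))  # -> 1490
```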

  1. Experimental aerodynamic performance of advanced 40 deg-swept 10-blade propeller model at Mach 0.6 to 0.85

    NASA Technical Reports Server (NTRS)

    Mitchell, Glenn A.

    1988-01-01

    A propeller designated SR-6, designed with 40 deg of sweep and 10 blades to cruise at Mach 0.8 at an altitude of 10.7 km (35,000 ft), was tested in the NASA Lewis Research Center's 8- by 6-Foot Wind Tunnel. This propeller was one of a series of advanced single-rotation propeller models designed and tested as part of the NASA Advanced Turboprop Project. Design-point net efficiency was almost constant to Mach 0.75 but fell above this speed more rapidly than that of any previously tested advanced propeller. Alternative spinners that further reduced the near-hub interblade Mach numbers and relieved the observed hub choking improved performance above Mach 0.75. One spinner attained estimated SR-6 design-point net efficiencies of 80.6 percent at Mach 0.75 and 79.2 percent at Mach 0.8, higher than the measured performance of any previously tested advanced single-rotation propeller at these speeds.

  2. Quantum communications system with integrated photonic devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordholt, Jane E.; Peterson, Charles Glen; Newell, Raymond Thorson

    Security is increased in quantum communication (QC) systems lacking a true single-photon laser source by encoding a transmitted optical signal with two or more decoy-states. A variable attenuator or amplitude modulator randomly imposes average photon values onto the optical signal based on data input and the predetermined decoy-states. By measuring and comparing photon distributions for a received QC signal, a single-photon transmittance is estimated. Fiber birefringence is compensated by applying polarization modulation. A transmitter can be configured to transmit in conjugate polarization bases whose states of polarization (SOPs) can be represented as equidistant points on a great circle on the Poincaré sphere, so that the received SOPs are mapped to equidistant points on a great circle and routed to corresponding detectors. Transmitters are implemented in quantum communication cards and can be assembled from micro-optical components, or transmitter components can be fabricated as part of a monolithic or hybrid chip-scale circuit.

  3. Radar volume reflectivity estimation using an array of ground-based rainfall drop size detectors

    NASA Astrophysics Data System (ADS)

    Lane, John; Merceret, Francis; Kasparis, Takis; Roy, D.; Muller, Brad; Jones, W. Linwood

    2000-08-01

    Rainfall drop size distribution (DSD) measurements made by single disdrometers at isolated ground sites have traditionally been used to estimate the transformation between weather radar reflectivity Z and rainfall rate R. Despite the immense disparity in sampling geometries, the resulting Z-R relation obtained by these single point measurements has historically been important in the study of applied radar meteorology. Simultaneous DSD measurements made at several ground sites within a microscale area may be used to improve the estimate of radar reflectivity in the air volume surrounding the disdrometer array. By applying the equations of motion for non-interacting hydrometeors, a volume estimate of Z is obtained from the array of ground-based disdrometers by first calculating a 3D drop size distribution. The 3D-DSD model assumes that only gravity and terminal velocity due to atmospheric drag within the sampling volume influence hydrometeor dynamics. The sampling volume is characterized by wind velocities, which are input parameters to the 3D-DSD model, composed of vertical and horizontal components. Reflectivity data from four consecutive WSR-88D volume scans, acquired during a thunderstorm near Melbourne, FL on June 1, 1997, are compared to data processed using the 3D-DSD model and data from three ground-based disdrometers of a microscale array.
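
    The Z and R moments of a measured DSD follow textbook formulas: Z = Σ N(D)D⁶ΔD and R = (π/6)Σ N(D)D³v(D)ΔD with an empirical terminal-velocity law. A minimal sketch, assuming the Atlas-Ulbrich power law v = 3.78 D^0.67 m/s and using a Marshall-Palmer DSD as a sanity check:

```python
import numpy as np

def reflectivity_dbz(n_d, d_mm, dd_mm):
    """Reflectivity factor Z = sum N(D) D^6 dD [mm^6 m^-3], in dBZ;
    n_d is the concentration per size bin [m^-3 mm^-1]."""
    z = np.sum(n_d * d_mm**6 * dd_mm)
    return 10.0 * np.log10(z)

def rain_rate(n_d, d_mm, dd_mm):
    """Rain rate R [mm/h] from the same DSD, with the terminal-velocity
    law v(D) = 3.78 D^0.67 m/s (D in mm)."""
    v = 3.78 * d_mm**0.67
    # volume flux (pi/6) D^3 v N(D) dD in mm^3 m^-2 s^-1 -> mm/h
    return (np.pi / 6.0) * np.sum(n_d * d_mm**3 * v * dd_mm) * 3.6e-3

# Marshall-Palmer DSD for R ~ 5 mm/h; expect roughly Z = 200 R^1.6
d = np.arange(0.1, 6.0, 0.1)
n0, lam = 8000.0, 4.1 * 5.0 ** -0.21
n_d = n0 * np.exp(-lam * d)
print(reflectivity_dbz(n_d, d, 0.1), rain_rate(n_d, d, 0.1))
```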

  4. Rock falls from Glacier Point above Camp Curry, Yosemite National Park, California

    USGS Publications Warehouse

    Wieczorek, Gerald F.; Snyder, James B.

    1999-01-01

    A series of rock falls from the north face of Glacier Point above Camp Curry, Yosemite National Park, California, have caused reexamination of the rock-fall hazard because beginning in June, 1999 a system of cracks propagated through a nearby rock mass outlining a future potential rock fall. If the estimated volume of the potential rock fall fails as a single piece, there could be a risk from rock-fall impact and airborne rock debris to cabins in Camp Curry. The role of joint plane orientation and groundwater pressure in the fractured rock mass are discussed in light of the pattern of developing cracks and potential modes of failure.

  5. Evaluation of a high framerate multi-exposure laser speckle contrast imaging setup

    NASA Astrophysics Data System (ADS)

    Hultman, Martin; Fredriksson, Ingemar; Strömberg, Tomas; Larsson, Marcus

    2018-02-01

    We present a first evaluation of a new multi-exposure laser speckle contrast imaging (MELSCI) system for assessing spatial variations in microcirculatory perfusion. The MELSCI system is based on a 1000 frames per second 1-megapixel camera connected to a field-programmable gate array (FPGA) capable of producing MELSCI data in real time. The imaging system is evaluated against a single-point laser Doppler flowmetry (LDF) system during occlusion-release provocations of the arm in five subjects. Perfusion is calculated from MELSCI data using current state-of-the-art inverse models. The analysis displayed good agreement between measured and modeled data, with an average error below 6%. This strongly indicates that the applied model is capable of accurately describing the MELSCI data and that the acquired data are of high quality. Comparing readings from the occlusion-release provocation showed that the MELSCI perfusion was significantly correlated (R = 0.83) with the single-point LDF perfusion, clearly outperforming perfusion estimates based on a single exposure time. We conclude that the MELSCI system provides blood flow images of enhanced quality, taking us one step closer to a system that can accurately monitor dynamic changes in skin perfusion over a large area in real time.
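
    A minimal sketch of how multi-exposure contrast can be synthesized from a 1000 fps stack, as the described FPGA pipeline presumably does: consecutive 1 ms frames are summed to emulate longer exposures, and K = σ/μ is computed per exposure. A temporal-contrast variant is used here for brevity (spatial windows are also common), and all data are synthetic.

```python
import numpy as np

def speckle_contrast(frames):
    """Per-pixel temporal speckle contrast K = sigma/mu of a stack."""
    return frames.std(axis=0, ddof=1) / frames.mean(axis=0)

def synthetic_exposures(stack_1ms, exposures_ms):
    """Emulate longer exposures from a 1000 fps stack by summing
    consecutive 1 ms frames, then compute K for each exposure time."""
    out = {}
    for T in exposures_ms:
        n = stack_1ms.shape[0] // T
        summed = (stack_1ms[: n * T]
                  .reshape(n, T, *stack_1ms.shape[1:])
                  .sum(axis=1))
        out[T] = speckle_contrast(summed)
    return out

# e.g. 64 synthetic frames of 1 ms each from the 1000 fps camera
stack = np.random.default_rng(5).poisson(100, (64, 32, 32)).astype(float)
K = synthetic_exposures(stack, exposures_ms=[1, 2, 4, 8])
```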

  6. Robust organelle size extractions from elastic scattering measurements of single cells (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cannaday, Ashley E.; Draham, Robert; Berger, Andrew J.

    2016-04-01

    The goal of this project is to estimate non-nuclear organelle size distributions in single cells by measuring angular scattering patterns and fitting them with Mie theory. Simulations have indicated that the large relative size distribution of organelles (mean:width≈2) leads to unstable Mie fits unless scattering is collected at polar angles less than 20 degrees. Our optical system has therefore been modified to collect angles down to 10 degrees. Initial validations will be performed on polystyrene bead populations whose size distributions resemble those of cell organelles. Unlike for the narrow bead distributions often used for calibration, for these broad distributions we expect an order-of-magnitude improvement in the stability of the size estimates as the minimum angle decreases from 20 to 10 degrees. Scattering patterns will then be acquired and analyzed from single cells (EMT6 mouse cancer cells), both fixed and live, at multiple time points. Fixed cells, with no changes in organelle sizes over time, will be measured to determine the fluctuation level in estimated size distribution due to measurement imperfections alone. Subsequent measurements on live cells will determine whether there is a higher level of fluctuation that could be attributed to dynamic changes in organelle size. Studies on unperturbed cells are precursors to ones in which the effects of exogenous agents are monitored over time.

  7. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirements of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.

  8. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.

  9. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    PubMed

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has two parameters, one for delay discounting and one for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
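
    A minimal sketch of such a random utility model: hyperbolic discounting supplies the option values, and a logistic choice rule with a variability parameter turns them into choice probabilities, fitted by maximum likelihood. The trial data, starting values, and the exact form of the choice rule below are assumptions for illustration, not the authors' specification.

      import numpy as np
      from scipy.optimize import minimize

      # Illustrative trials: immediate amount, delayed amount, delay in days,
      # and the observed choice (1 = delayed option chosen).
      a_now = np.array([50., 40., 30., 45., 20.])
      a_del = np.array([100., 100., 100., 100., 100.])
      delay = np.array([7., 30., 90., 180., 365.])
      y = np.array([1, 1, 0, 0, 0])

      def neg_log_lik(theta):
          k, s = np.exp(theta)                    # log-parameterized: k, s > 0
          v_del = a_del / (1.0 + k * delay)       # hyperbolic discounted value
          p = 1.0 / (1.0 + np.exp(-(v_del - a_now) / s))  # logistic choice rule
          p = np.clip(p, 1e-12, 1.0 - 1e-12)
          return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

      fit = minimize(neg_log_lik, x0=np.log([0.01, 5.0]), method="Nelder-Mead")
      k_hat, s_hat = np.exp(fit.x)
      print(f"discount rate k = {k_hat:.4f}, choice variability s = {s_hat:.2f}")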

  10. TREFEX: Trend Estimation and Change Detection in the Response of MOX Gas Sensors

    PubMed Central

    Pashami, Sepideh; Lilienthal, Achim J.; Schaffernicht, Erik; Trincavelli, Marco

    2013-01-01

    Many applications of metal oxide gas sensors can benefit from reliable algorithms to detect significant changes in the sensor response. Significant changes indicate a change in the emission modality of a distant gas source and occur due to a sudden change of concentration or exposure to a different compound. As a consequence of turbulent gas transport and the relatively slow response and recovery times of metal oxide sensors, their response in open sampling configuration exhibits strong fluctuations that interfere with the changes of interest. In this paper we introduce TREFEX, a novel change point detection algorithm designed specifically for metal oxide sensors in an open sampling system. TREFEX models the response of MOX sensors as a piecewise exponential signal and considers the junctions between consecutive exponentials as change points. We formulate non-linear trend filtering and change point detection as a parameter-free convex optimization problem for single sensors and sensor arrays. We evaluate the performance of the TREFEX algorithm experimentally for different metal oxide sensors and several gas emission profiles. A comparison with the previously proposed GLR method shows clearly superior performance of the TREFEX algorithm, both in detection performance and in estimating the change time. PMID:23736853
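
    The core idea, fitting a piecewise-exponential signal, can be sketched as l1 trend filtering on the log response: the fit is piecewise linear in the log domain and its kinks are candidate change points. The smoothing weight below is hand-chosen for illustration; the paper's actual formulation is parameter-free.

      import numpy as np
      import cvxpy as cp

      def piecewise_exponential_trend(y, lam=50.0):
          """l1 trend filter on the log of a (positive) sensor response: the
          fitted trend is piecewise linear in the log domain, i.e. piecewise
          exponential in the original domain, and nonzero second differences
          mark candidate change points."""
          x = np.log(np.asarray(y, dtype=float))
          n = len(x)
          D = np.diff(np.eye(n), n=2, axis=0)      # second-difference operator
          z = cp.Variable(n)
          cp.Problem(cp.Minimize(0.5 * cp.sum_squares(z - x)
                                 + lam * cp.norm1(D @ z))).solve()
          kinks = np.where(np.abs(D @ z.value) > 1e-4)[0] + 1
          return np.exp(z.value), kinks            # trend and change-point indices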

  11. Bootstrapping the (A1, A2) Argyres-Douglas theory

    NASA Astrophysics Data System (ADS)

    Cornagliotto, Martina; Lemos, Madalena; Liendo, Pedro

    2018-03-01

    We apply bootstrap techniques in order to constrain the CFT data of the (A1, A2) Argyres-Douglas theory, which is arguably the simplest of the Argyres-Douglas models. We study the four-point function of its single Coulomb branch chiral ring generator and put numerical bounds on the low-lying spectrum of the theory. Of particular interest is an infinite family of semi-short multiplets labeled by the spin ℓ. Although the conformal dimensions of these multiplets are protected, their three-point functions are not. Using the numerical bootstrap we impose rigorous upper and lower bounds on their values for spins up to ℓ = 20. Through a recently obtained inversion formula, we also estimate them for sufficiently large ℓ, and the comparison of both approaches shows consistent results. We also give a rigorous numerical range for the OPE coefficient of the next operator in the chiral ring, and estimates for the dimension of the first R-symmetry neutral non-protected multiplet for small spin.

  12. A real-time ionospheric model based on GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen

    2013-09-01

    This paper proposes a method for real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). First, the ionospheric TEC and the receiver's Differential Code Biases (DCBs) are estimated from the undifferenced raw observations in real time; the ionospheric TEC model is then established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, high-precision phase observations are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective; the real-time ionosphere and DCB results are very consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.

  13. The effects of contraception on female poverty.

    PubMed

    Browne, Stephanie P; LaLumia, Sara

    2014-01-01

    Poverty rates are particularly high among households headed by single women, and childbirth is often the event preceding these households' poverty spells. This paper examines the relationship between legal access to the birth control pill and female poverty. We rely on exogenous cross-state variation in the year in which oral contraception became legally available to young, single women. Using census data from 1960 to 1990, we find that having legal access to the birth control pill by age 20 significantly reduces the probability that a woman is subsequently in poverty. We estimate that early legal access to oral contraception reduces female poverty by 0.5 percentage points, even when controlling for completed education, employment status, and household composition.

  14. Defect sensitive etching of hexagonal boron nitride single crystals

    NASA Astrophysics Data System (ADS)

    Edgar, J. H.; Liu, S.; Hoffman, T.; Zhang, Yichao; Twigg, M. E.; Bassim, Nabil D.; Liang, Shenglong; Khan, Neelam

    2017-12-01

    Defect sensitive etching (DSE) was developed to estimate the density of non-basal plane dislocations in hexagonal boron nitride (hBN) single crystals. The crystals employed in this study were precipitated by slowly cooling (2-4 °C/h) a nickel-chromium flux saturated with hBN from 1500 °C under 1 bar of flowing nitrogen. On the (0001) planes, hexagonal-shaped etch pits were formed by etching the crystals in a eutectic mixture of NaOH and KOH between 450 °C and 525 °C for 1-2 min. There were three types of pits: pointed bottom, flat bottom, and mixed shape pits. Cross-sectional transmission electron microscopy revealed that the pointed bottom etch pits examined were associated with threading dislocations. All of these dislocations had an a-type Burgers vector (i.e., they were edge dislocations, since the line direction is perpendicular to the [21̄1̄0]-type direction). The pit widths were much wider than the pit depths as measured by atomic force microscopy, indicating that the lateral etch rate was much faster than the vertical etch rate. From an Arrhenius plot of the log of the etch rate versus the inverse temperature, the activation energy was approximately 60 kJ/mol. This work demonstrates that DSE is an effective method for locating threading dislocations in hBN and estimating their densities.
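
    For reference, an activation energy of this kind follows directly from the slope of ln(rate) against 1/T. A minimal sketch, with made-up etch rates chosen to reproduce roughly the reported 60 kJ/mol:

      import numpy as np

      # Illustrative etch rates (arbitrary units) at the etching temperatures.
      T_C = np.array([450., 475., 500., 525.])
      rate = np.array([1.00, 1.40, 1.91, 2.56])

      inv_T = 1.0 / (T_C + 273.15)                 # 1/K
      slope, _ = np.polyfit(inv_T, np.log(rate), 1)
      Ea = -slope * 8.314                          # J/mol; ln r = ln A - Ea/(R T)
      print(f"activation energy ~ {Ea / 1000:.0f} kJ/mol")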

  15. MEASURING COLLISIONLESS DAMPING IN HELIOSPHERIC PLASMAS USING FIELD–PARTICLE CORRELATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, K. G.; Howes, G. G.

    2016-08-01

    An innovative field–particle correlation technique is proposed that uses single-point measurements of the electromagnetic fields and particle velocity distribution functions to investigate the net transfer of energy from fields to particles associated with the collisionless damping of turbulent fluctuations in weakly collisional plasmas, such as the solar wind. In addition to providing a direct estimate of the local rate of energy transfer between fields and particles, it provides vital new information about the distribution of that energy transfer in velocity space. This velocity-space signature can potentially be used to identify the dominant collisionless mechanism responsible for the damping of turbulent fluctuations in the solar wind. The application of this novel field–particle correlation technique is illustrated using the simplified case of the Landau damping of Langmuir waves in an electrostatic 1D-1V Vlasov–Poisson plasma, showing that the procedure both estimates the local rate of energy transfer from the electrostatic field to the electrons and indicates the resonant nature of this interaction. Modifications of the technique to enable single-point spacecraft measurements of fields and particles to diagnose the collisionless damping of turbulent fluctuations in the solar wind are discussed, yielding a method with the potential to transform our ability to maximize the scientific return from current and upcoming spacecraft missions, such as the Magnetospheric Multiscale (MMS) and Solar Probe Plus missions.
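
    In discrete form, the diagnostic amounts to time-averaging the product of the measured field and the velocity derivative of the distribution function. The sketch below is a schematic, unnormalized version for the 1D-1V electrostatic case; the array shapes, windowing, and charge-sign convention are assumptions, not the paper's exact expression.

      import numpy as np

      def field_particle_correlation(f_vt, E_t, v, q, tau_steps):
          """Discrete field-particle correlation for the 1D-1V electrostatic
          case: sliding time average, over tau_steps samples, of
          -q * (v**2 / 2) * df/dv * E, giving the velocity-space signature of
          the net field-to-particle energy transfer."""
          dfdv = np.gradient(f_vt, v, axis=1)           # f_vt has shape (n_t, n_v)
          integrand = -q * 0.5 * v**2 * dfdv * E_t[:, None]
          kernel = np.ones(tau_steps) / tau_steps       # correlation interval
          return np.apply_along_axis(
              lambda s: np.convolve(s, kernel, mode="valid"), 0, integrand)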

  16. Prescription-drug-related risk in driving: comparing conventional and lasso shrinkage logistic regressions.

    PubMed

    Avalos, Marta; Adroher, Nuria Duran; Lagarde, Emmanuel; Thiessard, Frantz; Grandvalet, Yves; Contrand, Benjamin; Orriols, Ludivine

    2012-09-01

    Large data sets with many variables provide particular challenges when constructing analytic models. Lasso-related methods provide a useful tool, although one that remains unfamiliar to most epidemiologists. We illustrate the application of lasso methods in an analysis of the impact of prescribed drugs on the risk of a road traffic crash, using a large French nationwide database (PLoS Med 2010;7:e1000366). In the original case-control study, the authors analyzed each exposure separately. We use the lasso method, which can simultaneously perform estimation and variable selection in a single model. We compare point estimates and confidence intervals using (1) a separate logistic regression model for each drug with a Bonferroni correction and (2) lasso shrinkage logistic regression analysis. Shrinkage regression had little effect on (bias-corrected) point estimates, but led to less conservative results, noticeably for drugs with moderate levels of exposure. Carbamates, carboxamide derivative and fatty acid derivative antiepileptics, drugs used in opioid dependence, and mineral supplements of potassium showed stronger associations. Lasso is a relevant method for the analysis of databases with a large number of exposures and can be recommended as an alternative to conventional strategies.
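
    A compact sketch of the second strategy with scikit-learn: the L1 penalty shrinks most coefficients exactly to zero, performing selection and estimation in one model. The data, penalty strength, and solver choice are illustrative; the paper's analysis also bias-corrects the estimates and builds confidence intervals, which this sketch omits.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, size=(1000, 50)).astype(float)  # exposure indicators
      y = rng.integers(0, 2, size=1000)                      # crash responsibility

      # C is the inverse penalty strength; in practice it would be chosen by
      # cross-validation rather than fixed a priori.
      lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
      lasso.fit(X, y)
      print("retained exposures:", np.flatnonzero(lasso.coef_[0]))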

  17. Statistics of concentrations due to single air pollution sources to be applied in numerical modelling of pollutant dispersion

    NASA Astrophysics Data System (ADS)

    Tumanov, Sergiu

    A goodness-of-fit test based on rank statistics was applied to verify the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. To this end, the pollutant concentration was treated as an integral quantity, which is acceptable if the unit of measurement is chosen properly (in this case μg m⁻³) and if account is taken of the limited accuracy of the measurements. The results of the test being satisfactory, even in the range of upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters, e.g. quantiles and cumulative probabilities of threshold concentrations being exceeded, at the grid points of a network covering the area of interest. This requires only accurate estimates of the means and variances of the concentration series, which can readily be obtained through routine air pollution dispersion modelling.
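
    Since the Eggenberger-Polya law coincides with the negative binomial distribution, the second step can be sketched as a moment fit: given a modelled mean and variance at a grid point, the exceedance probability of any integer threshold follows directly. The numbers below are illustrative.

      from scipy import stats

      def exceedance_probability(mean, var, threshold):
          """Eggenberger-Polya (negative binomial) law fitted by moments:
          p = mean/var and n = mean**2/(var - mean), which requires var > mean
          (overdispersion, as is typical of concentration series)."""
          p = mean / var
          n = mean**2 / (var - mean)
          return stats.nbinom.sf(threshold, n, p)

      # e.g. modelled mean 40 and variance 900 (integer ug/m3 units) at a grid point:
      print(exceedance_probability(40.0, 900.0, 150))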

  18. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

    The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.

  19. CAP waveform estimation from the measured electrical bioimpedance values: Patient's heart rate variability analysis.

    PubMed

    Krivoshei, A; Uuetoa, H; Min, M; Annus, P; Uuetoa, T; Lamp, J

    2015-08-01

    The paper presents an analysis of the generic transfer function (TF) between Electrical Bioimpedance (EBI), measured non-invasively on the wrist, and Central Aortic Pressure (CAP), measured invasively at the aortic root. The influence of Heart Rate (HR) variations on the generic TF and on the reconstructed CAP waveforms is investigated. The HR variation analysis is performed on data from a single patient to exclude inter-patient influences at the current research stage. A new approach for estimating the generic TF from a data ensemble is presented as well. Moreover, the influence of the choice of the cardiac period starting point is analyzed, and an empirically optimal solution for its selection is proposed.

  20. Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting

    NASA Astrophysics Data System (ADS)

    Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang

    In this paper we studied the geometry of a three-dimensional tableau from a single realist painting, Scott Fraser’s Three way vanitas (2006). The tableau contains a carefully chosen, complex arrangement of objects including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror, symmetric to the artist’s viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and for the images in the three plane mirrors depicted within the painting.

  1. Ashkin-Teller criticality and weak first-order behavior of the phase transition to a fourfold degenerate state in two-dimensional frustrated Ising antiferromagnets

    NASA Astrophysics Data System (ADS)

    Liu, R. M.; Zhuo, W. Z.; Chen, J.; Qin, M. H.; Zeng, M.; Lu, X. B.; Gao, X. S.; Liu, J.-M.

    2017-07-01

    We study the thermal phase transition of the fourfold degenerate phases (the plaquette and single-stripe states) in the two-dimensional frustrated Ising model on the Shastry-Sutherland lattice using Monte Carlo simulations. The critical Ashkin-Teller-like behavior is identified both in the plaquette phase region and the single-stripe phase region. The four-state Potts critical end points differentiating the continuous transitions from the first-order ones are estimated based on finite-size-scaling analyses. Furthermore, a similar behavior of the transition to the fourfold single-stripe phase is also observed in the anisotropic triangular Ising model. Thus, this work clearly demonstrates that the transitions to the fourfold degenerate states of two-dimensional Ising antiferromagnets exhibit similar transition behavior.

  2. A heating-superfusion platform technology for the investigation of protein function in single cells.

    PubMed

    Xu, Shijun; Ainla, Alar; Jardemark, Kent; Jesorka, Aldo; Jeffries, Gavin D M

    2015-01-06

    Here, we report on a novel approach for the study of single-cell intracellular enzyme activity at various temperatures, utilizing a localized laser heating probe in combination with a freely positionable microfluidic perfusion device. Through directed exposure of individual cells to the pore-forming agent α-hemolysin, we have controlled the membrane permeability, enabling targeted delivery of the substrate. Mildly permeabilized cells were exposed to fluorogenic substrates to monitor the activity of intracellular enzymes, while adjusting the local temperature surrounding the target cells, using an infrared laser heating system. We generated quantitative estimates for the intracellular alkaline phosphatase activity at five different temperatures in different cell lines, constructing temperature-response curves of enzymatic activity at the single-cell level. Enzymatic activity was determined rapidly after cell permeation, generating five-point temperature-response curves within just 200 s.

  3. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    PubMed Central

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second, application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
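
    A skeletal version of the point-estimation step, under stated assumptions: a Gaussian misfit to the clinical targets plus a log-normal prior on the parameters, minimized by the Nelder-Mead simplex as in the paper. The toy stand-in model and all distributional choices are placeholders, not the authors' code.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_posterior(log_params, model, targets, sigma, mu0, s0):
          """Gaussian likelihood over clinical targets + log-normal prior on
          the (positive) LPN parameters; its minimizer is the MAP estimate."""
          misfit = 0.5 * np.sum(((model(np.exp(log_params)) - targets) / sigma) ** 2)
          prior = 0.5 * np.sum(((log_params - mu0) / s0) ** 2)
          return misfit + prior

      # Toy stand-in for the 0D circuit model: two parameters, two targets.
      model = lambda p: np.array([80.0 * p[0], 25.0 * p[1]])
      targets, sigma = np.array([95.0, 14.0]), np.array([5.0, 2.0])
      mu0, s0 = np.zeros(2), np.ones(2)

      fit = minimize(neg_log_posterior, x0=mu0,
                     args=(model, targets, sigma, mu0, s0),
                     method="Nelder-Mead")             # simplex optimization
      print("MAP parameters:", np.exp(fit.x))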

  4. Speed of sound estimation for thermal monitoring using an active ultrasound element during liver ablation therapy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Younsu; Audigier, Chloé; Dillow, Austin; Cheng, Alexis; Boctor, Emad M.

    2017-03-01

    Thermal monitoring for ablation therapy faces high demands: healthy tissues must be preserved while malignant ones are removed completely. Various monitoring methods have been investigated, but exposure to radiation, cost, and inconvenience hinder the use of X-ray or MRI methods. Due to its non-invasiveness and real-time capability, ultrasound is widely used in intraoperative procedures, and ultrasound thermal monitoring methods have been developed for affordable monitoring in real time. We propose a new method for thermal monitoring using an ultrasound element. By inserting a lead zirconate titanate (PZT) element into the liver tissue to generate the ultrasound signal, the one-way time of flight from the PZT element to the ultrasound transducer is recorded. We detect the change in the speed of sound caused by the increase in temperature during ablation therapy. We performed an ex vivo experiment with liver tissue to verify the feasibility of our speed of sound estimation technique. The time-of-flight information is used in an optimization method to recover speed of sound maps during the ablation, which are then converted into temperature maps. The results show that the trend of the temperature changes matches the temperature measured at a single point. The estimation error can be decreased by using a proper curve linking the speed of sound to the temperature. The average error over time was less than 3 degrees Celsius for a bovine liver. Speed of sound estimation using a single PZT element can thus be used for thermal monitoring.
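
    The single-path core of the idea can be sketched in a few lines: the measured speed of sound is looked up in a calibration curve linking speed to temperature. The linear, monotonic table below is an assumption for illustration; real tissue curves are measured experimentally and are not monotonic over the full ablation range.

      import numpy as np

      def temperature_from_tof(tof_s, path_m, calib_speeds, calib_temps):
          """Map a one-way time of flight over a known path to temperature
          via an assumed speed-of-sound calibration curve."""
          c = path_m / tof_s                        # estimated speed of sound, m/s
          return np.interp(c, calib_speeds, calib_temps)

      speeds = np.array([1570., 1580., 1590., 1600., 1610.])   # m/s (assumed)
      temps = np.array([37., 45., 52., 60., 68.])              # deg C (assumed)
      print(temperature_from_tof(31.5e-6, 0.05, speeds, temps))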

  5. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of its mechanical effects on the differential settlement of large continuous structure foundations. Such analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method, which rigorously and efficiently integrates geotechnical investigations of different precisions and their sources of uncertainty. Individual CPT soundings were modeled as probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from the borehole experiments and the potential value of the prediction point; the posterior probability density curve of the prediction point was then calculated in a Bayesian inverse interpolation framework by numerical integration over the CPT probability density curves. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method, and the differences between normally distributed single CPT soundings and probability density curves simulated by maximum entropy theory were also discussed. It is shown that the estimation of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization of a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  6. Drug development costs when financial risk is measured using the Fama-French three-factor model.

    PubMed

    Vernon, John A; Golec, Joseph H; Dimasi, Joseph A

    2010-08-01

    In a widely cited article, DiMasi, Hansen, and Grabowski (2003) estimate the average pre-tax cost of bringing a new molecular entity to market. Their base case estimate, excluding post-marketing studies, was $802 million (in $US 2000). Strikingly, almost half of this cost (or $399 million) is the cost of capital (COC) used to fund clinical development expenses to the point of FDA marketing approval. The authors used an 11% real COC computed using the capital asset pricing model (CAPM). But the CAPM is a single-factor risk model, and multi-factor risk models are the current state of the art in finance. Using the Fama-French three-factor model, we find the cost of drug development to be higher than the earlier estimate. Copyright (c) 2009 John Wiley & Sons, Ltd.
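
    The arithmetic difference between the two risk models is easy to see in a sketch; the factor loadings and premia below are hypothetical, not the values estimated in the paper.

      # Cost of capital under the CAPM vs. the Fama-French three-factor model.
      rf = 0.03                          # real risk-free rate (assumed)
      beta, mkt = 1.1, 0.07              # CAPM beta and equity premium (assumed)
      capm_coc = rf + beta * mkt         # single-factor cost of capital

      b, s, h = 1.1, 0.4, -0.2           # loadings on MKT, SMB, HML (assumed)
      smb, hml = 0.03, 0.04              # size and value premia (assumed)
      ff3_coc = rf + b * mkt + s * smb + h * hml
      print(f"CAPM: {capm_coc:.1%}   FF3: {ff3_coc:.1%}")

    A higher COC compounds over the long clinical development period, so even a modest difference in the discount rate materially raises the capitalized cost estimate.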

  7. Study on the initial value for the exterior orientation of the mobile version

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-jing; Li, Shi-liang

    2011-10-01

    A single-camera mobile vision coordinate measurement system obtains three-dimensional coordinates at the measurement site using one camera body and a notebook computer. Obtaining accurate approximate values of the exterior orientation for the subsequent calculation is very important in the measurement process. This is a classical space resection problem, and it has been studied widely. Single-image space resection methods fall mainly into two classes: methods based on the co-angular constraint, represented by the co-angular-constraint pose estimation algorithm and the cone angle method, and the direct linear transformation (DLT). One common drawback of both classes is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation method, relatively high demands are placed on the distribution and number of control points: the control points must not all lie in the same plane, and at least six non-coplanar control points are needed, which limits the method's usefulness. The initial value directly influences the convergence and the convergence speed of the subsequent calculation. This paper linearizes the collinearity equations, including distortion terms, by Taylor series expansion to calculate the initial values of the camera exterior orientation. Experiments show that the resulting initial values are improved.

  8. Attitude control system of the Delfi-n3Xt satellite

    NASA Astrophysics Data System (ADS)

    Reijneveld, J.; Choukroun, D.

    2013-12-01

    This work is concerned with the development of the attitude control algorithms that will be implemented on board the Delfi-n3Xt nanosatellite, which is to be launched in 2013. One of the mission objectives is to demonstrate Sun pointing and three-axis stabilization. The attitude control modes and the associated algorithms are described. The control authority is shared between three body-mounted magnetorquers (MTQ) and three orthogonal reaction wheels. The attitude information is retrieved from Sun vector measurements, Earth magnetic field measurements, and gyro measurements. The control design is a trade-off between simplicity and performance. Stabilization and Sun pointing are achieved via the successive application of the classical B-dot control law and a quaternion feedback control. For the purpose of Sun pointing, a simple quaternion estimation scheme is implemented based on geometric arguments, alleviating the need for a costly optimal filtering algorithm; only a single line-of-sight (LoS) measurement is required, here the Sun vector. Beyond the three-axis Sun pointing mode, spinning Sun pointing modes are also described and used as demonstration modes. The three-axis Sun pointing mode requires reaction wheels and magnetic control, while the spinning control modes are implemented with magnetic control only. In addition, a simple scheme for angular rate estimation using Sun vector and Earth magnetic field measurements is tested for the case of gyro failures. The performance of the various control modes is illustrated via extensive simulations over time spans of several orbits. The simulated models of the dynamical space environment, the attitude hardware, and the onboard controller logic use realistic assumptions. All control modes satisfy the minimal Sun pointing requirements for power generation.
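
    As a sketch of the detumbling part, the classical B-dot law commands a magnetic dipole opposing the measured rate of change of the body-frame field; the gain and dipole limit below are illustrative, not Delfi-n3Xt flight values.

      import numpy as np

      def bdot_dipole(B_now, B_prev, dt, k=1e4, m_max=0.2):
          """Classical B-dot detumbling: command a dipole opposing the
          finite-difference rate of change of the body-frame magnetic field,
          then saturate each magnetorquer axis."""
          B_dot = (B_now - B_prev) / dt              # T/s, from magnetometer samples
          return np.clip(-k * B_dot, -m_max, m_max)  # commanded dipole, A m^2

      m = bdot_dipole(np.array([2.1e-5, -0.4e-5, 1.0e-5]),
                      np.array([2.0e-5, -0.5e-5, 1.2e-5]), dt=1.0)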

  9. Co-C and Pd-C Fixed Points for the Evaluation of Facilities and Scales Realization at INRIM and NMC

    NASA Astrophysics Data System (ADS)

    Battuello, M.; Wang, L.; Girard, F.; Ang, S. H.

    2014-04-01

    Two hybrid cells for realizing the Co-C and Pd-C fixed points, constructed at the Istituto Nazionale di Ricerca Metrologica (INRIM), were used to evaluate the facilities and procedures adopted by INRIM and the National Metrology Institute of Singapore (NMC) for realizing the solid-liquid phase transitions of high-temperature fixed points and for determining their transition temperatures. Four different furnaces were used for the investigations: two single-zone furnaces, one of them of the direct-heating type, and two identical three-zone furnaces. The transition temperatures were measured at both institutes using different procedures for realizing the radiation scales: at INRIM, a scheme based on the extrapolation of fixed-point interpolated scales, and at NMC, an International Temperature Scale of 1990 (ITS-90) approach. The point of inflection (POI) of the melting curves was determined and taken as a practical representation of the melting temperature. Different methods for deriving the POI were used, and differences as large as some hundredths of a kelvin were found between the approaches. The POIs of the different melting curves were analyzed with respect to the different possible operating conditions with the aim of deriving reproducibility figures to improve the estimated uncertainty. As regards the inter-institute comparison, differences of 0.13 K and 0.29 K were found between the INRIM and NMC determinations at the Co-C and Pd-C points, respectively. Such differences are compatible with the combined standard uncertainties of the comparison, which are estimated to be 0.33 K and 0.36 K at the Co-C and Pd-C points, respectively.

  10. Precise aircraft single-point positioning using GPS post-mission orbits and satellite clock corrections

    NASA Astrophysics Data System (ADS)

    Lachapelle, G.; Cannon, M. E.; Qiu, W.; Varner, C.

    1996-09-01

    Aircraft single-point position accuracy is assessed through a comparison of single-point coordinates with corresponding DGPS-derived coordinates. The platform utilized for this evaluation is a Naval Air Warfare Center P-3 Orion aircraft. Data were collected over a period of about 40 hours, spread over six days, off Florida's East Coast in July 1994, using DGPS reference stations in Jacksonville, FL, and Warminster, PA. The analysis shows that the consistency between the aircraft coordinates obtained in single-point positioning mode and in DGPS mode is about 1 m (rms) in latitude and longitude, and 2 m (rms) in height, with instantaneous errors of up to a few metres due to the effect of the ionosphere on the single-point L1 solutions.

  11. The effects of prospective mate quality on investments in healthy body weight among single women.

    PubMed

    Harris, Matthew C; Cronin, Christopher J

    2017-02-01

    This paper examines how a single female's investment in healthy body weight is affected by the quality of single males in her marriage market. A principal concern in estimation is the presence of market-level unobserved heterogeneity that may be correlated with changes in single male quality, measured as earning potential. To address this concern, we employ a differencing strategy that normalizes the exercise behaviors of single women to those of their married counterparts. Our main results suggest that when potential mate quality in a marriage market decreases, single black women invest less in healthy body weight. For example, we find that a 10 percentage point increase in the proportion of low-quality single black males leads to a 5-10% decrease in vigorous exercise taken by single black females. Results for single white women are qualitatively similar, but not consistent across specifications. These results highlight the relationship between male and female human capital acquisition that is driven by participation in the marriage market. Our results suggest that programs designed to improve the economic prospects of single males may yield positive externalities in the form of improved health behaviors, such as more exercise, particularly for single black females. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Methodological challenges for the evaluation of clinical effectiveness in the context of accelerated regulatory approval: an overview.

    PubMed

    Woolacott, Nerys; Corbett, Mark; Jones-Diette, Julie; Hodgson, Robert

    2017-10-01

    Regulatory authorities are approving innovative therapies with limited evidence. Although this level of data is sufficient for the regulator to establish an acceptable risk-benefit balance, it is problematic for downstream health technology assessment, where assessment of cost-effectiveness requires reliable estimates of effectiveness relative to existing clinical practice. Some key issues associated with a limited evidence base include the use of data from nonrandomized studies, from small single-arm trials, or from single-center trials, and the use of surrogate end points. We examined these methodological challenges through a pragmatic review of the available literature. Methods to adjust nonrandomized studies for confounding are imperfect. The relative treatment effect generated from single-arm trials is uncertain and may be optimistic. Single-center trial results may not be generalizable. Surrogate end points, on average, overestimate treatment effects. Current methods for analyzing such data are limited, and effectiveness claims based on these suboptimal forms of evidence are likely to be subject to significant uncertainty. Assessments of cost-effectiveness, based on the modeling of such data, are likely to be subject to considerable uncertainty. This uncertainty must not be underestimated by decision makers: methods for its quantification are required and schemes to protect payers from the cost of uncertainty should be implemented. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  13. Three-Dimensional Registration for Handheld Profiling Systems Based on Multiple Shot Structured Light

    PubMed Central

    Ayaz, Shirazi Muhammad; Kim, Min Young

    2018-01-01

    In this article, a multi-view registration approach for a 3D handheld profiling system based on the multiple shot structured light technique is proposed. The multi-view registration approach is divided into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation of the proposed algorithm was performed, comparing it with other variants of ICP. The root mean square error of the ICP algorithm in registering a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552
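
    The refinement stage can be sketched as classic point-to-point ICP: alternate nearest-neighbour matching with a closed-form (Kabsch/SVD) rigid-transform solve. This is a generic baseline under stated assumptions, not the specific ICP variant benchmarked in the article.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_refine(src, dst, iters=30):
          """Point-to-point ICP for (n, 3) point clouds src and dst."""
          R, t = np.eye(3), np.zeros(3)
          tree = cKDTree(dst)
          for _ in range(iters):
              moved = src @ R.T + t
              _, idx = tree.query(moved)               # closest-point matches
              p = moved - moved.mean(0)                # centered source
              q = dst[idx] - dst[idx].mean(0)          # centered matches
              U, _, Vt = np.linalg.svd(p.T @ q)
              d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
              R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
              t_step = dst[idx].mean(0) - R_step @ moved.mean(0)
              R, t = R_step @ R, R_step @ t + t_step   # compose the increment
          return R, t                                  # dst ~ (R @ src.T).T + t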

  14. Minimum Sobolev norm interpolation of scattered derivative data

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.

    2018-07-01

    We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high order.

  15. A national assessment of underground natural gas storage: identifying wells with designs likely vulnerable to a single-point-of-failure

    NASA Astrophysics Data System (ADS)

    Michanowicz, Drew R.; Buonocore, Jonathan J.; Rowland, Sebastian T.; Konschnik, Katherine E.; Goho, Shaun A.; Bernstein, Aaron S.

    2017-05-01

    The leak of processed natural gas (PNG) from October 2015 to February 2016 from the Aliso Canyon storage facility, near Los Angeles, California, was the largest single accidental release of greenhouse gases in US history. The Interagency Task Force on Natural Gas Storage Safety and California regulators recently recommended operators phase out single-point-of-failure (SPF) well designs. Here, we develop a national dataset of UGS well activity in the continental US to assess regulatory data availability and uncertainty, and to assess the prevalence of certain well design deficiencies including single-point-of-failure designs. We identified 14 138 active UGS wells associated with 317 active UGS facilities in 29 states using regulatory and company data. State-level wellbore datasets contained numerous reporting inconsistencies that limited data concatenation. We identified 2715 active UGS wells across 160 facilities that, like the failed well at Aliso Canyon, predated the storage facility, and therefore were not originally designed for gas storage. The majority (88%) of these repurposed wells are located in OH, MI, PA, NY, and WV. Repurposed wells have a median age of 74 years, and the 2694 repurposed wells constructed prior to 1979 are particularly likely to exhibit design-related deficiencies. An estimated 210 active repurposed wells were constructed before 1917—before cement zonal isolation methods were utilized. These wells are located in OH, PA, NY, and WV and represent the highest priority related to potential design deficiencies that could lead to containment loss. This national baseline assessment identifies regulatory data uncertainties, highlights a potentially widespread vulnerability of the natural gas supply chain, and can aid in prioritization and oversight for high-risk wells and facilities.

  16. A Guide to Making Stochastic and Single Point Predictions using the Cold Exposure Survival Model (CESM)

    DTIC Science & Technology

    2008-01-01

    required soldiers to traverse knee- to neck-deep 14°C water. Recently, the proliferation of wilderness activities such as mountain climbing, backcountry...Red Cross Cold Water Survival Curves (Figure 2). While useful as a "rule of thumb" estimate of hypothermia survival, models such as Molnar's [8] are...low body fat (e.g. body builders) are an exception to this rule. The advantage of having a little more mass can be demonstrated by CESM by

  17. On the Impact of Multi-GNSS Observations on Real-Time Precise Point Positioning Zenith Total Delay Estimates

    NASA Astrophysics Data System (ADS)

    Ding, Wenwu; Teferle, Norman; Kaźmierski, Kamil; Laurichesse, Denis; Yuan, Yunbin

    2017-04-01

    Observations from multiple Global Navigation Satellite Systems (GNSS) can improve the performance of real-time (RT) GNSS meteorology, in particular of the Zenith Total Delay (ZTD) estimates. RT ZTD estimates, in combination with derived precipitable water vapour estimates, can be used for weather now-casting and the tracking of severe weather events. While a number of published studies have already highlighted this positive development, in this study we describe an operational RT system for extracting ZTD using a modified version of the PPP-wizard (with PPP denoting Precise Point Positioning). Multi-GNSS observation streams, including GPS, GLONASS and Galileo, are processed using a RT PPP strategy based on RT satellite orbit and clock products from the Centre National d'Etudes Spatiales (CNES). A continuous 30-day experiment was conducted, in which the RT observation streams of 20 globally distributed stations were processed. The initialization time and accuracy of the RT troposphere products using single- and/or multi-system observations were evaluated, as was the effect of RT PPP ambiguity resolution. The results revealed that RT troposphere products based on single-system observations can fulfill the requirements of meteorological applications in now-casting systems. We noted that the GPS-only solution is better than the GLONASS-only solution in both initialization and accuracy. While the ZTD performance can be improved by applying RT PPP ambiguity resolution, the inclusion of observations from multiple GNSS has a more profound effect. Specifically, ambiguity resolution is more effective in improving the accuracy, whereas the initialization process is better accelerated by multi-GNSS observations. Combining all systems, RT troposphere products with an average accuracy of about 8 mm in ZTD were achieved after an initialization period of approximately 9 minutes, which supports the application of multi-GNSS observations and ambiguity resolution for RT meteorological applications.

  18. Joint estimation of vertical total electron content (VTEC) and satellite differential code biases (SDCBs) using low-cost receivers

    NASA Astrophysics Data System (ADS)

    Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Hongxing; Li, Min

    2018-04-01

    Vertical total electron content (VTEC) parameters estimated using global navigation satellite system (GNSS) data are of great interest for ionosphere sensing. Satellite differential code biases (SDCBs) account for one source of error which, if left uncorrected, can deteriorate the performance of positioning, timing and other applications. The customary approach to estimate VTEC along with SDCBs from dual-frequency GNSS data, hereinafter referred to as the DF approach, consists of two sequential steps. The first step seeks to retrieve ionospheric observables through the carrier-to-code leveling technique. These observables, related to the slant total electron content (STEC) along the satellite-receiver line of sight, are biased also by the SDCBs and the receiver differential code biases (RDCBs). By means of a thin-layer ionospheric model, in the second step one is able to isolate the VTEC, the SDCBs and the RDCBs from the ionospheric observables. In this work, we present a single-frequency (SF) approach enabling the joint estimation of VTEC and SDCBs using low-cost receivers; this approach is also based on two steps and differs from the DF approach only in the first step, where we turn to the precise point positioning technique to retrieve from the single-frequency GNSS data the ionospheric observables, interpreted as the combination of the STEC, the SDCBs and the biased receiver clocks at the pivot epoch. Our numerical analyses clarify how the SF approach performs when applied to GPS L1 data collected by a single receiver under both calm and disturbed ionospheric conditions. The daily time series of zenith VTEC estimates has an accuracy ranging from a few tenths of a TEC unit (TECU) to approximately 2 TECU. For 73-96% of GPS satellites in view, the daily estimates of SDCBs do not deviate, in absolute value, more than 1 ns from their ground truth values published by the Centre for Orbit Determination in Europe.
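
    For orientation, the DF approach's first step, carrier-to-code leveling, amounts to shifting the precise but ambiguous geometry-free carrier onto the noisy geometry-free code, arc by arc. A minimal sketch for GPS L1/L2, ignoring cycle slips and outlier screening:

      import numpy as np

      F1, F2 = 1575.42e6, 1227.60e6           # GPS L1/L2 carrier frequencies, Hz
      K = 40.3 * (1.0 / F2**2 - 1.0 / F1**2)  # metres per (el/m^2), ~0.105 m/TECU

      def leveled_stec(P1, P2, L1m, L2m):
          """Carrier-to-code levelling over one continuous arc: shift the
          geometry-free carrier (L1 - L2, in metres) onto the geometry-free
          code (P2 - P1) by the arc-mean offset. Returns slant TEC in TECU,
          still biased by the satellite and receiver DCBs, which the second
          step separates out."""
          gf_code, gf_phase = P2 - P1, L1m - L2m
          level = np.mean(gf_code - gf_phase)   # removes the carrier ambiguity
          return (gf_phase + level) / K / 1e16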

  19. Multi-transmitter multi-receiver null coupled systems for inductive detection and characterization of metallic objects

    NASA Astrophysics Data System (ADS)

    Smith, J. Torquil; Morrison, H. Frank; Doolittle, Lawrence R.; Tseng, Hung-Wen

    2007-03-01

    Equivalent dipole polarizabilities are a succinct way to summarize the inductive response of an isolated conductive body at distances greater than the scale of the body. Their estimation requires measurement of secondary magnetic fields due to currents induced in the body by time varying magnetic fields in at least three linearly independent (e.g., orthogonal) directions. Secondary fields due to an object are typically orders of magnitude smaller than the primary inducing fields near the primary field sources (transmitters). Receiver coils may be oriented orthogonal to primary fields from one or two transmitters, nulling their response to those fields, but simultaneously nulling to fields of additional transmitters is problematic. If transmitter coils are constructed symmetrically with respect to inversion in a point, their magnetic fields are symmetric with respect to that point. If receiver coils are operated in pairs symmetric with respect to inversion in the same point, then their differenced output is insensitive to the primary fields of any symmetrically constructed transmitters, allowing nulling to three (or more) transmitters. With a sufficient number of receivers pairs, object equivalent dipole polarizabilities can be estimated in situ from measurements at a single instrument sitting, eliminating effects of inaccurate instrument location on polarizability estimates. The method is illustrated with data from a multi-transmitter multi-receiver system with primary field nulling through differenced receiver pairs, interpreted in terms of principal equivalent dipole polarizabilities as a function of time.

  20. Molar heat capacity at constant volume of 1,1-difluoroethane (R152a) and 1,1,1-trifluoroethane (R143a) from the triple-point temperature to 345 K at pressures to 35 MPa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magee, J.W.

    1998-09-01

    Molar heat capacities at constant volume (Cv) of 1,1-difluoroethane (R152a) and 1,1,1-trifluoroethane (R143a) have been measured with an adiabatic calorimeter. Temperatures ranged from their triple points to 345 K, and pressures up to 35 MPa. Measurements were conducted on the liquid in equilibrium with its vapor and on compressed liquid samples. The samples were of high purity, verified by chemical analysis of each fluid. For the samples, calorimetric results were obtained for two-phase (Cv(2)), saturated-liquid (Cσ or Cx'), and single-phase (Cv) molar heat capacities. The Cσ data were used to estimate vapor pressures for values less than 105 kPa by applying a thermodynamic relationship between the saturated-liquid heat capacity and the temperature derivatives of the vapor pressure. The triple-point temperature and the enthalpy of fusion were also measured for each substance. The principal sources of uncertainty are the temperature-rise measurement and the change-of-volume work adjustment. The expanded relative uncertainty (with a coverage factor k = 2 and thus a two-standard-deviation estimate) for Cv is estimated to be 0.7%, for Cv(2) it is 0.5%, and for Cσ it is 0.7%.

  1. Autoregressive linear least square single scanning electron microscope image signal-to-noise ratio estimation.

    PubMed

    Sim, Kok Swee; NorHisham, Syafiq

    2016-11-01

    A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique on SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images are formed to serve as the reference point for estimating the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques: nearest neighborhood, first-order interpolation, and the combination of nearest neighborhood and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique attains the highest accuracy of the four techniques, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
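
    The idea shared by these ACF-based techniques is that white noise contributes only to the zero-lag ACF value, so the noise-free peak can be recovered by extrapolating from nearby lags; the LSR variant does this with a least-squares line. A simplified sketch, with the choice of lags being an assumption:

      import numpy as np

      def estimate_snr(image, lags=(1, 2, 3, 4)):
          """SNR from the image autocorrelation: fit a least-squares line
          through the ACF at nearby lags and extrapolate back to lag 0 to
          estimate the noise-free signal power."""
          x = image.astype(float) - image.mean()
          n = x.shape[1]
          acf = np.array([np.mean(x[:, :n - k] * x[:, k:])
                          for k in range(max(lags) + 1)])
          slope, intercept = np.polyfit(lags, acf[list(lags)], 1)
          signal_power = intercept              # extrapolated noise-free ACF(0)
          noise_power = acf[0] - signal_power   # the white-noise spike at lag 0
          return signal_power / noise_power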

  2. Primary radiation damage of an FeCr alloy under pressure: Atomistic simulation

    NASA Astrophysics Data System (ADS)

    Tikhonchev, M. Yu.; Svetukhin, V. V.

    2017-05-01

    The primary radiation damage of a binary FeCr alloy deformed by applied mechanical loading is studied by an atomistic molecular dynamics simulation. Loading is simulated by specifying an applied pressure of 0.25, 1.0, and 2.5 GPa of both signs. Hydrostatic and uniaxial loading is considered along the [001], [111], [112], and [210] directions. The influence of loading on the energy of point defect formation and the threshold atomic displacement energy in single-component bcc iron is investigated. The 10-keV atomic displacement cascades in a "random" binary Fe-9 at % Cr alloy are simulated at an initial temperature of 300 K. The number of the point defects generated in a cascade is estimated, and the clustering of point defects and the spatial orientation of interstitial configurations are analyzed. Our results agree with the results of other researchers and supplement them.

  3. Winter wheat yield estimation of remote sensing research based on WOFOST crop model and leaf area index assimilation

    NASA Astrophysics Data System (ADS)

    Chen, Yanling; Gong, Adu; Li, Jing; Wang, Jingmei

    2017-04-01

    Accurate crop growth monitoring and yield prediction are important for the sustainable development of agriculture and for national food security. Remote sensing observation and crop growth simulation models are two technologies with high application potential for crop growth monitoring and yield forecasting. However, both have limitations, in mechanism or in regional application. Remote sensing information cannot reveal crop growth and development, the inner mechanism of yield formation, or the effects of environmental and meteorological conditions. Crop growth simulation models face difficulties in data acquisition and parameterization when scaled from single-point to regional application. To exploit the advantages of both technologies, the coupling of remote sensing information with crop growth simulation models has been studied. Filtering and optimizing model parameters are key to yield estimation by remote sensing and crop model based on regional crop assimilation. Winter wheat in GaoCheng was selected as the experimental object, and the essential data were collected, including biochemical data, farmland environmental data, and meteorological data for several critical growing periods. Imagery from the environmental mitigation small satellite HJ-CCD was also obtained. The research work and major conclusions are as follows. (1) Seven vegetation indices were selected to retrieve LAI, and a linear regression model was built between each index and the measured LAI. The EVI model had the highest accuracy (R2 = 0.964 at the anthesis stage and R2 = 0.920 at the filling stage), so EVI was chosen as the optimal vegetation index for predicting LAI. (2) The EFAST method was adopted for sensitivity analysis of the 26 initial parameters of the WOFOST model, and a sensitivity index was constructed to evaluate the influence of each parameter on winter wheat yield formation. Six parameters with sensitivity indices greater than 0.1 were chosen as sensitive factors: TSUM1, SLATB1, SLATB2, SPAN, EFFTB3, and TMPF4. The other parameters were fixed via field measurement and calculation, available literature, or WOFOST defaults, completing the calibration of the WOFOST parameters. (3) A look-up table algorithm was used to realize single-point yield estimation through assimilation of the WOFOST model with the retrieved LAI (see the sketch below). The simulation achieved a high accuracy that met the purpose of assimilation (R2 = 0.941 and RMSE = 194.58 kg/hm2). In this work, the optimum values of the sensitive parameters were determined and single-point yield estimation was completed. Key words: yield estimation of winter wheat, LAI, WOFOST crop growth model, assimilation
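
    As a concrete illustration of the look-up-table step, the sketch below searches a grid over the six sensitive parameters and scores each candidate against the EVI-retrieved LAI with a least-squares cost. The `run_wofost` callable and the grid contents are assumed placeholders for an actual WOFOST wrapper, not part of the original study's code.

    ```python
    import numpy as np
    from itertools import product

    def lut_assimilate(run_wofost, param_grid, lai_obs):
        """Exhaustive look-up-table assimilation sketch.

        run_wofost: assumed wrapper returning simulated LAI at the dates of
        the LAI observations; param_grid: {name: candidate values} for the
        sensitive parameters; lai_obs: EVI-retrieved LAI values.
        """
        best_params, best_cost = None, np.inf
        names = list(param_grid)
        for values in product(*param_grid.values()):
            params = dict(zip(names, values))
            lai_sim = np.asarray(run_wofost(**params))
            cost = float(np.sum((lai_sim - lai_obs) ** 2))
            if cost < best_cost:
                best_params, best_cost = params, cost
        return best_params, best_cost
    ```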

  4. An ATP System for Deep-Space Optical Communication

    NASA Technical Reports Server (NTRS)

    Lee, Shinhak; Ortiz, Gerardo; Alexander, James

    2008-01-01

    An acquisition, tracking, and pointing (ATP) system is proposed for aiming an optical-communications downlink laser beam from deep space. To provide a direction reference, the concept exploits the mature technology of star trackers, eliminating the need for a costly and potentially hazardous laser beacon. The system would include one optical and two inertial sensors, each contributing primarily to a different portion of the frequency spectrum of the pointing signal: a star tracker (<10 Hz), a gyroscope (<50 Hz), and a precise fluid-rotor inertial angular-displacement sensor (sometimes called, simply, an "angle sensor") for the frequency range >50 Hz. The outputs of these sensors would be combined in an iterative averaging process to obtain high-bandwidth, high-accuracy pointing knowledge. The accuracy of the pointing knowledge obtainable with the system was estimated on the basis of an 8-cm-diameter telescope and known parameters of commercially available star trackers and inertial sensors: the single-axis pointing-knowledge error was found to have a standard deviation of 150 nanoradians, below the maximum value (between 200 and 300 nanoradians) likely to be tolerable in deep-space optical communications.
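
    One simple way to realize this kind of frequency-domain blending is a complementary filter. The sketch below is an assumption-laden stand-in for the iterative averaging described above, with the 10 Hz and 50 Hz crossovers taken from the sensor bands quoted in the text.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def fuse_pointing(star, gyro, angle, fs):
        """Blend three single-axis pointing estimates by complementary filtering.

        star, gyro, angle: equally sampled angle estimates [rad] from the star
        tracker, gyroscope, and angular-displacement sensor; fs: sample rate [Hz].
        Crossover frequencies (10 Hz, 50 Hz) follow the bands quoted in the text.
        """
        nyq = fs / 2.0
        b1, a1 = butter(2, 10 / nyq, "lowpass")               # star tracker: <10 Hz
        b2, a2 = butter(2, [10 / nyq, 50 / nyq], "bandpass")  # gyroscope: 10-50 Hz
        b3, a3 = butter(2, 50 / nyq, "highpass")              # angle sensor: >50 Hz
        return (filtfilt(b1, a1, star) +
                filtfilt(b2, a2, gyro) +
                filtfilt(b3, a3, angle))
    ```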

  5. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics, and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
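
    For orientation, a minimal discrete-time version of the nonlinear Hawkes / PP-GLM conditional intensity can be written in a few lines. The exponential link and the dense kernel array below are illustrative assumptions, not the review's specific parameterization.

    ```python
    import numpy as np

    def hawkes_glm_intensity(spikes, baseline, kernels, link=np.exp):
        """Conditional intensity of a discrete-time multivariate nonlinear
        Hawkes process: lambda_i[t] = link(b_i + sum_j sum_k h[i,j,k]*s[t-1-k, j]).

        spikes: (T, N) 0/1 spike matrix; baseline: (N,) bias terms;
        kernels: (N, N, K) coupling filters over K history bins.
        """
        T, N = spikes.shape
        K = kernels.shape[2]
        lam = np.zeros((T, N))
        for t in range(T):
            drive = baseline.astype(float).copy()
            for k in range(min(K, t)):
                drive += kernels[:, :, k] @ spikes[t - 1 - k]
            lam[t] = link(drive)
        return lam  # P(spike of neuron i in bin t) ~ 1 - exp(-lam[t, i]*dt)
    ```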

  6. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics, and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.

  7. Simulation and analysis of chemical release in the ionosphere

    NASA Astrophysics Data System (ADS)

    Gao, Jing-Fan; Guo, Li-Xin; Xu, Zheng-Wen; Zhao, Hai-Sheng; Feng, Jie

    2018-05-01

    Ionospheric inhomogeneous plasma produced by a single-point chemical release has a simple space-time structure and cannot affect radio waves at frequencies above the Very High Frequency (VHF) band. To produce a more complicated ionospheric plasma perturbation structure and trigger instability phenomena, a multiple-point chemical release scheme is presented in this paper. The effects of the chemical release on the low-latitude ionospheric plasma are estimated using linear instability growth-rate theory, in which a high growth rate corresponds to strong irregularities, a high probability of ionospheric scintillation occurrence, and high scintillation intensity over the scintillation duration. The amplitude and phase scintillations at 150 MHz, 400 MHz, and 1000 MHz propagating through the disturbed area are calculated based on multiple-phase-screen (MPS) theory.

  8. Estimating systemic exposure to levonorgestrel from an oral contraceptive.

    PubMed

    Basaraba, Cale N; Westhoff, Carolyn L; Pike, Malcolm C; Nandakumar, Renu; Cremers, Serge

    2017-04-01

    The gold standard for measuring oral contraceptive (OC) pharmacokinetics is the 24-h steady-state area under the curve (AUC). We conducted this study to assess whether limited sampling at steady state or measurements following use of one or two OCs could provide an adequate proxy in epidemiological studies for the progestin 24-h steady-state AUC of a particular OC. We conducted a 13-sample, 24-h pharmacokinetic study on both day 1 and day 21 of the first cycle of a monophasic OC containing 30-mcg ethinyl estradiol and 150-mcg levonorgestrel (LNG) in 17 normal-weight healthy White women and a single-dose 9-sample study of the same OC after a 1-month washout. We compared the 13-sample steady-state results with several steady-state and single-dose results calculated using parsimonious sampling schemes. The 13-sample steady-state 24-h LNG AUC was highly correlated with the steady-state 24-h trough value [r=0.95; 95% confidence interval (0.85, 0.98)] and with the steady-state 6-, 8-, 12- and 16-h values (0.92≤r≤0.95). The trough values after one or two doses were moderately correlated with the steady-state 24-h AUC value [r=0.70; 95% CI (0.27, 0.90) and 0.77; 95% CI (0.40, 0.92), respectively]. Single time-point concentrations at steady state and after administration of one or two OCs gave highly to moderately correlated estimates of steady-state LNG AUC. Using such measures could facilitate prospective pharmaco-epidemiologic studies of the OC and its side effects. A single time-point LNG concentration at steady state is an excellent proxy for complete and resource-intensive steady-state AUC measurement. The trough level after two single doses is a fair proxy for steady-state AUC. These results provide practical tools to facilitate large studies to investigate the relationship between systemic LNG exposure and side effects in a real-life setting. Copyright © 2017 Elsevier Inc. All rights reserved.
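
    As a reference for the AUC endpoint discussed above, a generic linear-trapezoidal AUC(0-24) computation looks like the sketch below; this is a textbook noncompartmental step, not the study's exact software.

    ```python
    import numpy as np

    def auc_0_24(times_h, conc):
        """Linear-trapezoidal AUC(0-24) from sampled drug concentrations.

        times_h: sampling times in hours (e.g. the 13 points of the rich design);
        conc: measured concentrations at those times. Result has units of
        concentration x hours.
        """
        t = np.asarray(times_h, float)
        c = np.asarray(conc, float)
        return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))
    ```

    Correlating a single trough sample against this full AUC across subjects is exactly the kind of comparison that produced the r values reported above.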

  9. Advantage of population pharmacokinetic method for evaluating the bioequivalence and accuracy of parameter estimation of pidotimod.

    PubMed

    Huang, Jihan; Li, Mengying; Lv, Yinghua; Yang, Juan; Xu, Ling; Wang, Jingjing; Chen, Junchao; Wang, Kun; He, Yingchun; Zheng, Qingshan

    2016-09-01

    This study was aimed at exploring the accuracy of the population pharmacokinetic method in evaluating the bioequivalence of pidotimod with sparse data profiles, and at determining whether this method is suitable for bioequivalence evaluation in special populations, such as children, with fewer samplings. In this single-dose, two-period crossover study, 20 healthy male Chinese volunteers were randomized 1:1 to receive either the test or the reference formulation, with a 1-week washout before receiving the alternative formulation. Noncompartmental and population compartmental pharmacokinetic analyses were conducted. Simulated data were analyzed to graphically evaluate the model and the pharmacokinetic characteristics of the two pidotimod formulations. Various sparse sampling scenarios were generated from the real bioequivalence clinical trial data and evaluated by the population pharmacokinetic method. The 90% confidence intervals (CIs) for AUC0-12h, AUC0-∞, and Cmax were 97.3-118.7%, 96.9-118.7%, and 95.1-109.8%, respectively, all within the 80-125% range for bioequivalence using noncompartmental analysis. The population compartmental pharmacokinetics of pidotimod were described by a one-compartment model with first-order absorption and lag time. In comparisons across datasets, random three-point and fixed four-point sampling strategies provided results similar to those obtained through rich sampling. The nonlinear mixed-effects model requires fewer data points, and compared with the noncompartmental analysis method, it estimates the pharmacokinetic parameters more accurately. The population pharmacokinetic modeling method was used to assess the bioequivalence of the two pidotimod formulations with relatively few sampling points and further validated their bioequivalence. This method may provide useful information for regulating bioequivalence evaluation in special populations.
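
    For the 80-125% acceptance range quoted above, a simplified sketch of the 90% CI computation on log-transformed, paired AUC ratios is shown below. A full crossover analysis would additionally model period and sequence effects, so treat this as an approximation.

    ```python
    import numpy as np
    from scipy import stats

    def be_90ci_percent(auc_test, auc_ref):
        """90% CI for the test/reference geometric-mean ratio, in percent.

        auc_test, auc_ref: per-subject AUCs for the two formulations. The
        standard criterion accepts bioequivalence when the CI lies within
        80-125%. Period and sequence effects of the crossover are ignored.
        """
        logr = np.log(auc_test) - np.log(auc_ref)   # paired log-ratios
        lo, hi = stats.t.interval(0.90, len(logr) - 1,
                                  loc=logr.mean(), scale=stats.sem(logr))
        return 100.0 * np.exp(lo), 100.0 * np.exp(hi)
    ```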

  10. Quantitative optical imaging and sensing by joint design of point spread functions and estimation algorithms

    NASA Astrophysics Data System (ADS)

    Quirin, Sean Albert

    The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and an optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single molecules and microtubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive ranging, extended depth-of-field imaging, and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs, a potentially significant constraint. Therefore an alternative design is proposed, the Single-Helix PSF, where only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is also proposed for improving the sensitivity of optical wave-front sensing using a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated, and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated and the mechanism for additional information transfer is identified.
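
    The depth read-out behind the DH-PSF is geometrically simple: the two PSF lobes rotate with defocus, so the lobe-pair orientation maps to axial position. The sketch below assumes a linear angle-to-z calibration and pre-localized lobe centroids; both are idealizations of the full estimation pipeline.

    ```python
    import numpy as np

    def z_from_lobes(p1, p2, slope_deg_per_um, angle_ref_deg):
        """Axial position from a double-helix PSF lobe pair.

        p1, p2: (x, y) centroids of the two lobes for one emitter.
        slope_deg_per_um, angle_ref_deg: assumed linear calibration, e.g.
        measured by stepping a bead through focus. Returns z in micrometers.
        """
        angle = np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0]))
        return (angle - angle_ref_deg) / slope_deg_per_um
    ```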

  11. Stochastic modeling of neurobiological time series: Power, coherence, Granger causality, and separation of evoked responses from ongoing activity

    NASA Astrophysics Data System (ADS)

    Chen, Yonghong; Bressler, Steven L.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Mingzhou

    2006-06-01

    In this article we consider the stochastic modeling of neurobiological time series from cognitive experiments. Our starting point is the variable-signal-plus-ongoing-activity model. From this model a differentially variable component analysis strategy is developed from a Bayesian perspective to estimate event-related signals on a single trial basis. After subtracting out the event-related signal from recorded single trial time series, the residual ongoing activity is treated as a piecewise stationary stochastic process and analyzed by an adaptive multivariate autoregressive modeling strategy which yields power, coherence, and Granger causality spectra. Results from applying these methods to local field potential recordings from monkeys performing cognitive tasks are presented.
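
    A bare-bones version of the MVAR fitting step reads as follows; power, coherence, and Granger causality spectra then follow from the coefficient stack A and the residual covariance. The least-squares estimator and normalization here are generic choices, not necessarily the adaptive scheme used in the article.

    ```python
    import numpy as np

    def fit_var(X, p):
        """Least-squares fit of a VAR(p) model X[t] = sum_k X[t-k] A_k + e[t].

        X: (T, N) multichannel time series (e.g. residual ongoing activity).
        Returns A with shape (p, N, N) and the residual covariance Sigma;
        the spectral matrix S(f) = H(f) Sigma H(f)^* then gives power and
        coherence, and Granger causality follows from factorizations of S(f).
        """
        T, N = X.shape
        Y = X[p:]                                                 # targets
        Z = np.hstack([X[p - k:T - k] for k in range(1, p + 1)])  # lagged design
        A_flat, *_ = np.linalg.lstsq(Z, Y, rcond=None)            # (N*p, N)
        E = Y - Z @ A_flat
        Sigma = E.T @ E / (T - p)
        return A_flat.reshape(p, N, N), Sigma
    ```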

  12. HIV populations are large and accumulate high genetic diversity in a nonlinear fashion.

    PubMed

    Maldarelli, Frank; Kearney, Mary; Palmer, Sarah; Stephens, Robert; Mican, JoAnn; Polis, Michael A; Davey, Richard T; Kovacs, Joseph; Shao, Wei; Rock-Kress, Diane; Metcalf, Julia A; Rehm, Catherine; Greer, Sarah E; Lucey, Daniel L; Danley, Kristen; Alter, Harvey; Mellors, John W; Coffin, John M

    2013-09-01

    HIV infection is characterized by rapid and error-prone viral replication resulting in genetically diverse virus populations. The rate of accumulation of diversity and the mechanisms involved are under intense study to provide useful information to understand immune evasion and the development of drug resistance. To characterize the development of viral diversity after infection, we carried out an in-depth analysis of single genome sequences of HIV pro-pol to assess diversity and divergence and to estimate replicating population sizes in a group of treatment-naive HIV-infected individuals sampled at single (n = 22) or multiple, longitudinal (n = 11) time points. Analysis of single genome sequences revealed nonlinear accumulation of sequence diversity during the course of infection. Diversity accumulated in recently infected individuals at rates 30-fold higher than in patients with chronic infection. Accumulation of synonymous changes accounted for most of the diversity during chronic infection. Accumulation of diversity resulted in population shifts, but the rates of change were low relative to estimated replication cycle times, consistent with relatively large population sizes. Analysis of changes in allele frequencies revealed effective population sizes that are substantially higher than previous estimates of approximately 1,000 infectious particles/infected individual. Taken together, these observations indicate that HIV populations are large, diverse, and slow to change in chronic infection and that the emergence of new mutations, including drug resistance mutations, is governed by both selection forces and drift.

  13. Graphs to estimate an individualized risk of breast cancer.

    PubMed

    Benichou, J; Gail, M H; Mulvihill, J J

    1996-01-01

    Clinicians who counsel women about their risk for developing breast cancer need a rapid method to estimate individualized risk (absolute risk), as well as the confidence limits around that point estimate. The Breast Cancer Detection Demonstration Project (BCDDP) model (sometimes called the Gail model) assumes no genetic model and simultaneously incorporates five risk factors, but involves cumbersome calculations and interpolations. This report provides graphs to estimate the absolute risk of breast cancer from the BCDDP model. The BCDDP recruited 280,000 women from 1973 to 1980 who were monitored for 5 years. From this cohort, 2,852 white women developed breast cancer and 3,146 controls were selected, all with complete risk-factor information. The BCDDP model, previously developed from these data, was used to prepare graphs that relate a specific summary relative-risk estimate to the absolute risk of developing breast cancer over intervals of 10, 20, and 30 years. Once a summary relative risk is calculated, the appropriate graph is chosen that shows the 10-, 20-, or 30-year absolute risk of developing breast cancer. A separate graph gives the 95% confidence limits around the point estimate of absolute risk. Once a clinician rules out a single-gene trait that predisposes to breast cancer and elicits information on age and four risk factors, the tables and figures permit an estimation of a woman's absolute risk of developing breast cancer in the next three decades. These results are intended to be applied to women who undergo regular screening. They should be used only in a formal counseling program to maximize a woman's understanding of the estimates and their proper use.
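
    The relative-to-absolute conversion the graphs encode can be caricatured with a toy survival identity. The sketch below ignores competing mortality, which the BCDDP model and graphs do account for, so it is illustrative only; the baseline cumulative hazard is an assumed input.

    ```python
    import numpy as np

    def absolute_risk(relative_risk, baseline_cum_hazard):
        """Toy conversion of a summary relative risk to an interval absolute
        risk: risk = 1 - exp(-RR * H0), where H0 is the baseline cumulative
        hazard of breast cancer over the 10-, 20-, or 30-year interval.
        Competing risks, which the BCDDP graphs incorporate, are ignored.
        """
        return 1.0 - np.exp(-relative_risk * baseline_cum_hazard)
    ```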

  14. Comparing Single Case Design Overlap-Based Effect Size Metrics From Studies Examining Speech Generating Device Interventions

    PubMed Central

    Chen, Mo; Hyppa-Martin, Jolene K.; Reichle, Joe E.; Symons, Frank J.

    2017-01-01

    Meaningfully synthesizing single case experimental data from intervention studies comprising individuals with low-incidence conditions and generating effect size estimates remains challenging. Seven effect size metrics were compared for single case design (SCD) data focused on teaching speech generating device use to individuals with intellectual and developmental disabilities (IDD) with moderate to profound levels of impairment. The effect size metrics included percent of data points exceeding the median (PEM), percent of nonoverlapping data (PND), improvement rate difference (IRD), percent of all nonoverlapping data (PAND), Phi, nonoverlap of all pairs (NAP), and Taunovlap. Results showed that among the seven effect size metrics, PAND, Phi, IRD, and PND were more effective in quantifying intervention effects for the data sample (N = 285 phase or condition contrasts). Results are discussed with respect to issues concerning extracting and calculating effect sizes, visual analysis, and SCD intervention research in IDD. PMID:27119210
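
    Of the metrics listed, PND is the simplest to state precisely; a plain implementation is sketched below. The direction of expected change is a parameter, and the handling of ties follows a common convention that varies somewhat across authors.

    ```python
    def pnd(baseline, treatment, increase_expected=True):
        """Percent of Nonoverlapping Data (PND) for one phase contrast.

        Share of treatment-phase points exceeding the most extreme baseline
        point (or falling below it, when a decrease is the therapeutic goal).
        """
        ref = max(baseline) if increase_expected else min(baseline)
        if increase_expected:
            hits = sum(1 for x in treatment if x > ref)
        else:
            hits = sum(1 for x in treatment if x < ref)
        return 100.0 * hits / len(treatment)
    ```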

  15. Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes

    NASA Astrophysics Data System (ADS)

    Teppati Losè, L.; Chiabrando, F.; Spanò, A.

    2018-05-01

    The research presented in this paper focuses on a preliminary evaluation of a 360 multi-camera rig: the possibilities of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated, and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for estimating the interior orientation parameters of the cameras, from both an operative and a theoretical point of view. The consistency of the six cameras that compose the 360 system was analysed in depth, adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and created, and several topographic measurements were performed in order to have a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported in that section).

  16. Predict Brain MR Image Registration via Sparse Learning of Appearance and Transformation

    PubMed Central

    Wang, Qian; Kim, Minjeong; Shi, Yonghong; Wu, Guorong; Shen, Dinggang

    2014-01-01

    We propose a new approach to register the subject image with the template by leveraging a set of intermediate images that are pre-aligned to the template. We argue that, if points in the subject and the intermediate images share similar local appearances, they may have common correspondence in the template. In this way, we learn the sparse representation of a certain subject point to reveal several similar candidate points in the intermediate images. Each selected intermediate candidate can bridge the correspondence from the subject point to the template space, thus predicting the transformation associated with the subject point at a confidence level that relates to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points and retain multiple predictions on each key point, instead of allowing only a single correspondence. Then, by utilizing all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. We further embed this prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we refine the estimated transformation field via an existing registration method. We apply our method to registering brain MR images and conclude that the proposed framework substantially improves registration performance. PMID:25476412
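
    The sparse-representation step can be prototyped with any l1 solver; below is a small ISTA sketch in which a subject patch is coded against a dictionary whose columns are intermediate-image patches. The penalty weight and iteration count are arbitrary assumptions.

    ```python
    import numpy as np

    def sparse_code(patch, D, lam=0.1, iters=200):
        """ISTA for min_w 0.5*||D w - patch||^2 + lam*||w||_1.

        D: (pixels, candidates) dictionary of intermediate-image patches.
        The nonzero entries of w select candidate correspondences and supply
        the confidence weights described in the text.
        """
        D = D / np.linalg.norm(D, axis=0)          # unit-norm columns
        L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of gradient
        w = np.zeros(D.shape[1])
        for _ in range(iters):
            w = w - (D.T @ (D @ w - patch)) / L    # gradient step
            w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
        return w
    ```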

  17. Improving the Patron Experience: Sterling Memorial Library's Single Service Point

    ERIC Educational Resources Information Center

    Sider, Laura Galas

    2016-01-01

    This article describes the planning process and implementation of a single service point at Yale University's Sterling Memorial Library. While much recent scholarship on single service points (SSPs) has focused on the virtues or hazards of eliminating reference desks in libraries nationwide, this essay explores the ways in which single service…

  18. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
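
    The quantity at the heart of this approach, the interpolation uncertainty at resampled points, is the GP posterior variance; a generic squared-exponential sketch for 1D points follows. The kernel choice and hyperparameters are placeholder assumptions.

    ```python
    import numpy as np

    def gp_posterior_var(x_train, x_query, ell=1.0, sf2=1.0, noise=1e-3):
        """Posterior variance of a squared-exponential GP at query points.

        Larger variance flags resampled locations far from the base grid,
        which is the interpolation uncertainty folded into the similarity
        measure described above.
        """
        def k(a, b):
            return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
        K = k(x_train, x_train) + noise * np.eye(len(x_train))
        Ks = k(x_query, x_train)                   # (q, n) cross-covariance
        alpha = np.linalg.solve(K, Ks.T)           # (n, q)
        return sf2 - np.sum(Ks * alpha.T, axis=1)  # diagonal of posterior cov.
    ```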

  19. Intercomparison of methods for the estimation of displacement height and roughness length from single-level eddy covariance data

    NASA Astrophysics Data System (ADS)

    Graf, Alexander; van de Boer, Anneke; Schüttemeyer, Dirk; Moene, Arnold; Vereecken, Harry

    2013-04-01

    The displacement height d and roughness length z0 are parameters of the logarithmic wind profile and as such are characteristics of the surface that are required in a multitude of meteorological modeling applications. Classically, both parameters are estimated from multi-level measurements of wind speed over terrain sufficiently homogeneous to avoid footprint-induced differences between the levels. As a rule of thumb, d of a dense, uniform crop or forest canopy is 2/3 to 3/4 of the canopy height h, and z0 is about 10% of the canopy height in the absence of any d. However, the uncertainty of this rule of thumb becomes larger if the surface of interest is not "dense and uniform", in which case a site-specific determination is required again. By means of the eddy covariance method, alternative possibilities to determine z0 and d have become available. Various authors report robust results if either several levels of sonic anemometer measurements, or one such level combined with a classic wind profile, are used to introduce direct knowledge of the friction velocity into the estimation procedure. At the same time, however, the eddy covariance method for measuring various fluxes has superseded the profile method, leaving many current stations without a wind speed profile with enough levels sufficiently far above the canopy to enable the classic estimation of z0 and d. From single-level eddy covariance measurements at one point in time, only one parameter can be estimated, usually z0 while d is assumed to be known. Even so, results tend to scatter considerably. However, it has been pointed out that the use of multiple points in time providing different stability conditions can enable the estimation of both parameters, if they are assumed constant over the time period regarded. These methods rely either on flux-variance similarity (Weaver 1990 and others following) or on the integrated universal function for momentum (Martano 2000 and others following). In both cases, iterations over the range of possible d values are necessary. We extended this set of methods by a non-iterative, regression-based approach (sketched below). Only a stability range of data is used in which the universal function is known to be approximately linear. Then, various types of multiple linear regression can be used to relate the terms of the logarithmic wind profile equation to each other and to derive z0 and d from the regression parameters. Two examples each of the two existing iterative approaches and the new non-iterative one are compared to each other and to plausibility limits in three different agricultural crops. The study contains periods of growth as well as of constant crop height, also allowing for an examination of the relations between z0, d, and canopy height. Results indicate that estimated z0 values, even in the absence of prescribed d values, are fairly robust, plausible, and consistent across all methods. The largest deviations are produced by the two flux-variance-similarity-based methods. Estimates of d, in contrast, can be subject to implausible deviations with all methods, even after quality filtering of the input data. Again, the largest deviations occur with the flux-variance-similarity-based methods. Ensemble averaging over all methods can reduce this problem, offering a potentially useful way of estimating d at more complex sites where the rule of thumb cannot be applied easily. Martano P (2000): Estimation of surface roughness length and displacement height from single-level sonic anemometer data. Journal of Applied Meteorology 39:708-715. Weaver HL (1990): Temperature and humidity flux-variance relations determined by one-dimensional eddy correlation. Boundary-Layer Meteorology 53:77-91.
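
    A minimal version of the non-iterative idea, for the stable-side range where the universal function is linear (psi_m = -beta * zeta), is sketched below. The beta value and the single-regressor form are simplifying assumptions relative to the multiple-regression variants the abstract mentions.

    ```python
    import numpy as np

    KAPPA, BETA = 0.4, 5.0   # von Karman constant; assumed slope of linear psi_m

    def z0_d_regression(u, ustar, inv_L, z):
        """Estimate z0 and d from single-level eddy-covariance data.

        In the weakly stable range the profile equation becomes
            kappa*u/ustar = ln((z - d)/z0) + BETA*(z - d)*(1/L),
        which is linear in 1/L. Regressing kappa*u/ustar on 1/L gives
        slope b = BETA*(z - d) and intercept a = ln((z - d)/z0).

        u, ustar, inv_L: per-averaging-period wind speed, friction velocity,
        and inverse Obukhov length at measurement height z.
        """
        y = KAPPA * np.asarray(u) / np.asarray(ustar)
        b, a = np.polyfit(inv_L, y, 1)      # slope, intercept
        zd = b / BETA                       # z - d
        return zd * np.exp(-a), z - zd      # z0, d
    ```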

  20. [Microbiological quality of the air in "small gastronomy point"].

    PubMed

    Wójcik-Stopczyńska, Barbara

    2006-01-01

    The aim of this work was to estimate the microbial contamination of the air in a "small gastronomy point". The study included three areas, separated according to function: 1. the subsidiary area, 2. the distribution area (sale and serving of meals), 3. the consumption area. The total numbers of aerobic mesophilic bacteria, yeasts, and moulds were determined by the sedimentation method. Taxonomic units of the fungal aerosol were also estimated. Air samples were collected at 16 investigation points in the morning (8-8.30) and in the afternoon (14-14.30). Four series of measurements were carried out, and 128 air samples were tested in total. The results showed that the numbers of bacteria, yeasts, and moulds were variable, ranging over 30-3397, 0-254, and 0-138 cfu x m(-3), respectively. Microbial contamination of the air varied with the character of the area (the highest average bacterial count occurred in the air of the consumption area, and the highest fungal count in the subsidiary area), the time of day (contamination increased in the afternoon), and the measurement date. Only in single samples did the numbers of bacteria and fungi exceed the recommended level. Pigmented bacteria made up a high proportion of the total bacterial count, and the filamentous fungi were represented mostly by Penicillium sp. and Cladosporium sp.

  1. 3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study.

    PubMed

    Afsham, Narges; Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert

    2011-01-01

    A gaze-deviated examination of the eye with a 2D ultrasound transducer is a common and informative ophthalmic test; however, the complex task of estimating the pose of the ultrasound images relative to the eye complicates 3D interpretation. To tackle this challenge, a novel system for 3D image reconstruction based on gaze tracking of the contralateral eye has been proposed. The gaze fixates on several target points and, for each fixation, the pose of the examined eye is inferred from the gaze tracking. A single-camera system has been developed for pose estimation combined with subject-specific parameter identification. The ultrasound images are then transformed to the coordinate system of the examined eye to create a 3D volume. The accuracy of the proposed gaze tracking system and of the pose estimation of the eye has been validated in a set of experiments. Overall system errors, including pose estimation and calibration, are 3.12 mm and 4.68 degrees.

  2. Computing Fault Displacements from Surface Deformations

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy

    2006-01-01

    Simplex is a computer program that calculates the locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, the forward model calculates the surface deformations, displacements, and strains caused by a fault located in an isotropic, elastic half-space). The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations. This work
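
    The program's namesake, downhill-simplex (Nelder-Mead) optimization, can be illustrated by inverting a toy point-source forward model. The Mogi-type source below is a stand-in assumption, not Simplex's actual elastic half-space Green's functions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def forward(params, xy):
        """Toy vertical surface deformation from a buried point source.

        params = (x0, y0, depth, strength); a Mogi-type pattern standing in
        for the dislocation model a real fault inversion would use.
        xy: (M, 2) surface observation coordinates.
        """
        x0, y0, depth, strength = params
        r2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2
        return strength * depth / (r2 + depth ** 2) ** 1.5

    def invert(xy, uz_obs, p0):
        """Nelder-Mead (simplex) inversion of surface-deformation data."""
        cost = lambda p: np.sum((forward(p, xy) - uz_obs) ** 2)
        return minimize(cost, p0, method="Nelder-Mead").x
    ```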

  3. A rapid and robust gradient measurement technique using dynamic single-point imaging.

    PubMed

    Jang, Hyungseok; McMillan, Alan B

    2017-09-01

    We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of the k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space positions that can be used for image reconstruction. The technique can also be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time-invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  4. Using Field-Particle Correlations to Diagnose the Collisionless Damping of Plasma Turbulence

    NASA Astrophysics Data System (ADS)

    Howes, Gregory; Klein, Kristopher

    2016-10-01

    Plasma turbulence occurs ubiquitously throughout the heliosphere, yet our understanding of how turbulence governs energy transport and plasma heating remains incomplete, constituting a grand challenge problem in heliophysics. In weakly collisional heliospheric plasmas, such as the solar corona and solar wind, damping of the turbulent fluctuations occurs due to collisionless interactions between the electromagnetic fields and the individual plasma particles. A particular challenge in diagnosing this energy transfer is that spacecraft measurements are typically limited to a single point in space. Here we present an innovative field-particle correlation technique that can be used with single-point measurements to estimate the energization of the plasma particles due to the damping of the electromagnetic fields, providing vital new information about how this energy transfer is distributed as a function of particle velocity. This technique has the promise to transform our ability to diagnose the kinetic plasma physical mechanisms responsible not only for the damping of turbulence, but also for the energy conversion in collisionless magnetic reconnection and particle acceleration. The work has been supported by NSF CAREER Award AGS-1054061, NSF AGS-1331355, and DOE DE-SC0014599.
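
    In reduced 1D-1V form, the correlation is an average of -q*(v^2/2)*(df/dv)*E over a sliding window. The sketch below assumes unit charge, a gridded distribution time series, and a simple boxcar average, so it is a schematic of the published technique rather than a reproduction.

    ```python
    import numpy as np

    def field_particle_correlation(f_vt, E_t, v, tau_steps):
        """Single-point field-particle correlation C_E(v, t) in 1D-1V.

        f_vt: (Nv, Nt) particle velocity distribution measured at one point.
        E_t:  (Nt,) parallel electric field at the same point.
        Averaging -0.5*v^2*(df/dv)*E over tau_steps samples reveals the
        velocity-space signature of collisionless damping.
        """
        dfdv = np.gradient(f_vt, v, axis=0)
        integrand = -0.5 * (v[:, None] ** 2) * dfdv * E_t[None, :]
        kernel = np.ones(tau_steps) / tau_steps
        return np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), 1, integrand)
    ```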

  5. Low cost sensing of vegetation volume and structure with a Microsoft Kinect sensor

    NASA Astrophysics Data System (ADS)

    Azzari, G.; Goulden, M.

    2011-12-01

    The market for videogames and digital entertainment has decreased the cost of advanced technology to affordable levels. The Microsoft Kinect sensor for Xbox 360 is an infrared depth camera designed to track body position and movement at the level of single articulations. Using open-source drivers and libraries, we acquired point clouds of vegetation directly from the Kinect sensor. The data were filtered for outliers, co-registered, and cropped to isolate the plant of interest from the surroundings and soil. The volume of single plants was then estimated with several techniques, including fitting with solid shapes (cylinders, spheres, boxes), voxel counts, and 3D convex/concave hulls. Preliminary results are presented here. The volume of a series of wild artichoke plants was measured from nadir using a Kinect on a 3-m-tall tower. The calculated volumes were compared with harvested biomass; comparisons and derived allometric relations will be presented, along with examples of the acquired point clouds. The Kinect sensor shows promise for ground-based, automated biomass measurement systems, and possibly for comparison/validation of remotely sensed LIDAR.
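
    Of the volume estimators listed, the convex hull is a one-liner with SciPy. Note that it over-estimates the volume of concave canopies, which is one reason voxel counts and concave hulls are compared as alternatives.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    def plant_volume_convex(points):
        """Volume of a single plant's filtered, cropped point cloud.

        points: (N, 3) array of Kinect-derived coordinates. A voxel-count
        alternative: discretize the cloud to a grid and multiply the number
        of occupied voxels by the voxel volume.
        """
        return ConvexHull(np.asarray(points)).volume
    ```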

  6. Single shot, three-dimensional fluorescence microscopy with a spatially rotating point spread function

    PubMed Central

    Wang, Zhaojun; Cai, Yanan; Liang, Yansheng; Zhou, Xing; Yan, Shaohui; Dan, Dan; Bianco, Piero R.; Lei, Ming; Yao, Baoli

    2017-01-01

    A wide-field fluorescence microscope with a double-helix point spread function (PSF) is constructed to obtain a specimen's three-dimensional distribution in a single snapshot. Spiral-phase-based computer-generated holograms (CGHs) are adopted to make the depth-of-field of the microscope adjustable. The impact of system aberrations on the double-helix PSF at high numerical aperture is analyzed to reveal the necessity of aberration correction. A modified cepstrum-based reconstruction scheme is proposed in accordance with the properties of the new double-helix PSF. The extended depth-of-field images and the corresponding depth maps for both a simulated sample and a tilted section slice of bovine pulmonary artery endothelial (BPAE) cells are recovered, verifying that the depth-of-field is properly extended and that the depth of the specimen can be estimated with a precision of 23.4 nm. This three-dimensional fluorescence microscope, with a time resolution at the camera frame-rate level, is suitable for studying the fast developing processes of thin and sparsely distributed micron-scale cells over an extended depth-of-field. PMID:29296483

  7. M(sub W) = 7.2-7.4 Estimated for A.D. 900 Seattle Fault Earthquake by Modeling the Uplift of a Lidar-Mapped Marine Terrace

    NASA Technical Reports Server (NTRS)

    Muller, Jordan R.; Harding, David J.

    2006-01-01

    Inverse modeling of slip on the Seattle fault system, constrained by elevations of uplifted marine terraces, provides a well-constrained estimate of the magnitude of the largest known upper-crust earthquake in the Puget Sound region within the past 2500 years. The terrace elevations that constrain the slip inversion are extracted from elevation and slope images generated from LIDAR surveys of the Puget Sound collected in 1996-2002. The images reveal a single uplifted terrace, dated to 1000 cal yr B.P. near Restoration Point, which is morphologically continuous along the southern shoreline of Bainbridge Island and is visible at comparable elevations within a 25 km by 12 km region encompassing coastlines of West Seattle, Bremerton, East Bremerton, Port Orchard, and Waterman Point. Considering sea level changes since A.D. 900, the maximum uplift magnitudes of shoreline inner edges approach 9 m and are located at the southernmost coastline of Bainbridge Island and the northern tip of Waterman Point, while tilt magnitudes are modest, approaching 0.1 degrees. For each of several different Seattle fault geometry interpretations, we use a linear inversion code to solve for distributed slip on the fault surfaces. Moment magnitudes of 7.2 to 7.4 are calculated directly from the different slip solutions. In general, the greatest slip of the A.D. 900 event was confined to the frontal thrust of the Seattle fault system and was centered beneath Puget Sound between Restoration Point and Alki Point.

  8. Off-line tracking of series parameters in distribution systems using AMI data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Tess L.; Sun, Yannan; Schneider, Kevin

    2016-05-01

    Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal changes to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.

  9. Combining statistical inference and decisions in ecology

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.

    2016-01-01

    Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation, and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem.
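
    The link between loss functions and point estimators that SDT formalizes can be stated in two lines of code: with posterior samples in hand, the Bayes estimate is whatever minimizes posterior expected loss. This is a standard textbook fact, illustrated here for two common losses.

    ```python
    import numpy as np

    def bayes_point_estimate(posterior_samples, loss="squared"):
        """Bayes estimator under a chosen loss function.

        Squared-error loss is minimized by the posterior mean; absolute-error
        loss by the posterior median. Asymmetric losses (e.g. penalizing one
        kind of management error more heavily) shift the estimate accordingly.
        """
        s = np.asarray(posterior_samples, float)
        return s.mean() if loss == "squared" else float(np.median(s))
    ```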

  10. Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications

    PubMed Central

    Moccia, Antonio

    2014-01-01

    Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of a collision threat. The most important parameter for the assessment of collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between the own aircraft and the intruder given their current positions and speeds. Since established methodologies can suffer some loss of accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single-sensor framework. The analysis shows some accuracy improvements in the estimation of the Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
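
    For readers unfamiliar with the estimator, one bootstrap particle-filter cycle looks like the sketch below. The motion and likelihood models are left abstract, since the paper's radar-specific versions are not reproduced here.

    ```python
    import numpy as np

    def particle_filter_step(particles, weights, propagate, likelihood, z):
        """One predict/update/resample cycle of a bootstrap particle filter.

        particles: (N, d) state hypotheses; weights: (N,) normalized weights;
        propagate: stochastic motion model; likelihood(z, particles): p(z | x);
        z: current measurement. State estimate: weights @ particles.
        """
        particles = propagate(particles)                 # prediction
        weights = weights * likelihood(z, particles)     # measurement update
        weights /= weights.sum()
        n_eff = 1.0 / np.sum(weights ** 2)               # effective sample size
        if n_eff < 0.5 * len(weights):                   # systematic resampling
            positions = (np.arange(len(weights)) + np.random.rand()) / len(weights)
            idx = np.searchsorted(np.cumsum(weights), positions)
            idx = np.minimum(idx, len(weights) - 1)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights
    ```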

  11. Retrospective cost-effectiveness analyses for polio vaccination in the United States.

    PubMed

    Thompson, Kimberly M; Tebbens, Radboud J Duintjer

    2006-12-01

    The history of polio vaccination in the United States spans 50 years and includes different phases of the disease, multiple vaccines, and a sustained significant commitment of resources. We estimated cost-effectiveness ratios and assessed the net benefits of polio vaccination applicable at various points in time from the societal perspective and we discounted these back to appropriate points in time. We reconstructed vaccine price data from available sources and used these to retrospectively estimate the total costs of the U.S. historical polio vaccination strategies (all costs reported in year 2002 dollars). We estimate that the United States invested approximately US dollars 35 billion (1955 net present value, discount rate of 3%) in polio vaccines between 1955 and 2005 and will invest approximately US dollars 1.4 billion (1955 net present value, or US dollars 6.3 billion in 2006 net present value) between 2006 and 2015 assuming a policy of continued use of inactivated poliovirus vaccine (IPV) for routine vaccination. The historical and future investments translate into over 1.7 billion vaccinations that prevent approximately 1.1 million cases of paralytic polio and over 160,000 deaths (1955 net present values of approximately 480,000 cases and 73,000 deaths). Due to treatment cost savings, the investment implies net benefits of approximately US dollars 180 billion (1955 net present value), even without incorporating the intangible costs of suffering and death and of averted fear. Retrospectively, the U.S. investment in polio vaccination represents a highly valuable, cost-saving public health program. Observed changes in the cost-effectiveness ratio estimates over time suggest the need for living economic models for interventions that appropriately change with time. This article also demonstrates that estimates of cost-effectiveness ratios at any single time point may fail to adequately consider the context of the investment made to date and the importance of population and other dynamics, and shows the importance of dynamic modeling.
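
    The discounting arithmetic behind these figures is simple to reproduce; the sketch below computes a net present value at the stated 3% rate. The cash-flow series itself is a placeholder, not the study's reconstructed cost data.

    ```python
    def npv(cash_flows_by_year, rate=0.03, base_year=1955):
        """Net present value of a {year: net_benefit} stream, discounted back
        to base_year at the given annual rate (3%, as in the study)."""
        return sum(v / (1.0 + rate) ** (year - base_year)
                   for year, v in cash_flows_by_year.items())

    # Example with made-up numbers: costs in 1955, treatment savings later.
    print(npv({1955: -1.0e9, 1965: 2.0e9, 1975: 3.0e9}))
    ```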

  12. Model-based registration of multi-rigid-body for augmented reality

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro

    2009-02-01

    Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of a visual marker. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to the motion and transformation estimated from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then calculated simultaneously from the distances between the edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.

  13. Critical behaviour and vapour-liquid coexistence of 1-alkyl-3-methylimidazolium bis(trifluoromethylsulfonyl)amide ionic liquids via Monte Carlo simulations.

    PubMed

    Rai, Neeraj; Maginn, Edward J

    2012-01-01

    Atomistic Monte Carlo simulations are used to compute the vapour-liquid coexistence properties of a homologous series of [C(n)mim][NTf2] ionic liquids, with n = 1, 2, 4, 6. Estimates of the critical temperatures range from 1190 K to 1257 K, with longer cation alkyl chains serving to lower the critical temperature. Other quantities such as the critical density, critical pressure, normal boiling point, and acentric factor are determined from the simulations. Vapour pressure curves and the temperature dependence of the enthalpy of vapourisation are computed and found to depend only weakly on the length of the cation alkyl chain. The ions in the vapour phase are predominantly found in single ion pairs, although a significant number of ions are found in larger neutral clusters as the temperature is increased. Previous estimates of the critical point obtained by extrapolating experimental surface tension data are found to agree reasonably well with the predictions obtained here, but group contribution methods and primitive models of ionic liquids do not capture many of the trends observed in the present study.

  14. Quantum key distribution in a multi-user network at gigahertz clock rates

    NASA Astrophysics Data System (ADS)

    Fernandez, Veronica; Gordon, Karen J.; Collins, Robert J.; Townsend, Paul D.; Cova, Sergio D.; Rech, Ivan; Buller, Gerald S.

    2005-07-01

    In recent years quantum information research has led to the discovery of a number of remarkable new paradigms for information processing and communication. These developments include quantum cryptography schemes that offer unconditionally secure information transport guaranteed by quantum-mechanical laws. Such potentially disruptive security technologies could be of high strategic and economic value in the future. Two major issues confronting researchers in this field are the transmission range (typically <100 km) and the key exchange rate, which can be as low as a few bits per second over long optical fiber distances. This paper describes further research on an approach to significantly enhance the key exchange rate in an optical fiber system at distances in the range of 1-20 km. We present results on a number of application scenarios, including point-to-point links and multi-user networks. Quantum key distribution systems have been developed which use standard telecommunications optical fiber and which are capable of operating at clock rates of up to 2 GHz. They implement a polarization-encoded version of the B92 protocol and employ vertical-cavity surface-emitting lasers with emission wavelengths of 850 nm as weak coherent light sources, as well as silicon single-photon avalanche diodes as the single-photon detectors. The point-to-point quantum key distribution system exhibited a quantum bit error rate of 1.4% and an estimated net bit rate greater than 100,000 bits per second over a 4.2 km transmission range.

  15. Nonlinear Dynamic Modeling of Neuron Action Potential Threshold During Synaptically Driven Broadband Intracellular Activity

    PubMed Central

    Roach, Shane M.; Song, Dong; Berger, Theodore W.

    2012-01-01

    Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
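
    The turning-point measurement lends itself to a compact numerical sketch: differentiate the recorded membrane potential three times and pick local maxima above a threshold. The threshold value and the finite-difference scheme here are assumptions; the published method's exact detection criteria are not reproduced.

    ```python
    import numpy as np

    def ap_turning_points(v, dt, d3_thresh):
        """Locate action-potential turning points in a membrane-potential trace.

        v: sampled membrane potential [mV]; dt: sample interval [s].
        Turning points are taken as local maxima of the third time derivative
        that exceed d3_thresh; returns sample indices and the voltages there,
        which track the AP threshold up to a constant offset.
        """
        v = np.asarray(v, float)
        d3 = np.gradient(np.gradient(np.gradient(v, dt), dt), dt)
        core = d3[1:-1]
        is_peak = (core > d3_thresh) & (core >= d3[:-2]) & (core >= d3[2:])
        idx = np.where(is_peak)[0] + 1
        return idx, v[idx]
    ```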

  16. A high-resolution optical rangefinder using tunable focus optics and spatial photonic signal processing

    NASA Astrophysics Data System (ADS)

    Khwaja, Tariq S.; Mazhar, Mohsin Ali; Niazi, Haris Khan; Reza, Syed Azer

    2017-06-01

    In this paper, we present the design of a proposed optical rangefinder to determine the distance of a semi-reflective target from the sensor module. The sensor module deploys a simple Tunable Focus Lens (TFL), a Laser Source (LS) with a Gaussian beam profile, and a digital beam profiler/imager to achieve its desired operation. We show that, owing to the nature of existing measurement methodologies, attempts in prior art to use a simple TFL to estimate target distance mostly deliver "one-shot" distance estimates instead of obtaining and using a larger dataset, which can significantly reduce the effect of a few largely incorrect individual data points on the final distance estimate. Using a measurement dataset and calculating averages also helps smooth out measurement errors in individual data points by effectively low-pass filtering unexpectedly large measurement offsets. In this paper, we show that a simple setup deploying an LS, a TFL, and a beam profiler or imager is capable of delivering an entire measurement dataset, thus mitigating the effects on measurement accuracy associated with "one-shot" measurement techniques. In the proposed technique, a Gaussian beam from the LS passes through the TFL. Tuning the focal length of the TFL alters the spot size of the beam at the beam imager plane. Recording these different spot radii at the plane of the beam profiler for each unique setting of the TFL provides a measurement dataset from which a significantly improved estimate of the target distance can be obtained, as opposed to relying on a single measurement. We show that an iterative least-squares curve fit on the recorded data allows us to estimate the distances of remote objects very precisely. Using some basic ray-optics-based approximations, we also obtain an initial seed value for the distance estimate and subsequently refine this value through iterative residual reduction in the least-squares sense. In our experiments, we use a MEMS-based Digital Micro-mirror Device (DMD) as the beam imager/profiler, as it delivers an accurate estimate of a Gaussian beam profile. The proposed method, its working, and the distance estimation methodology are discussed in detail. As a proof of concept, we back our claims with initial experimental results.
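
    The curve-fit step can be prototyped with standard Gaussian-beam (complex-q) algebra. In the sketch below, the out-and-back geometry is folded into a single propagation of length 2L and the input beam is taken as collimated; both are simplifying assumptions relative to the paper's actual setup.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def spot_radius(P, L, w_in=1e-3, lam=633e-9):
        """Model beam radius at the profiler vs. TFL optical power P [1/m].

        A collimated input beam of waist w_in passes the TFL (thin lens of
        power P), then propagates a folded out-and-back path of length 2L
        to the profiler. All values are illustrative assumptions.
        """
        zR = np.pi * w_in ** 2 / lam    # Rayleigh range of the input beam
        inv_q = -1j / zR - P            # thin-lens transform of 1/q
        q = 1.0 / inv_q + 2.0 * L       # free-space propagation by 2L
        return np.sqrt(-lam / (np.pi * (1.0 / q).imag))

    def estimate_distance(powers, radii, L_seed):
        """Least-squares fit of the target distance L to measured spot radii,
        seeded (as in the paper) with a coarse ray-optics estimate L_seed."""
        (L_hat,), _ = curve_fit(spot_radius, powers, radii, p0=[L_seed])
        return L_hat
    ```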

  17. Heuristic estimation of electromagnetically tracked catheter shape for image-guided vascular procedures

    NASA Astrophysics Data System (ADS)

    Mefleh, Fuad N.; Baker, G. Hamilton; Kwartowitz, David M.

    2014-03-01

    In our previous work we presented a novel image-guided surgery (IGS) system, Kit for Navigation by Image Focused Exploration (KNIFE).1,2 KNIFE has been demonstrated to be effective in guiding mock clinical procedures, with the tip of an electromagnetically tracked catheter overlaid onto a pre-captured bi-plane fluoroscopic loop. Representation of the catheter in KNIFE differs greatly from what is captured by the fluoroscope, due to distortions and other properties of fluoroscopic images. When imaged by a fluoroscope, catheters can be visualized due to the inclusion of radiopaque materials (e.g., Bi, Ba, W) in the polymer blend.3 In KNIFE, however, catheter location is determined using a single tracking seed located in the catheter tip, which is represented as a single point overlaid on pre-captured fluoroscopic images. To bridge the gap in catheter representation between KNIFE and traditional methods, we constructed a catheter with five tracking seeds positioned along its distal 70 mm. We investigated the use of four spline interpolation methods for estimating true catheter shape and assessed their estimation errors. In this work we present a method for the evaluation of interpolation algorithms with respect to catheter shape determination.
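    For illustration, one spline family that could be compared looks like this in Python; the seed coordinates are hypothetical and the parametric cubic spline below is just an example choice, not necessarily one of the four methods the authors evaluated.

```python
import numpy as np
from scipy.interpolate import splev, splprep

# Hypothetical 3D positions (mm) of the five EM seeds along the distal 70 mm.
seeds = np.array([[0.0, 0.0, 0.0],
                  [15.0, 5.0, 2.0],
                  [30.0, 12.0, 5.0],
                  [48.0, 15.0, 9.0],
                  [70.0, 14.0, 15.0]])

# Parametric cubic spline through the seeds (s=0 forces interpolation).
tck, _ = splprep(seeds.T, s=0, k=3)
x, y, z = splev(np.linspace(0.0, 1.0, 200), tck)  # dense shape estimate
curve = np.column_stack([x, y, z])                # 200-point catheter curve
print(curve.shape)
```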

  18. Interpreting Repeated Temperature-Depth Profiles for Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Bense, Victor F.; Kurylyk, Barret L.; van Daal, Jonathan; van der Ploeg, Martine J.; Carey, Sean K.

    2017-10-01

    Temperature can be used to trace groundwater flow because subsurface advection disturbs the thermal regime. Prior hydrogeological studies that have used temperature-depth profiles to estimate vertical groundwater fluxes have either ignored the influence of climate change by employing steady-state analytical solutions or applied transient techniques to temperature-depth profiles recorded at only a single point in time. Transient analyses of a single profile are predicated on the accurate determination of an unknown profile at some time in the past to form the initial condition. In this study, we use both analytical solutions and a numerical model to demonstrate that boreholes with temperature-depth profiles recorded at multiple times can be analyzed to either overcome the uncertainty associated with estimating unknown initial conditions or to form an additional check for the profile fitting. We further illustrate that the common approach of assuming a linear initial temperature-depth profile can result in significant errors in groundwater flux estimates. Profiles obtained from a borehole in the Veluwe area, the Netherlands, in both 1978 and 2016 are analyzed as an illustrative example. Since many temperature-depth profiles were collected in the late 1970s and 1980s, these previously profiled boreholes represent a significant and underexploited opportunity to obtain repeat measurements that can be used for similar analyses at other sites around the world.
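    As a point of reference for the single-profile methods the paper improves on, a steady-state flux estimate can be sketched as follows; the Bredehoeft-Papadopulos form of the solution and all parameter values are assumptions for illustration, not the study's calibrated model.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
L = 60.0                  # profile depth span (m)
rho_c_w = 4.19e6          # volumetric heat capacity of water (J m^-3 K^-1)
k_t = 2.0                 # bulk thermal conductivity (W m^-1 K^-1)

def steady_profile(z, T0, TL, Pe):
    # Bredehoeft-Papadopulos steady solution for vertical advection.
    return T0 + (TL - T0) * np.expm1(Pe * z / L) / np.expm1(Pe)

z = np.linspace(0.5, L, 40)
T_obs = steady_profile(z, 10.0, 12.0, 1.5) + rng.normal(0.0, 0.02, z.size)

(T0, TL, Pe), _ = curve_fit(steady_profile, z, T_obs, p0=[10.0, 12.0, 0.5])
q_z = Pe * k_t / (rho_c_w * L)   # vertical Darcy flux (m/s), positive down
print(f"fitted Peclet number {Pe:.2f} -> q_z = {q_z:.2e} m/s")
```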

  19. Heat-flow measurements at shot points along the 1978 Saudi Arabia seismic deep-refraction line; Part I, Results of the measurements

    USGS Publications Warehouse

    Gettings, M.E.; Showail, Abdullah

    1982-01-01

    Heat-flow measurements were made at five onland shot points of the 1978 Saudi Arabian seismic deep-refraction line, which sample major tectonic elements of the Arabian Shield along a profile from Ar Riyad to the Farasan Islands. Because of the pattern drilling at each shot point, several holes (60 m deep) could be logged for temperature at each site, allowing a better estimate of the geothermal gradient. Each site was mapped and sampled in detail, and modal and chemical analyses of representative specimens were made in the laboratory. Thermal conductivities were computed from the modal analyses and single-mineral conductivity data. The resulting heat-flow values, combined with published values for the Red Sea and coastal plain, indicate a three-level pattern, with a heat flow of about 4.5 heat-flow units (HFU) over the Red Sea axial trough, about 3.0 HFU over the shelf and coastal plain, and an essentially constant 1.0 HFU over the Arabian Shield at points well away from the suture zone with the oceanic crust. At three sites where the rocks are granitic, gamma-ray spectrometry techniques were employed to estimate thorium, potassium, and uranium concentrations. The resulting plot of heat generation versus heat flow suggests that in the Arabian Shield the relationship between heat flow and heat production is not linear. More heat-flow data are essential to establish or reject this conclusion.

  20. Improved Modeling of Three-Point Estimates for Decision Making: Going Beyond the Triangle

    DTIC Science & Technology

    2016-03-01

    Improved Modeling of Three-Point Estimates for Decision Making: Going Beyond the Triangle. Master's thesis by Daniel W. Mulligan, March 2016; thesis advisor: Mark Rhoades. Distribution unlimited.

  1. The Role of Skull Modeling in EEG Source Imaging for Patients with Refractory Temporal Lobe Epilepsy.

    PubMed

    Montes-Restrepo, Victoria; Carrette, Evelien; Strobbe, Gregor; Gadeyne, Stefanie; Vandenberghe, Stefaan; Boon, Paul; Vonck, Kristl; Mierlo, Pieter van

    2016-07-01

    We investigated the influence of different skull modeling approaches on EEG source imaging (ESI), using data of six patients with refractory temporal lobe epilepsy who later underwent successful epilepsy surgery. Four realistic head models with different skull compartments, based on finite difference methods, were constructed for each patient: (i) Three models had skulls with compact and spongy bone compartments as well as air-filled cavities, segmented from either computed tomography (CT), magnetic resonance imaging (MRI) or a CT-template and (ii) one model included a MRI-based skull with a single compact bone compartment. In all patients we performed ESI of single and averaged spikes marked in the clinical 27-channel EEG by the epileptologist. To analyze at which time point the dipole estimations were closer to the resected zone, ESI was performed at two time instants: the half-rising phase and peak of the spike. The estimated sources for each model were validated against the resected area, as indicated by the postoperative MRI. Our results showed that single spike analysis was highly influenced by the signal-to-noise ratio (SNR), yielding estimations with smaller distances to the resected volume at the peak of the spike. Although averaging reduced the SNR effects, it did not always result in dipole estimations lying closer to the resection. The proposed skull modeling approaches did not lead to significant differences in the localization of the irritative zone from clinical EEG data with low spatial sampling density. Furthermore, we showed that a simple skull model (MRI-based) resulted in similar accuracy in dipole estimation compared to more complex head models (based on CT- or CT-template). Therefore, all the considered head models can be used in the presurgical evaluation of patients with temporal lobe epilepsy to localize the irritative zone from low-density clinical EEG recordings.

  2. Possible generational effects of habitat degradation on alligator reproduction

    USGS Publications Warehouse

    Fujisaki, Ikuko; Rice, K.G.; Woodward, A.R.; Percival, H.F.

    2007-01-01

    Population decline of the American alligator (Alligator mississippiensis) was observed in Lake Apopka in central Florida, USA, in the early 1980s. This decline was thought to result from adult mortality and nest failure caused by anthropogenic increases in sediment loads, nutrients, and contaminants. Reproductive impairment also was reported. Extensive restoration of marshes associated with Lake Apopka has been conducted, as well as some limited restoration measures on the lake. Monitoring by the Florida Fish and Wildlife Conservation Commission (FFWCC) has indicated that the adult alligator population began increasing in the early 1990s. We expected that the previously reported high proportion of complete nest failure (P0) during the 1980s may have decreased. We collected clutches from alligator nests in Lake Apopka from 1983 to 2003 and from 5 reference areas from 1988 to 1991, and we artificially incubated them. We used a Bayesian framework with a Gibbs sampler of Markov chain Monte Carlo simulation to analyze P0. Estimated P0 was consistently higher in Lake Apopka compared with reference areas, and the difference in P0 ranged from 0.19 to 0.56. We conducted change point analysis to identify and test the significance of the change point in P0 in Lake Apopka between 1983 and 2003, indicating the point of reproductive recovery. The estimated Bayes factor strongly supported the single change point hypothesis against the no change point hypothesis. The major downward shift in P0 probably occurred in the mid-1990s, approximately a generation after the major population decline in the 1980s. Furthermore, estimated P0 values after the change point (0.21) were comparable with those of reference areas (0.07-0.31). These results, combined with the monitoring by FFWCC, suggest that anthropogenic habitat degradation caused reproductive impairment of adult females and that decreases in P0 occurred with the sexual maturity of a new generation of breeding females. Long-term monitoring is essential to understand population changes due to habitat restoration. Such information can be used as an input in planning and evaluating restoration activities.
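    The single-versus-no change point comparison can be illustrated with a small enumeration. The sketch below is not the authors' Gibbs sampler: it uses conjugate Beta-Binomial marginal likelihoods on invented yearly counts of failed clutches, averages over candidate change points, and reports the resulting Bayes factor.

```python
import numpy as np
from scipy.special import betaln

# Illustrative yearly (failed, total) clutch counts; binomial coefficients
# are identical in both models and cancel in the Bayes factor, so they are
# omitted from the marginal likelihoods.
fails = np.array([18, 17, 19, 16, 18, 17, 15, 16, 14, 6, 5, 4, 5, 6, 4])
nests = np.full(fails.size, 20)

def log_marglik(f, n, a=1.0, b=1.0):
    # log marginal likelihood under one shared Beta(a, b) failure probability
    return betaln(a + f.sum(), b + (n - f).sum()) - betaln(a, b)

lm_none = log_marglik(fails, nests)
lm_tau = np.array([log_marglik(fails[:t], nests[:t]) +
                   log_marglik(fails[t:], nests[t:])
                   for t in range(1, fails.size)])
# Uniform prior over candidate change points tau.
lm_change = np.logaddexp.reduce(lm_tau) - np.log(lm_tau.size)

print(f"Bayes factor (change vs none): {np.exp(lm_change - lm_none):.1f}")
print(f"most probable change point index: {1 + int(np.argmax(lm_tau))}")
```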

  3. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  4. Feasibility study on the ultra-small launch vehicle

    NASA Astrophysics Data System (ADS)

    Hayashi, T.; Matsuo, H.; Yamamoto, H.; Orii, T.; Kimura, A.

    1986-10-01

    An idea for a very small satellite launcher and a very small satellite is presented. The launcher is a three-stage solid rocket based on the Japanese single-stage sounding rocket S-520. Its payload capability is estimated to be 17 kg into a 200 x 1000 km elliptical orbit. The spin-stabilized satellite with sun-pointing capability, though small, has almost all functions necessary for usual satellites. In its design, universality is stressed to meet various kinds of mission interface requirements; it can afford 5 kg to mission instruments.

  5. An object-oriented software for fate and exposure assessments.

    PubMed

    Scheil, S; Baumgarten, G; Reiter, B; Schwartz, S; Wagner, J O; Trapp, S; Matthies, M

    1995-07-01

    The model system CemoS (Chemical Exposure Model System) was developed for predicting exposure to hazardous chemicals released to the environment. Eight different models were implemented, simulating chemical fate in air, water, soil and plants after continuous or single emissions from point and diffuse sources. Scenario studies are supported by a substance database and an environmental database. All input data are checked for plausibility. Substance and environmental process estimation functions facilitate generic model calculations. CemoS is implemented in a modular structure using object-oriented programming.

  6. Who pays for agricultural injury care?

    PubMed

    Costich, Julia

    2010-01-01

    Analysis of 295 agricultural injury hospitalizations in a single state's hospital discharge database found that workers' compensation covered only 5% of the inpatient stays. Other sources were commercial health insurance (47%), Medicare (31%), and Medicaid (7%); 9% were uninsured. Estimated mean hospital and physician payments (not costs or charges) were $12,056 per hospitalization. Nearly one sixth (16%) of hospitalizations were either unreimbursed or covered by Medicaid, indicating a substantial cost-shift to public funding sources. Problems in characterizing agricultural injuries and states' exceptions to workers' compensation coverage mandates point to the need for comprehensive health coverage.

  7. Scleroderma prevalence: demographic variations in a population-based sample.

    PubMed

    Bernatsky, S; Joseph, L; Pineau, C A; Belisle, P; Hudson, M; Clarke, A E

    2009-03-15

    To estimate the prevalence of systemic sclerosis (SSc) using population-based administrative data, and to assess the sensitivity of case ascertainment approaches. We ascertained SSc cases from Quebec physician billing and hospitalization databases (covering approximately 7.5 million individuals). Three case definition algorithms were compared, and statistical methods accounting for imperfect case ascertainment were used to estimate SSc prevalence and case ascertainment sensitivity. A hierarchical Bayesian latent class regression model that accounted for possible between-test dependence conditional on disease status estimated the effect of patient characteristics on SSc prevalence and the sensitivity of the 3 ascertainment algorithms. Accounting for error inherent in both the billing and the hospitalization data, we estimated SSc prevalence in 2003 at 74.4 cases per 100,000 women (95% credible interval [95% CrI] 69.3-79.7) and 13.3 cases per 100,000 men (95% CrI 11.1-16.1). Prevalence was higher for older individuals, particularly in urban women (161.2 cases per 100,000, 95% CrI 148.6-175.0). Prevalence was lowest in young men (in rural areas, as low as 2.8 cases per 100,000, 95% CrI 1.4-4.8). In general, no single algorithm was very sensitive, with point estimates for sensitivity ranging from 20-73%. We found marked differences in SSc prevalence according to age, sex, and region. In general, no single case ascertainment approach was very sensitive for SSc. Therefore, using data from multiple sources, with adjustment for the imperfect nature of each, is an important strategy in population-based studies of SSc and similar conditions.

  8. Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.

    2013-12-01

    We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. Raw satellite measurements are pre-cleaned by a Kalman filter stream editor that uses a geometry-free combination of phase and range observables to speed convergence while also producing independent estimates of carrier phase biases and ionosphere delay. These are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real-time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point position streams for seismic monitoring, we have developed an inter-process communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests, re-streams and can rebroadcast up to 24 hours of point positions and the resultant seismic estimates derived from them to application clients distributed across the web. A suite of seismic monitoring applications has also been written, which includes position time series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping. We have also implemented continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed hydrodynamic Green functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and the seismic estimates based on them, including time series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single JAVA client called 'GPS Cockpit', which is available for download.

  9. Multi-GNSS precise point positioning (MGPPP) using raw observations

    NASA Astrophysics Data System (ADS)

    Liu, Teng; Yuan, Yunbin; Zhang, Baocheng; Wang, Ningbo; Tan, Bingfeng; Chen, Yongchang

    2017-03-01

    A joint-processing model for multi-GNSS (GPS, GLONASS, BDS and GALILEO) precise point positioning (PPP) is proposed, in which raw code and phase observations are used. In the proposed model, inter-system biases (ISBs) and GLONASS code inter-frequency biases (IFBs) are carefully considered, among which GLONASS code IFBs are modeled as a linear function of frequency numbers. To get the full rank function model, the unknowns are re-parameterized and the estimable slant ionospheric delays and ISBs/IFBs are derived and estimated simultaneously. One month of data in April, 2015 from 32 stations of the International GNSS Service (IGS) Multi-GNSS Experiment (MGEX) tracking network have been used to validate the proposed model. Preliminary results show that RMS values of the positioning errors (with respect to external double-difference solutions) for static/kinematic solutions (four systems) are 6.2 mm/2.1 cm (north), 6.0 mm/2.2 cm (east) and 9.3 mm/4.9 cm (up). One-day stabilities of the estimated ISBs described by STD values are 0.36 and 0.38 ns, for GLONASS and BDS, respectively. Significant ISB jumps are identified between adjacent days for all stations, which are caused by the different satellite clock datums in different days and for different systems. Unlike ISBs, the estimated GLONASS code IFBs are quite stable for all stations, with an average STD of 0.04 ns over a month. Single-difference experiment of short baseline shows that PPP ionospheric delays are more precise than traditional leveling ionospheric delays.
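    The IFB parameterization is easy to illustrate: per-satellite code biases are regressed on the GLONASS channel frequency number. The values below are synthetic, not MGEX estimates, and the fragment shows only the linear-in-frequency-number model, not the full PPP filter.

```python
import numpy as np

rng = np.random.default_rng(3)
k = np.arange(-7, 7, dtype=float)            # GLONASS FDMA frequency numbers
ifb = 0.8 * k + 2.1 + rng.normal(0.0, 0.05, k.size)   # synthetic biases (m)

# Least-squares fit of the linear-in-frequency-number IFB model.
A = np.column_stack([np.ones_like(k), k])
c0, c1 = np.linalg.lstsq(A, ifb, rcond=None)[0]
print(f"IFB intercept {c0:.2f} m, slope {c1:.2f} m per frequency number")
```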

  10. Evaluating The Reliability of Point Estimates of Wetland Evaporation

    NASA Astrophysics Data System (ADS)

    Gavin, H.; Agnew, C. T.

    The Penman-Monteith formulation of evaporation has been criticised for its reliance upon point estimates, raising concerns that areal estimates of wetland evaporation based upon single weather stations can be misleading. Typically wetlands are composed of a complex mosaic of land cover types, each of which can produce different evaporative rates. The need to account for wetland patches when monitoring hydrological fluxes has been noted, while Morton (1983) has long argued for a fundamentally different approach to the calculation of regional evaporation. This paper presents work carried out at a wet grassland in Southern England that was monitored with several automatic weather stations (AWS) and a Bowen ratio station to investigate microclimate variations. The significance of fetch was examined using the approach adopted by Gash (1986), based upon surface roughness, to estimate the fraction of evaporation sensed from a specific distance upwind of the monitoring station. This theoretical analysis reveals that, under stable atmospheric conditions, the fraction of evaporation contributed by the surrounding area steadily increases to a value of 77% at a distance of 224 m and thereafter declines rapidly. Thus point climate observations may not reflect surface conditions at greater distances. This result was tested through the deployment of four AWS around the wetland. The data yielded a different response, suggesting that homogeneous conditions prevailed and that the central AWS did provide reliable areal estimates of evaporation. The apparent contradiction is a result of not accounting for the wind speeds found in wetlands, which lead to widespread atmospheric mixing. These findings are typical of moist conditions, whereas, for example, Guo and Schuepp (1994) found that a patchwork of dry fields and wet ditches, characteristic of the study site in summer, could produce differences of up to 50% in evaporation. The paper will also present the initial results of an investigation of the role of dry patches upon wetland evaporation estimates. Morton, F.I. 1983. Operational estimates of evapotranspiration and their significance to the science and practice of hydrology. Journal of Hydrology 66: 1-76. Gash, J.H.C. 1986. A note on estimating the effect of limited fetch on micrometeorological evaporation measurements. Boundary Layer Meteorology 35: 409-413. Guo, Y. and Schuepp, P.H. 1994. On surface energy balance over the northern wetlands 1. The effects of small-scale temperature and wetness heterogeneity. Journal of Geophysical Research 99 (D1): 1601-1612.

  11. Approaches to Evaluating Probability of Collision Uncertainty

    NASA Technical Reports Server (NTRS)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done this mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to try to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for operationally-useful display and interpretation of these data for a particular conjunction is given.
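    The resampling idea can be sketched compactly. The fragment below, a minimal illustration with invented numbers rather than an operational tool, evaluates the 2D Pc by Monte Carlo integration of the Gaussian miss distribution over the hard-body circle, then resamples uncertain inputs (hard-body radius and a covariance scale factor) to turn the point estimate into a distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([120.0, 80.0])                     # nominal miss vector (m)
cov = np.array([[250.0**2, 0.3 * 250 * 180],
                [0.3 * 250 * 180, 180.0**2]])    # combined covariance (m^2)

def pc_mc(mu, cov, hbr, n=20_000):
    # Monte Carlo 2D Pc: fraction of miss samples inside the hard body.
    pts = rng.multivariate_normal(mu, cov, size=n)
    return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < hbr)

pc_nominal = pc_mc(mu, cov, hbr=20.0)
# Resample uncertain inputs to obtain a distribution of Pc values.
pc_samples = [pc_mc(mu, s * cov, hbr=rng.uniform(15.0, 25.0))
              for s in rng.lognormal(0.0, 0.3, size=200)]
lo, hi = np.percentile(pc_samples, [2.5, 97.5])
print(f"Pc nominal {pc_nominal:.1e}; 95% interval [{lo:.1e}, {hi:.1e}]")
```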

  12. Single-hit mechanism of tumour cell killing by radiation.

    PubMed

    Chapman, J D

    2003-02-01

    To review the relative importance of the single-hit mechanism of radiation killing for tumour response to 1.8-2.0 Gy day(-1) fractions and to low dose-rate brachytherapy. Tumour cell killing by ionizing radiation is well described by the linear-quadratic equation that contains two independent components distinguished by dose kinetics. Analyses of tumour cell survival curves that contain six or more dose points usually provide good estimates of the alpha- and beta-inactivation coefficients. Superior estimates of tumour cell intrinsic radiosensitivity are obtained when synchronized populations are employed. The characteristics of single-hit inactivation of tumour cells are reviewed and compared with the characteristics of beta-inactivation. Potential molecular targets associated with single-hit inactivation are discussed along with strategies for potentiating cell killing by this mechanism. The single-hit mechanism of tumour cell killing shows no dependence on dose-rate and, consequently, no evidence of sublethal damage repair. It is uniquely potentiated by high linear-energy-transfer radiation, exhibits a smaller oxygen enhancement ratio and exhibits a larger indirect effect by hydroxyl radicals than the beta-mechanism. alpha-inactivation coefficients vary slightly throughout interphase, but mitotic cells exhibit extremely high alpha-coefficients in the range of those observed for lymphocytes and some repair-deficient cells. Evidence is accumulating to suggest that chromatin in compacted form could be a radiation-hypersensitive target associated with single-hit radiation killing. Analyses of tumour cell survival curves demonstrate that it is the single-hit mechanism (alpha) that determines the majority of cell killing after doses of 2 Gy and that this mechanism is highly variable between tumour cell lines. The characteristics of single-hit inactivation are qualitatively and quantitatively distinct from those of beta-inactivation. Compacted chromatin in tumour cells should be further investigated as a radiation-hypersensitive target that could be modulated for therapeutic advantage.
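    The decomposition referred to above follows from the linear-quadratic model ln S = -(alpha*D + beta*D^2). The sketch below fits alpha and beta to an invented survival curve and reports the single-hit share of the killing at 2 Gy; it is an illustration of the standard model, not the review's analysis.

```python
import numpy as np

dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])          # dose (Gy)
surv = np.array([1.0, 0.70, 0.45, 0.15, 0.035, 0.006])   # surviving fraction

# Linear least squares for ln S = -(alpha*D + beta*D^2), no intercept.
A = np.column_stack([dose, dose**2])
alpha, beta = np.linalg.lstsq(A, -np.log(surv), rcond=None)[0]

# Share of the killing at 2 Gy attributable to the single-hit (alpha) term.
share = alpha * 2.0 / (alpha * 2.0 + beta * 4.0)
print(f"alpha = {alpha:.2f}/Gy, beta = {beta:.3f}/Gy^2, "
      f"single-hit share at 2 Gy = {share:.0%}")
```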

  13. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

    This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity exists among predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach for handling multicollinearity. The effect of high leverage points on the performance of the logistic ridge regression estimator is then investigated through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.
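    The contrast between the two estimators is easy to reproduce in miniature. The following sketch, with a simulated near-collinear design and a crude high-leverage contamination (both invented, not the article's data), compares maximum likelihood with L2-penalized logistic regression using scikit-learn; note that penalty=None requires scikit-learn 1.2 or later.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)          # near-collinear predictor
X = np.column_stack([x1, x2])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(x1 + x2)))).astype(int)
X[:3] += 8.0                                      # crude high-leverage points

mle = LogisticRegression(penalty=None, max_iter=5000).fit(X, y)
ridge = LogisticRegression(penalty="l2", C=0.5, max_iter=5000).fit(X, y)
print("MLE coefficients:  ", mle.coef_.round(2))
print("ridge coefficients:", ridge.coef_.round(2))
```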

  14. Use of three-point taper systems in timber cruising

    Treesearch

    James W. Flewelling; Richard L. Ernst; Lawrence M. Raynes

    2000-01-01

    Tree volumes and profiles are often estimated as functions of total height and DBH. Alternative estimators include form-class methods, importance sampling, the centroid method, and multi-point profile (taper) estimation systems; all of these require some measurement or estimate of upper stem diameters. The multi-point profile system discussed here allows for upper stem...

  15. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates: the measured distance from the bird to the surveyor, the volume of the call, and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce it. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including their inclusion in distance-analysis software.

  16. Pilot estimates of glidepath and aim point during simulated landing approaches

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    1981-01-01

    Pilot perceptions of glidepath angle and aim point were measured during simulated landings. A fixed-base cockpit simulator was used with video recordings of simulated landing approaches shown on a video projector. Pilots estimated the magnitudes of approach errors during observation without attempting to make corrections. Pilots estimated glidepath angular errors well, but had difficulty estimating aim-point errors. The data make plausible the hypothesis that pilots are little concerned with aim point during most of an approach, concentrating instead on keeping close to the nominal glidepath and trusting this technique to guide them to the proper touchdown point.

  17. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.

  18. Evaluation of effects of long term exposure on lethal toxicity with mammals.

    PubMed

    Verma, Vibha; Yu, Qiming J; Connell, Des W

    2014-02-01

    The relationship between exposure time (LT50) and lethal exposure concentration (LC50) has been evaluated over relatively long exposure times using a novel parameter, Normal Life Expectancy (NLT), as a long term toxicity point. The model equation ln(LT50) = a·LC50^ν + b, where a, b and ν are constants, was evaluated by plotting ln(LT50) against LC50 using available toxicity data based on inhalation exposure from 7 species of mammals. With each specific toxicant a single consistent relationship was observed for all mammals, with ν always <1. Use of NLT as a long term toxicity point provided a valuable limiting point for long exposure times. With organic compounds, the Kow can be used to calculate the model constants a and ν where these are unknown. The model can be used to characterise toxicity to specific mammals and then be extended to estimate toxicity at any exposure time with other mammals. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
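    A hedged sketch of fitting the model to paired (LC50, LT50) data follows, with NLT anchoring the zero-concentration end as the paper proposes; all data values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_lt50(lc50, a, v, b):
    return a * lc50**v + b

lc50 = np.array([0.0, 5.0, 20.0, 80.0, 300.0])        # exposure concentration
lt50 = np.array([1.8e5, 9.0e3, 2.0e3, 4.0e2, 6.0e1])  # median survival (h)
# The LC50 = 0 entry is the NLT anchor (roughly 20 years, in hours).

(a, v, b), _ = curve_fit(log_lt50, lc50, np.log(lt50),
                         p0=[-1.0, 0.5, 12.0],
                         bounds=([-np.inf, 0.01, -np.inf],
                                 [np.inf, 1.5, np.inf]))
print(f"a = {a:.3f}, v = {v:.2f} (< 1, as observed), b = {b:.2f}")
```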

  19. Computed potential energy surfaces for chemical reactions

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.

    1988-01-01

    The minimum energy path for the addition of a hydrogen atom to N2 is characterized in CASSCF/CCI calculations using the (4s3p2d1f/3s2p1d) basis set, with additional single point calculations at the stationary points of the potential energy surface using the (5s4p3d2f/4s3p2d) basis set. These calculations represent the most extensive set of ab initio calculations completed to date, yielding a zero point corrected barrier for HN2 dissociation of approx. 8.5 kcal mol(-1). The lifetime of the HN2 species is estimated from the calculated geometries and energetics using both conventional Transition State Theory and a method which utilizes an Eckart barrier to compute one dimensional quantum mechanical tunneling effects. It is concluded that the lifetime of the HN2 species is very short, greatly limiting its role in both termolecular recombination reactions and combustion processes.

  20. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by separately optimizing the univariate RRM for each analyzed trait via an adaptively penalized likelihood criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of order three for body weight (BWE) and body length (BL) and of order two for body depth (BD). Using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only the heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between single or pairwise growth time-points exceeded 0.5; moreover, correlations between early and late growth time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.

  1. A systematic evaluation of contemporary impurity correction methods in ITS-90 aluminium fixed point cells

    NASA Astrophysics Data System (ADS)

    da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham

    2017-06-01

    The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of the order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) has been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique were obtained from three separate laboratories. In addition, a series of high quality, long duration freezing curves were obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves was then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.

  2. Random regression models for the prediction of days to weight, ultrasound rib eye area, and ultrasound back fat depth in beef cattle.

    PubMed

    Speidel, S E; Peel, R K; Crews, D H; Enns, R M

    2016-02-01

    Genetic evaluation research designed to reduce the required days to a specified end point has received very little attention in pertinent scientific literature, given that its economic importance was first discussed in 1957. There are many production scenarios in today's beef industry, making a prediction of the required number of days to a single end point a suboptimal option. Random regression is an attractive alternative for calculating days to weight (DTW), days to ultrasound back fat (DTUBF), and days to ultrasound rib eye area (DTUREA) genetic predictions that could overcome the weaknesses of a single end point prediction. The objective of this study was to develop random regression approaches for the prediction of DTW, DTUREA, and DTUBF. Data were obtained from the Agriculture and Agri-Food Canada Research Centre, Lethbridge, AB, Canada. Data consisted of records on 1,324 feedlot cattle spanning 1999 to 2007. Individual animals averaged 5.77 observations, with weights, ultrasound rib eye area (UREA), ultrasound back fat depth (UBF), and ages ranging from 293 to 863 kg, 73.39 to 129.54 cm, 1.53 to 30.47 mm, and 276 to 519 d, respectively. Random regression models using Legendre polynomials were used to regress age of the individual on weight, UREA, and UBF. Fixed effects in the model included an overall fixed regression of age on end point (weight, UREA, and UBF) nested within breed to account for the mean relationship between age and weight, as well as a contemporary group effect consisting of breed of the animal (Angus, Charolais, and Charolais sired), feedlot pen, and year of measure. Likelihood ratio tests were used to determine the appropriate random polynomial order. Use of the quadratic polynomial did not account for any additional genetic variation in days for DTW (P > 0.11), for DTUREA (P > 0.18), and for DTUBF (P > 0.20) when compared with the linear random polynomial. Heritability estimates from the linear random regression for DTW ranged from 0.54 to 0.74, corresponding to end points of 293 and 863 kg, respectively. Heritability for DTUREA ranged from 0.51 to 0.34 and for DTUBF ranged from 0.55 to 0.37. These estimates correspond to UREA end points of 35 and 125 cm and UBF end points of 1.53 and 30 mm, respectively. This range of heritability shows DTW, DTUREA, and DTUBF to be highly heritable and indicates that selection pressure aimed at reducing the number of days to reach a finish weight end point can result in genetic change given sufficient data.

  3. Super-resolution image reconstruction from UAS surveillance video through affine invariant interest point-based motion estimation

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Wang, Yi; Camargo, Aldo; Martel, Florent

    2008-01-01

    In traditional super-resolution methods, researchers generally assume that accurate subpixel image registration parameters are given a priori. In reality, accurate image registration on a subpixel grid is the single most critically important step for the accuracy of super-resolution image reconstruction. In this paper, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate for super-resolution video enhancement. Affine invariant interest points include those corners that are invariant to affine transformations, including scale, rotation, and translation. They are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video captured by a small Unmanned Aircraft System (UAS) aircraft, which is highly susceptible to vibration from even light winds. The experimental results from real UAS surveillance video show that affine invariant interest points are more robust to perspective distortion and present more accurate matching than traditional Harris/SIFT corners. In our experiments on real video, all matching affine invariant interest points are found correctly. In addition, for the same super-resolution problem, we can use many fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution results.

  4. Percutaneous nephrostomy for symptomatic hypermobile kidney: a single centre experience.

    PubMed

    Starownik, Radosław; Golabek, Tomasz; Bar, Krzysztof; Muc, Kamil; Płaza, Paweł; Chlosta, Piotr

    2014-12-01

    Symptomatic hypermobile kidney is treated with nephropexy, a surgical procedure through which the floating kidney is fixed to the retroperitoneum. Although both open and endoscopic procedures have a high success rate, they can be associated with risk of complications, relatively long hospital stay and high cost. We describe our percutaneous technique for fixing a hypermobile kidney and evaluate the efficacy of the percutaneous nephrostomy insertion in management of symptomatic nephroptosis. Between January 2005 and December 2011, 11 patients diagnosed with a symptomatic right nephroptosis of at least 1 year duration were treated with a single point percutaneous nephrostomy technique. All data were retrieved from patients' medical records and then retrospectively analysed. Nephropexy through a single point percutaneous nephrostomy technique was successfully accomplished in 11 women. The mean operative time was 20 min. The intraoperative estimated blood loss was minimal in all cases. No major or minor intraoperative complications were noted. The average postoperative hospital stay was 2 days. Women returned to their usual activities 14 days following the surgery. Nine women had complete resolution of their pain, and 2 patients continued to complain of discomfort in their lumbar area. One patient was re-operated upon with satisfactory subjective and objective outcomes achieved. One patient refused re-operation. Percutaneous nephropexy is simple, inexpensive and effective for treatment of symptomatic hypermobile kidney. It remains a valuable alternative to open, laparoscopic, and robotic methods for fixing a floating kidney.

  5. Evaluation of the Potential for Drug Interactions With Patiromer in Healthy Volunteers

    PubMed Central

    Offman, Elliot; Brew, Christine Taylor; Garza, Dahlia; Benton, Wade; Mayo, Martha R.; Romero, Alain; Du Mond, Charles; Weir, Matthew R.

    2017-01-01

    Introduction: Patiromer is a potassium-binding polymer that is not systemically absorbed; however, it may bind coadministered oral drugs in the gastrointestinal tract, potentially reducing their absorption. Methods: Twelve randomized, open-label, 3-period, 3-sequence crossover studies were conducted in healthy volunteers to evaluate the effect of patiromer (perpetrator drug) on absorption and single-dose pharmacokinetics (PK) of drugs (victims) that might be commonly used with patiromer. Subjects received victim drug alone, victim drug administered together with patiromer 25.2 g (highest approved dose), and victim drug administered 3 hours before patiromer 25.2 g. The primary PK endpoints were area under the curve extrapolated to infinity (AUC0-∞) and maximum concentration (Cmax). Results were reported as 90% confidence intervals (CIs) about the geometric mean AUC0-∞ and Cmax ratios with prespecified equivalence limits of 80% to 125%. Results: Overall, 370 subjects were enrolled, with 365 receiving ≥1 dose of patiromer; 351 subjects completed the studies and all required treatments. When coadministered with patiromer, the 90% CIs for AUC0-∞ remained within 80% to 125% for 9 drugs (amlodipine, cinacalcet, clopidogrel, furosemide, lithium, metoprolol, trimethoprim, verapamil, and warfarin). The AUC0-∞ point estimate ratios for levothyroxine and metformin with patiromer coadministration were ≥80%, with the lower bounds of the 90% CIs at 76.8% and 72.8%, respectively. For ciprofloxacin, the point estimate for AUC0-∞ was 71.5% (90% CI: 65.3-78.4). For 8 of 12 drugs, point estimates for Cmax were ≥80% with patiromer coadministration; for ciprofloxacin, clopidogrel, metformin, and metoprolol, the point estimates were <80%. When patiromer was administered 3 hours after each victim drug, the 90% CIs for AUC0-∞ and Cmax for each drug were within the prespecified 80% to 125% limits. Conclusion: For 9 of the 12 drugs coadministered with patiromer, there were no clinically significant drug–drug interactions. For 3 drugs (ciprofloxacin, levothyroxine, and metformin), a 3-hour separation between patiromer and their administration resulted in no clinically significant drug–drug interactions. PMID:28585859

  6. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
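    The stump idea reduces to a one-dimensional breakpoint search. The sketch below, with a simulated p-value series rather than the clinical data, finds the split of a two-level step function that minimizes within-segment squared error; the real method's interval tests, bootstrap intervals, and bias correction are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
# Small p-values while the hazard is still falling, roughly uniform after
# the hazard becomes constant (simulated stand-ins for the interval tests).
p_vals = np.concatenate([rng.uniform(0.0, 0.05, 25),
                         rng.uniform(0.0, 1.0, 35)])

def stump_change_point(p):
    # Exhaustive search for the two-segment split with minimum SSE.
    sse = [t * np.var(p[:t]) + (p.size - t) * np.var(p[t:])
           for t in range(2, p.size - 1)]
    return 2 + int(np.argmin(sse))

print(f"estimated change point after interval {stump_change_point(p_vals)}")
```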

  7. Freeze-drying process design by manometric temperature measurement: design of a smart freeze-dryer.

    PubMed

    Tang, Xiaolin Charlie; Nail, Steven L; Pikal, Michael J

    2005-04-01

    To develop a procedure based on manometric temperature measurement (MTM) and an expert system for good practices in freeze drying that will allow development of an optimized freeze-drying process during a single laboratory freeze-drying experiment. Freeze drying was performed with a FTS Dura-Stop/Dura-Top freeze dryer with the manometric temperature measurement software installed. Five percent solutions of glycine, sucrose, or mannitol with 2 ml to 4 ml fill in 5 ml vials were used, with all vials loaded on one shelf. Details of freezing, optimization of chamber pressure, target product temperature, and some aspects of secondary drying are determined by the expert system algorithms. MTM measurements were used to select the optimum shelf temperature, to determine drying end points, and to evaluate residual moisture content in real-time. MTM measurements were made at 1 hour or half-hour intervals during primary drying and secondary drying, with a data collection frequency of 4 points per second. The improved MTM equations were fit to pressure-time data generated by the MTM procedure using Microcal Origin software to obtain product temperature and dry layer resistance. Using heat and mass transfer theory, the MTM results were used to evaluate mass and heat transfer rates and to estimate the shelf temperature required to maintain the target product temperature. MTM product dry layer resistance is accurate until about two-thirds of total primary drying time is over, and the MTM product temperature is normally accurate almost to the end of primary drying provided that effective thermal shielding is used in the freeze-drying process. The primary drying times can be accurately estimated from mass transfer rates calculated very early in the run, and we find the target product temperature can be achieved and maintained with only a few adjustments of shelf temperature. The freeze-dryer overload conditions can be estimated by calculation of heat/mass flow at the target product temperature. It was found that the MTM results serve as an excellent indicator of the end point of primary drying. Further, we find that the rate of water desorption during secondary drying may be accurately measured by a variation of the basic MTM procedure. Thus, both the end point of secondary drying and real-time residual moisture may be obtained during secondary drying. Manometric temperature measurement and the expert system for good practices in freeze drying does allow development of an optimized freeze-drying process during a single laboratory freeze-drying experiment.

  8. A scalable approach for tree segmentation within small-footprint airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun

    2017-05-01

    This paper presents a distributed approach that scales up to segment tree crowns within a LiDAR point cloud representing an arbitrarily large forested area. The approach uses a single-processor tree segmentation algorithm as a building block in order to process the data delivered in the shape of tiles in parallel. The distributed processing is performed in a master-slave manner, in which the master maintains the global map of the tiles and coordinates the slaves that segment tree crowns within and across the boundaries of the tiles. A minimal bias was introduced to the number of detected trees because of trees lying across the tile boundaries, which was quantified and adjusted for. Theoretical and experimental analyses of the runtime of the approach revealed a near linear speedup. The estimated number of trees categorized by crown class and the associated error margins as well as the height distribution of the detected trees aligned well with field estimations, verifying that the distributed approach works correctly. The approach enables providing information of individual tree locations and point cloud segments for a forest-level area in a timely manner, which can be used to create detailed remotely sensed forest inventories. Although the approach was presented for tree segmentation within LiDAR point clouds, the idea can also be generalized to scale up processing other big spatial datasets.
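    The master-slave scheme can be outlined in a few lines of Python. In the sketch below, segment_tile is a stand-in for any single-processor segmentation algorithm, the tiles hold placeholder results instead of LiDAR points, and the half-count rule for boundary crowns is an assumed form of the bias adjustment the paper quantifies.

```python
from multiprocessing import Pool

def segment_tile(tile):
    """Segment one tile; return (interior_tree_count, boundary_tree_count)."""
    # ... run the single-processor tree segmentation on tile["points"] ...
    return tile["interior"], tile["boundary"]   # placeholder result

if __name__ == "__main__":
    # Placeholder tiles; in practice each would hold a LiDAR point subset.
    tiles = [{"interior": 140, "boundary": 12},
             {"interior": 95, "boundary": 9},
             {"interior": 211, "boundary": 17}]
    with Pool() as pool:                        # master coordinates the slaves
        results = pool.map(segment_tile, tiles)
    # Each boundary crown is seen by two adjacent tiles: count it half.
    total = sum(i + b / 2.0 for i, b in results)
    print(f"adjusted tree count: {total:.0f}")
```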

  9. A Simulation Study Comparison of Bayesian Estimation with Conventional Methods for Estimating Unknown Change Points

    ERIC Educational Resources Information Center

    Wang, Lijuan; McArdle, John J.

    2008-01-01

    The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…

  10. Modeling of microclimatic characteristics of highland area

    NASA Astrophysics Data System (ADS)

    Sitdikova, Iuliia; Rusin, Igor

    2013-04-01

    Microclimatic characteristics of highlands may vary considerably over distances of a few meters depending on slope and aspect. Estimating the components of the surface energy balance from single-station observations is a recognized problem in describing highland microclimates. The aim of this paper is to develop a method that reconstructs the microclimatic characteristics of the terrain from observations at a single station by physical extrapolation. The input parameters used to obtain the microclimatic characteristics are as follows: air temperature, relative humidity, and wind speed at two vertical levels, air pressure, surface temperature, direct and diffuse solar radiation, and surface albedo. The recent version of the Meteorological Radiation Model (MRM) was used to calculate solar radiation over the area and to estimate the influence of cloud amount. Height, slope, and aspect were accounted for at each point using a digital elevation model. Air temperature and specific humidity were assumed to vary with altitude only. Net radiation was calculated at all points of the area. The difference between the surface temperature and the air temperature was assumed to be a linear function of net radiation, with an empirical coefficient that depends on wind speed and is adjusted for the given area. Latent and sensible fluxes are calculated using the modified Bowen ratio, which varies across the area. The method was tested during field research in the Krasnodar region (Russian Federation). Meteorological observations were made every three hours at the actinometric and gradient sites. An additional gradient site with a different slope orientation was set up 400 meters from the main site. A topographic survey of a 1 x 1.3 km area was made to construct a digital elevation model. Radiation and heat balance components were calculated at all points of the area. The results are maps of surface temperature, net radiation, and latent and sensible heat fluxes. The calculations showed that the area-averaged heat balance components differ significantly from the values observed at the meteorological station.
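    The extrapolation step can be illustrated with a toy grid. The fragment below assumes illustrative coefficient values (not the calibrated ones from the field campaign), takes the surface-air temperature difference as linear in net radiation, and partitions the available energy with a Bowen ratio; the crude ground heat flux fraction is also an assumption.

```python
import numpy as np

rn = np.array([[420.0, 380.0],
               [310.0, 150.0]])   # net radiation over a toy grid (W m^-2)
t_air = 12.0                      # station air temperature (deg C)
c_wind = 0.012                    # assumed wind-dependent coefficient (K per W m^-2)
bowen = 0.6                       # assumed Bowen ratio for the area

t_surf = t_air + c_wind * rn      # surface temperature map (deg C)
g = 0.1 * rn                      # assumed ground heat flux fraction
le = (rn - g) / (1.0 + bowen)     # latent heat flux map (W m^-2)
h = bowen * le                    # sensible heat flux map (W m^-2)
print(t_surf, le, h, sep="\n")
```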

  11. Comparison of Two Methods for Estimating Adjustable One-Point Cane Length in Community-Dwelling Older Adults.

    PubMed

    Camara, Camila Thais Pinto; de Freitas, Sandra Maria Sbeghen Ferreira; de Lima, Waléria Paixão; Lima, Camila Astolphi; Amorim, César Ferreira; Perracini, Monica Rodrigues

    2017-01-01

    Our aim was to estimate the inter-observer reliability, test-retest reliability, anthropometric and biomechanical adequacy, and minimal detectable change when measuring the length of single-point adjustable canes in community-dwelling older adults. The study included 112 participants: men and women aged 60 years and over attending an outpatient community health centre. An exploratory study design was used. Participants underwent two assessments within the same day by two independent observers, and by the same observer at an interval of 15-45 days. Two measures were used to establish the length of a single-point adjustable cane: the distance from the distal wrist crease to the floor (WF) and the distance from the top of the greater trochanter of the femur to the floor (TF). Each individual was fitted according to these two measures, and the elbow flexion angle was measured. Inter-observer and test-retest reliability were high for both the TF (ICC(3,1) = 0.918 and ICC(2,1) = 0.935) and WF measures (ICC(3,1) = 0.967 and ICC(2,1) = 0.960). Only 1% of the individuals kept an elbow flexion angle within the standard recommendation of 30° ± 10° when the cane length was determined by the TF measure, versus 30% of the participants when the cane was determined by the WF measure. The minimal detectable cane length change was 2.2 cm. Our results suggest that, even though both measures are reliable, cane length determined by the WF distance is more appropriate for keeping the elbow flexion angle within the standard recommendation. The minimal detectable change corresponds to approximately one hole in the cane adjustment. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Point of zero potential of single-crystal electrode/inert electrolyte interface.

    PubMed

    Zarzycki, Piotr; Preočanin, Tajana

    2012-03-15

    Most environmentally important processes occur at specific hydrated mineral faces. Their rates and mechanisms are in part controlled by the interfacial electrostatics, which can be quantitatively described by the point of zero potential (PZP). Unfortunately, the PZP value of a specific crystal face is very difficult to determine experimentally. Here we show that the PZP can be extracted from a single-crystal electrode potentiometric titration, assuming stable electrochemical cell resistivity and no specific sorption of electrolyte ions. Our method is based on determining a common intersection point of the electrochemical cell electromotive force at various ionic strengths, and it is illustrated for a few selected surfaces of rutile, hematite, silver chloride, and silver bromide monocrystals. In the case of metal oxides, we observed higher PZP values than those theoretically predicted using the MultiSite Complexation Model (MUSIC): 8.4 for (001) hematite (MUSIC-predicted ~6), 8.7 for (110) rutile (MUSIC-predicted ~6), and about 7 for (001) rutile (MUSIC-predicted 6.6). In the case of the silver halides, the order of the estimated PZP values (6.4 for AgCl < 6.5 for AgBr) agrees well with the sequence estimated from the silver halide solubility products; however, the halide anions (Cl-, Br-) are attracted toward the surface much more strongly than the Ag+ cations. The observed PZP sequence and the strong anion affinity for the silver halide surface can be correlated with the ions' hydration energies. The presented approach is complementary to the hysteresis method reported previously [P. Zarzycki, S. Chatman, T. Preočanin, K.M. Rosso, Langmuir 27 (2011) 7986-7990]. The unique experimental characterization of specific crystal faces provided by these two methods is essential for a deeper understanding of environmentally important processes, including the migration of heavy-metal and radioactive ions in soils and groundwaters. Copyright © 2012 Elsevier Inc. All rights reserved.
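
    A minimal sketch, on synthetic data, of the common-intersection-point idea: scan a fine pH grid for the point at which EMF curves recorded at different ionic strengths agree most closely. The curves and numbers below are invented for illustration and are not the paper's measurements.

        import numpy as np

        def common_intersection(ph, emf_curves):
            """Estimate the pH at which EMF(pH) curves recorded at different
            ionic strengths intersect, by minimizing the across-curve spread
            of interpolated EMF values on a fine pH grid."""
            grid = np.linspace(ph.min(), ph.max(), 2000)
            interp = np.array([np.interp(grid, ph, e) for e in emf_curves])
            return grid[interp.std(axis=0).argmin()]

        ph = np.linspace(3.0, 11.0, 50)
        # Synthetic curves crossing near pH 8.4 (illustrative only)
        curves = np.array([s * (ph - 8.4) + 120.0 for s in (-25.0, -18.0, -12.0)])
        print(f"estimated PZP at pH {common_intersection(ph, curves):.2f}")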

  13. Standard International prognostic index remains a valid predictor of outcome for patients with aggressive CD20+ B-cell lymphoma in the rituximab era.

    PubMed

    Ziepert, Marita; Hasenclever, Dirk; Kuhnt, Evelyn; Glass, Bertram; Schmitz, Norbert; Pfreundschuh, Michael; Loeffler, Markus

    2010-05-10

    The International Prognostic Index (IPI) is widely used for risk stratification of patients with aggressive B-cell lymphoma. The introduction of rituximab has markedly improved outcome, and R-CHOP (rituximab plus cyclophosphamide, doxorubicin, vincristine, prednisone) has become the standard treatment for CD20+ diffuse large B-cell lymphoma. To investigate whether the IPI has maintained its power for risk stratification when rituximab is combined with CHOP, we analyzed the prognostic relevance of the IPI in three prospective clinical trials. In total, 1,062 patients treated with rituximab were included (MabThera International Trial [MInT], 380 patients; dose-escalated cyclophosphamide, doxorubicin, vincristine, etoposide, and prednisone [MegaCHOEP] trial, 72 patients; CHOP plus rituximab for patients older than age 60 years [RICOVER-60] trial, 610 patients). Multivariate proportional hazards modeling was performed for single IPI factors under rituximab for event-free, progression-free, and overall survival. IPI score was significant for all three end points. Rituximab significantly improved treatment outcome within each IPI group, resulting in a quenching of the Kaplan-Meier estimators. Nevertheless, the IPI remained a significant prognostic factor for all three end points, and the ordering of the IPI groups remained valid. The relative risk estimates of single IPI factors and their order in patients treated with R-CHOP were similar to those found with CHOP. The effects of rituximab were superimposed on the effects of CHOP, with no interaction between chemotherapy and antibody therapy. These results demonstrate that the IPI is still valid in the R-CHOP era.

  14. Integrating Low-Cost Mems Accelerometer Mini-Arrays (mama) in Earthquake Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Nof, R. N.; Chung, A. I.; Rademacher, H.; Allen, R. M.

    2016-12-01

    Current operational Earthquake Early Warning Systems (EEWS) acquire data with networks of single seismic stations and compute source parameters assuming earthquakes to be point sources. For large events, the point-source assumption leads to an underestimation of magnitude, and the use of single stations leads to large uncertainties in the locations of events outside the network. We propose the use of mini-arrays to improve EEWS. Mini-arrays have the potential to (a) estimate reliable hypocentral locations by beamforming (FK-analysis) techniques and (b) characterize the rupture dimensions and account for finite-source effects, leading to more reliable estimates for large magnitudes. Previously, the high price of multiple seismometers has made creating arrays cost-prohibitive. We therefore propose setting up mini-arrays of a new seismometer based on low-cost (<$150), high-performance MEMS accelerometers around conventional seismic stations. The expected benefits of such an approach include decreased alert times, improved real-time shaking predictions, and fewer false alarms. We use low-resolution 14-bit Quake Catcher Network (QCN) data collected during the Rapid Aftershock Mobilization Program (RAMP) in Christchurch, NZ, following the M7.1 Darfield earthquake in September 2010. As the QCN network was so dense, we were able to use small sub-arrays of up to ten sensors spread over an area of at most 1.7 x 2.2 km2 to demonstrate our approach and to solve for the back-azimuth (BAZ) of two events (Mw4.7 and Mw5.1) with less than ±10° error. We will also present details, benchmarks, and real-time measurements of the new 24-bit device.
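
    A minimal delay-and-sum beam-power search conveys how a mini-array can recover a back-azimuth. This is an independent illustration, not the QCN processing chain: the slowness is held fixed (a real FK analysis would scan it too) and sign conventions are simplified; the synthetic plane wave below is generated with the same conventions so the demo is self-consistent.

        import numpy as np

        def estimate_baz(coords, waveforms, dt, slowness=0.3):
            """Grid-search back-azimuth by maximizing delay-and-sum beam power.
            coords: (n, 2) station east/north offsets in km;
            waveforms: (n, m) traces sampled at dt; slowness in s/km."""
            n, m = waveforms.shape
            t = np.arange(m) * dt
            best_baz, best_power = 0.0, -np.inf
            for baz in np.arange(0.0, 360.0, 1.0):
                az = np.deg2rad(baz)
                s = slowness * np.array([np.sin(az), np.cos(az)])
                beam = np.zeros(m)
                for k in range(n):
                    delay = coords[k] @ s   # relative delay, seconds
                    beam += np.interp(t, t - delay, waveforms[k], left=0.0, right=0.0)
                power = np.sum((beam / n) ** 2)
                if power > best_power:
                    best_baz, best_power = baz, power
            return best_baz

        # Synthetic plane wave from BAZ = 120 degrees over a 10-station array
        rng = np.random.default_rng(0)
        coords = rng.uniform(-1.0, 1.0, (10, 2))
        t = np.arange(500) * 0.01
        az = np.deg2rad(120.0)
        s = 0.3 * np.array([np.sin(az), np.cos(az)])
        waveforms = np.array([np.exp(-((t - 2.5 - c @ s) / 0.05) ** 2) for c in coords])
        print(estimate_baz(coords, waveforms, 0.01), "degrees")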

  15. Functional Multi-Locus QTL Mapping of Temporal Trends in Scots Pine Wood Traits

    PubMed Central

    Li, Zitong; Hallingbäck, Henrik R.; Abrahamsson, Sara; Fries, Anders; Gull, Bengt Andersson; Sillanpää, Mikko J.; García-Gil, M. Rosario

    2014-01-01

    Quantitative trait loci (QTL) mapping of wood properties in conifer species has focused on single time point measurements or on trait means based on heterogeneous wood samples (e.g., increment cores), thus ignoring systematic within-tree trends. In this study, functional QTL mapping was performed for a set of important wood properties in increment cores from a 17-yr-old Scots pine (Pinus sylvestris L.) full-sib family with the aim of detecting wood trait QTL for general intercepts (means) and for linear slopes by increasing cambial age. Two multi-locus functional QTL analysis approaches were proposed and their performances were compared on trait datasets comprising 2 to 9 time points, 91 to 455 individual tree measurements, and genotype datasets of amplified fragment length polymorphism (AFLP) and single nucleotide polymorphism (SNP) markers. The first method was a multilevel LASSO analysis whereby trend parameter estimation and QTL mapping were conducted consecutively; the second method was our Bayesian linear mixed model whereby trends and underlying genetic effects were estimated simultaneously. We also compared several different hypothesis testing methods under either the LASSO or the Bayesian framework to perform QTL inference. In total, five and four significant QTL were observed for the intercepts and slopes, respectively, across wood traits such as earlywood percentage, wood density, radial fiber width, and spiral grain angle. Four of these QTL were represented by candidate gene SNPs, thus providing promising targets for future research in QTL mapping and molecular function. Bayesian and LASSO methods both detected similar sets of QTL given datasets that comprised large numbers of individuals. PMID:25305041

  16. Functional multi-locus QTL mapping of temporal trends in Scots pine wood traits.

    PubMed

    Li, Zitong; Hallingbäck, Henrik R; Abrahamsson, Sara; Fries, Anders; Gull, Bengt Andersson; Sillanpää, Mikko J; García-Gil, M Rosario

    2014-10-09

    Quantitative trait loci (QTL) mapping of wood properties in conifer species has focused on single time point measurements or on trait means based on heterogeneous wood samples (e.g., increment cores), thus ignoring systematic within-tree trends. In this study, functional QTL mapping was performed for a set of important wood properties in increment cores from a 17-yr-old Scots pine (Pinus sylvestris L.) full-sib family with the aim of detecting wood trait QTL for general intercepts (means) and for linear slopes by increasing cambial age. Two multi-locus functional QTL analysis approaches were proposed and their performances were compared on trait datasets comprising 2 to 9 time points, 91 to 455 individual tree measurements, and genotype datasets of amplified fragment length polymorphism (AFLP) and single nucleotide polymorphism (SNP) markers. The first method was a multilevel LASSO analysis whereby trend parameter estimation and QTL mapping were conducted consecutively; the second method was our Bayesian linear mixed model whereby trends and underlying genetic effects were estimated simultaneously. We also compared several different hypothesis testing methods under either the LASSO or the Bayesian framework to perform QTL inference. In total, five and four significant QTL were observed for the intercepts and slopes, respectively, across wood traits such as earlywood percentage, wood density, radial fiber width, and spiral grain angle. Four of these QTL were represented by candidate gene SNPs, thus providing promising targets for future research in QTL mapping and molecular function. Bayesian and LASSO methods both detected similar sets of QTL given datasets that comprised large numbers of individuals. Copyright © 2014 Li et al.

  17. Application of Modern Design of Experiments to CARS Thermometry in a Model Scramjet Engine

    NASA Technical Reports Server (NTRS)

    Danehy, P. M.; DeLoach, R.; Cutler, A. D.

    2002-01-01

    We have applied formal experiment design and analysis to optimize the measurement of temperature in a supersonic combustor at NASA Langley Research Center. We used the coherent anti-Stokes Raman spectroscopy (CARS) technique to map the temperature distribution in the flowfield downstream of an 1160 K, Mach 2 freestream into which supersonic hydrogen fuel is injected at an angle of 30 degrees. CARS thermometry is inherently a single-point measurement technique; it was used to map the flow by translating the measurement volume through the flowfield. The method known as Modern Design of Experiments (MDOE) was used to estimate the data volume required, design the test matrix, perform the experiment, and analyze the resulting data. MDOE allowed us to match the volume of data acquired to the precision requirements of the customer. Furthermore, one aspect of MDOE, known as response surface methodology, allowed us to develop precise maps of the flowfield temperature, allowing interpolation between measurement points. An analytic function in two spatial variables was fit to the data from a single measurement plane. Fitting with a Cosine Series Bivariate Function allowed the mean temperature to be mapped with 95% confidence interval half-widths of ±30 K, comfortably meeting the ±50 K requirement specified prior to performing the experiments. We estimate that applying MDOE to the present experiment saved a factor of 5 in data volume acquired, compared to experiments executed in the traditional manner; furthermore, the precision requirements could have been met with less than half the data acquired.
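
    The response-surface step can be sketched as an ordinary least-squares fit of a truncated bivariate cosine series. The basis size and synthetic data below are illustrative, not the paper's fit.

        import numpy as np

        def fit_cosine_surface(x, y, temp, nx=4, ny=4):
            """Least-squares fit of T(x, y) to a truncated cosine series,
            with x and y scaled to [0, 1]. Returns a predictor function."""
            terms = [(i, j) for i in range(nx) for j in range(ny)]
            A = np.column_stack([np.cos(np.pi * i * x) * np.cos(np.pi * j * y)
                                 for i, j in terms])
            coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
            def predict(xq, yq):
                Aq = np.column_stack([np.cos(np.pi * i * xq) * np.cos(np.pi * j * yq)
                                      for i, j in terms])
                return Aq @ coef
            return predict

        # Synthetic check: recover a smooth temperature surface from noisy samples
        rng = np.random.default_rng(0)
        x, y = rng.random(300), rng.random(300)
        temp = 1200 + 300 * np.cos(np.pi * x) * np.cos(np.pi * y) + rng.normal(0, 20, 300)
        predict = fit_cosine_surface(x, y, temp)
        print(f"T(0.5, 0.5) = {predict(np.array([0.5]), np.array([0.5]))[0]:.0f} K")  # ~1200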

  18. NASA Tech Briefs, March 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics include: Advanced Systems for Monitoring Underwater Sounds; Wireless Data-Acquisition System for Testing Rocket Engines; Processing Raw HST Data With Up-to-Date Calibration Data; Mobile Collection and Automated Interpretation of EEG Data; System for Secure Integration of Aviation Data; Servomotor and Controller Having Large Dynamic Range; Digital Multicasting of Multiple Audio Streams; Translator for Optimizing Fluid-Handling Components; AIRSAR Web-Based Data Processing; Pattern Matcher for Trees Constructed From Lists; Reducing a Knowledge-Base Search Space When Data Are Missing; Ground-Based Correction of Remote-Sensing Spectral Imagery; State-Chart Autocoder; Pointing History Engine for the Spitzer Space Telescope; Low-Friction, High-Stiffness Joint for Uniaxial Load Cell; Magnet-Based System for Docking of Miniature Spacecraft; Electromechanically Actuated Valve for Controlling Flow Rate; Plumbing Fixture for a Microfluidic Cartridge; Camera Mount for a Head-Up Display; Core-Cutoff Tool; Recirculation of Laser Power in an Atomic Fountain; Simplified Generation of High-Angular-Momentum Light Beams; Imaging Spectrometer on a Chip; Interferometric Quantum-Nondemolition Single-Photon Detectors; Ring-Down Spectroscopy for Characterizing a CW Raman Laser; Complex Type-II Interband Cascade MQW Photodetectors; Single-Point Access to Data Distributed on Many Processors; Estimating Dust and Water Ice Content of the Martian Atmosphere From THEMIS Data; Computing a Stability Spectrum by Use of the HHT; Theoretical Studies of Routes to Synthesis of Tetrahedral N4; Estimation Filter for Alignment of the Spitzer Space Telescope; Antenna for Measuring Electric Fields Within the Inner Heliosphere; Improved High-Voltage Gas Isolator for Ion Thruster; and Hybrid Mobile Communication Networks for Planetary Exploration.

  19. A simple and rapid method for high-resolution visualization of single-ion tracks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omichi, Masaaki; Choi, Wookjin

    2014-11-15

    Prompt determination of the spatial points of single-ion tracks plays a key role in high-energy-particle-induced cancer therapy and gene/plant mutation. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed using polyacrylic acid (PAA)-N,N'-methylenebisacrylamide (MBAAm) blend films. One of the steps of the proposed method is exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at the nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and that the amount of radicals generated along the ion tracks can be estimated by measuring the height of the scars, even for highly dense ion tracks. The method is suitable for visualizing the penumbra region of a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion has hit a cell nucleus with a size ranging between 5 and 20 μm.

  20. Trajectory prediction for ballistic missiles based on boost-phase LOS measurements

    NASA Astrophysics Data System (ADS)

    Yeddanapudi, Murali; Bar-Shalom, Yaakov

    1997-10-01

    This paper addresses the problem of estimating the trajectory of a tactical ballistic missile using line of sight (LOS) measurements from one or more passive sensors (typically satellites). The major difficulties of this problem include: the estimation of the unknown time of launch, the incorporation of (inaccurate) target thrust profiles to model the target dynamics during the boost phase, and an overall ill-conditioning of the estimation problem due to poor observability of the target motion via the LOS measurements. We present a robust estimation procedure based on the Levenberg-Marquardt algorithm that provides both the target state estimate and the error covariance, taking into consideration the complications mentioned above. An important consideration in the defense against tactical ballistic missiles is the determination of the target position and error covariance at the acquisition range of a surveillance radar in the vicinity of the impact point. We present a systematic procedure to propagate the target state and covariance to a nominal time, when it is within the detection range of a surveillance radar, to obtain a cueing volume. Monte Carlo simulation studies on typical single- and two-sensor scenarios indicate that the proposed algorithms are accurate in terms of the estimates, and the estimator-calculated covariances are consistent with the errors.
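
    A minimal sketch of the estimation step: a Levenberg-Marquardt fit of a target state to stacked bearing (LOS) measurements from two passive sensors, with a Gauss-Newton covariance estimate. The constant-velocity motion model, geometry, and noise level are invented for illustration; the paper's boost-phase thrust profiles and launch-time estimation would replace the straight-line motion used here.

        import numpy as np
        from scipy.optimize import least_squares

        SENSORS = np.array([[0.0, 0.0], [200.0, 0.0]])   # two passive sensors (km)

        def residuals(state, t_obs, bearings):
            """Stacked bearing residuals for a constant-velocity target;
            state = (x0, y0, vx, vy)."""
            x0, y0, vx, vy = state
            px, py = x0 + vx * t_obs, y0 + vy * t_obs
            pred = np.concatenate([np.arctan2(py - sy, px - sx) for sx, sy in SENSORS])
            return pred - bearings

        rng = np.random.default_rng(3)
        t_obs = np.linspace(5.0, 60.0, 40)
        truth = np.array([100.0, 50.0, 2.0, 3.0])
        bearings = residuals(truth, t_obs, 0.0) + rng.normal(0.0, 1e-3, 80)

        fit = least_squares(residuals, x0=[80.0, 60.0, 1.0, 1.0],
                            args=(t_obs, bearings), method="lm")  # Levenberg-Marquardt
        dof = bearings.size - truth.size
        cov = np.linalg.inv(fit.jac.T @ fit.jac) * (fit.fun @ fit.fun) / dof
        print("estimate:", fit.x.round(2))
        print("1-sigma:", np.sqrt(np.diag(cov)).round(3))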

  1. Improved background rejection in neutrinoless double beta decay experiments using a magnetic field in a high pressure xenon TPC

    NASA Astrophysics Data System (ADS)

    Renner, J.; Cervera, A.; Hernando, J. A.; Imzaylov, A.; Monrabal, F.; Muñoz, J.; Nygren, D.; Gomez-Cadenas, J. J.

    2015-12-01

    We demonstrate that the application of an external magnetic field could lead to an improved background rejection in neutrinoless double-beta (0νββ) decay experiments using a high-pressure xenon (HPXe) TPC. HPXe chambers are capable of imaging electron tracks, a feature that enhances the separation between signal events (the two electrons emitted in the 0νββ decay of 136Xe) and background events, arising chiefly from single electrons of kinetic energy compatible with the end-point of the 0νββ decay (Qββ). Applying an external magnetic field of sufficiently high intensity (in the range of 0.5-1 Tesla for operating pressures in the range of 5-15 atmospheres) causes the electrons to produce helical tracks. Assuming the tracks can be properly reconstructed, the sign of the curvature can be determined at several points along these tracks, and such information can be used to separate signal (0νββ) events containing two electrons producing a track with two different directions of curvature from background (single-electron) events producing a track that should spiral in a single direction. Due to electron multiple scattering, this strategy is not perfectly efficient on an event-by-event basis, but a statistical estimator can be constructed which can be used to reject background events by one order of magnitude at a moderate cost (about 30%) in signal efficiency. Combining this estimator with the excellent energy resolution and topological signature identification characteristic of the HPXe TPC, it is possible to reach a background rate of less than one count per ton-year of exposure. Such a low background rate is an essential feature of the next generation of 0νββ experiments, aiming to fully explore the inverse hierarchy of neutrino masses.

  2. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
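
    A minimal sketch of the weighted (generalized) least-squares step for an affine registration with heteroscedastic, per-point isotropic noise (covariances that are scalar multiples of the identity, the special case treated in closed form in the paper). For simplicity the source control points are taken as noise-free, which sidesteps the full errors-in-variables treatment.

        import numpy as np

        def fit_affine_gls(src, dst, var):
            """Weighted least-squares affine fit dst ~ A @ src + b, with
            per-point isotropic noise variances var (shape (n,))."""
            n = src.shape[0]
            X = np.hstack([src, np.ones((n, 1))])     # design: [x, y, 1]
            w = 1.0 / var
            XtW = X.T * w                              # apply weights
            P = np.linalg.solve(XtW @ X, XtW @ dst)    # normal equations
            return P[:2].T, P[2]                       # A (2x2), b (2,)

        rng = np.random.default_rng(7)
        src = rng.uniform(0.0, 100.0, (40, 2))
        A_true = np.array([[1.01, 0.02], [-0.015, 0.99]])
        b_true = np.array([5.0, -3.0])
        var = rng.uniform(0.01, 0.5, 40)   # photon-count-dependent noise levels
        dst = src @ A_true.T + b_true + rng.standard_normal((40, 2)) * np.sqrt(var)[:, None]

        A_est, b_est = fit_affine_gls(src, dst, var)
        print(A_est.round(3), b_est.round(2))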

  3. 3D micro-mapping: Towards assessing the quality of crowdsourcing to support 3D point cloud analysis

    NASA Astrophysics Data System (ADS)

    Herfort, Benjamin; Höfle, Bernhard; Klonner, Carolin

    2018-03-01

    In this paper, we propose a method to crowdsource the task of complex three-dimensional information extraction from 3D point clouds. We design web-based 3D micro-tasks tailored to assess segmented LiDAR point clouds of urban trees and investigate the quality of the approach in an empirical user study. Our results for three different experiments with increasing complexity indicate that a single crowdsourcing task can be solved in a very short time of less than five seconds on average. Furthermore, the results of our empirical case study reveal that the accuracy, sensitivity, and precision of 3D crowdsourcing are high for most information extraction problems. For our first experiment (single-answer binary classification) we obtain an accuracy of 91%, a sensitivity of 95%, and a precision of 92%. For the more complex tasks of the second experiment (multiple-answer classification), the accuracy ranges from 65% to 99% depending on the label class. Regarding the third experiment, the determination of the crown base height of individual trees, our study highlights that crowdsourcing can be a tool to obtain values with even higher accuracy than an automated computer-based approach. Finally, we found that the accuracy of the crowdsourced results for all experiments is hardly influenced by the characteristics of the input point cloud data and of the users. Importantly, the results' accuracy can be estimated using agreement among volunteers as an intrinsic indicator, which makes a broad application of 3D micro-mapping very promising.

  4. Soft tissue topography and dimensions lateral to single implant-supported restorations. a cross-sectional study.

    PubMed

    Chang, Moontaek; Wennström, Jan L

    2013-05-01

    The aim was to evaluate potential relationships between the implant position relative to adjacent teeth and the dimensions and topography of the papillae lateral to implant-supported single-tooth restorations. A total of 32 subjects with a single implant-supported restoration in the esthetic zone of the maxilla were consecutively selected for the study. Soft and hard tissues at the proximal sites of the restoration were evaluated by clinical, photographic, diagnostic cast, and radiographic assessments. A questionnaire was used to assess the patients' satisfaction with the esthetic outcome of the restorations. Potential factors influencing the papilla level and the presence of a complete papilla fill were investigated with generalized estimating equations (GEE) analysis. The bone level at the adjacent tooth significantly influenced the papilla level (P < 0.001). The distance between the contact point and the bone level at the adjacent tooth was significantly shorter for "complete" papillae (4.3 mm) than for "deficient" papillae (5.7 mm) (P < 0.001). The GEE logistic model revealed that the chance of a complete papilla fill improved with increased facio-lingual thickness of the papilla (P = 0.004) and decreased distance between the contact point and the bone level at the tooth (P = 0.004). Self-reported satisfaction with the esthetic appearance of the implant-borne restoration was not significantly different between patients with "complete" and "deficient" papillae. The probability of a complete papilla fill was significantly affected by the facio-lingual dimension of the papilla base and the distance between the contact point between the crowns and the bone level at the tooth. © 2012 John Wiley & Sons A/S.

  5. Phasor based single-molecule localization microscopy in 3D (pSMLM-3D): An algorithm for MHz localization rates using standard CPUs

    NASA Astrophysics Data System (ADS)

    Martens, Koen J. A.; Bader, Arjen N.; Baas, Sander; Rieger, Bernd; Hohlbein, Johannes

    2018-03-01

    We present a fast and model-free 2D and 3D single-molecule localization algorithm that allows more than 3 × 10⁶ localizations per second to be calculated on a standard multi-core central processing unit with localization accuracies in line with the most accurate algorithms currently available. Our algorithm converts the region of interest around a point spread function to two phase vectors (phasors) by calculating the first Fourier coefficients in both the x- and y-direction. The angles of these phasors are used to localize the center of the single fluorescent emitter, and the ratio of the magnitudes of the two phasors is a measure for astigmatism, which can be used to obtain depth information (z-direction). Our approach can be used both as a stand-alone algorithm for maximizing localization speed and as a first estimator for more time-consuming iterative algorithms.
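
    The phasor computation itself is compact enough to sketch directly from the description above: take the two first Fourier coefficients of the ROI, convert their phase angles to x and y, and use the magnitude ratio as the astigmatism (z) proxy. This is an independent illustration, not the authors' code.

        import numpy as np

        def phasor_localize(roi):
            """Phasor-based localization of a single emitter in a square ROI.
            The angles of the first Fourier coefficients give (x, y); the
            magnitude ratio reflects PSF ellipticity (astigmatism -> z)."""
            n = roi.shape[0]
            f = np.fft.fft2(roi)
            fx, fy = f[0, 1], f[1, 0]   # first coefficients along x and y
            x = (-np.angle(fx) * n / (2 * np.pi)) % n
            y = (-np.angle(fy) * n / (2 * np.pi)) % n
            return x, y, np.abs(fx) / np.abs(fy)

        # Synthetic Gaussian spot centred at (4.3, 6.1) in a 9x9 pixel ROI
        n = 9
        yy, xx = np.mgrid[0:n, 0:n]
        roi = np.exp(-((xx - 4.3) ** 2 + (yy - 6.1) ** 2) / (2 * 1.2 ** 2))
        print(phasor_localize(roi))   # approx (4.3, 6.1, ~1.0)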

  6. Optical, mechanical and thermal behaviors of Nitrilotriacetic acid single crystal

    NASA Astrophysics Data System (ADS)

    Deepa, B.; Philominathan, P.

    2017-11-01

    An organic nonlinear optical single crystal of Nitrilotriacetic acid (NTAA) was grown for the first time by a simple slow-evaporation technique. Single-crystal X-ray diffraction (XRD) analysis reveals that the grown crystal belongs to the monoclinic system with the noncentrosymmetric space group Cc. Fourier transform infrared (FTIR) spectral study ascertains the presence of functional groups in NTAA. The molecular structure of the grown crystal was confirmed by Nuclear Magnetic Resonance (NMR) spectral analysis. Optical parameters such as transmittance, absorption coefficient and band gap were calculated from UV-Visible and fluorescence studies. Dielectric measurements were carried out at different frequencies and temperatures. The mechanical strength of the grown crystal was measured using the Vickers microhardness test. The high thermal stability and the melting point of the grown crystal were estimated using thermogravimetric (TGA) and differential thermal analysis (DTA). The nonlinear optical nature of the grown crystal was confirmed by the Kurtz-Perry technique, and NTAA was found to be a suitable candidate for optoelectronic applications.

  7. Validation and application of single breath cardiac output determinations in man

    NASA Technical Reports Server (NTRS)

    Loeppky, J. A.; Fletcher, E. R.; Myhre, L. G.; Luft, U. C.

    1986-01-01

    The results of a procedure for estimating cardiac output by a single-breath technique (Qsb), obtained in healthy males during supine rest and during exercise on a bicycle ergometer, were compared with the results on cardiac output obtained by the direct Fick method (QF). The single-breath maneuver consisted of a slow exhalation to near residual volume following an inspiration somewhat deeper than normal. The Qsb calculations incorporated an equation of the CO2 dissociation curve and a 'moving spline' sequential curve-fitting technique to calculate the instantaneous R (respiratory exchange ratio) from points on the original expirogram. The resulting linear regression equation indicated a 24-percent underestimation of QF by the Qsb technique. After applying a correction, the Qsb-QF relationship was improved. A subsequent study during upright rest and exercise to 80 percent of VO2(max) in 6 subjects indicated a close linear relationship between Qsb and VO2 for all 95 values obtained, with slope and intercept close to those in published studies in which invasive cardiac output measurements were used.

  8. Electromagnetic Vortex-Based Radar Imaging Using a Single Receiving Antenna: Theory and Experimental Results

    PubMed Central

    Yuan, Tiezhu; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang

    2017-01-01

    Radar imaging based on electromagnetic vortices can achieve azimuthal resolution without relative motion. The present paper investigates this imaging technique with a single receiving antenna through theoretical analysis and experimental results. In contrast with the use of multiple receiving antennas, the echoes from a single receiver cannot be used directly for image reconstruction by the Fourier method; the reason is revealed using the point spread function. An additional phase is therefore compensated for each mode before the imaging process, based on the array parameters and the elevation of the targets. A proof-of-concept imaging system based on a circular phased array was built, and imaging experiments on corner-reflector targets were performed in an anechoic chamber. The azimuthal image is reconstructed by Fourier transform and by spectral estimation methods, and the azimuth resolution of the two methods is analyzed and compared using experimental data. The experimental results verify the principle of azimuth resolution and the proposed phase compensation method. PMID:28335487

  9. Sensitivity of landscape resistance estimates based on point selection functions to scale and behavioral state: Pumas as a case study

    Treesearch

    Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce

    2014-01-01

    Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...

  10. Shape information from a critical point analysis of calculated electron density maps: application to DNA-drug systems

    NASA Astrophysics Data System (ADS)

    Leherte, L.; Allen, F. H.; Vercauteren, D. P.

    1995-04-01

    A computational method is described for mapping the volume within the DNA double helix accessible to a groove-binding antibiotic, netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to the local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions, while at the atomic level the ellipsoid method gives results in close agreement with those from the conventional spherical van der Waals approach.

  11. Shape information from a critical point analysis of calculated electron density maps: Application to DNA-drug systems

    NASA Astrophysics Data System (ADS)

    Leherte, Laurence; Allen, Frank H.

    1994-06-01

    A computational method is described for mapping the volume within the DNA double helix accessible to the groove-binding antibiotic netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions. At the atomic level, the ellipsoid method gives results which are in close agreement with those from the conventional spherical van der Waals approach.

  12. Combining statistical inference and decisions in ecology.

    PubMed

    Williams, Perry J; Hooten, Mevin B

    2016-09-01

    Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods, including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem. © 2016 by the Ecological Society of America.
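
    The central role of the loss function is easy to demonstrate: given posterior draws for a parameter, different losses yield different optimal Bayes point estimates. A small sketch with a synthetic posterior (not the paper's fire-rotation example):

        import numpy as np

        rng = np.random.default_rng(42)
        # Stand-in posterior draws for some ecological parameter
        posterior = rng.gamma(shape=3.0, scale=0.1, size=100_000)

        est_squared = posterior.mean()        # minimizes squared-error loss
        est_absolute = np.median(posterior)   # minimizes absolute-error loss

        # Asymmetric linear loss: overestimation 3x as costly as underestimation
        # -> optimal estimate is the 1 / (1 + 3) = 0.25 posterior quantile
        est_asymmetric = np.quantile(posterior, 0.25)

        print(est_squared, est_absolute, est_asymmetric)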

  13. Localization and tracking of moving objects in two-dimensional space by echolocation.

    PubMed

    Matsuo, Ikuo

    2013-02-01

    Bats use frequency-modulated echolocation to identify and capture moving objects in real three-dimensional space. Experimental evidence indicates that bats are capable of locating static objects with a range accuracy of less than 1 μs. A previously introduced model estimates ranges of multiple, static objects using linear frequency modulation (LFM) sound and Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates. The delay time for a single object was estimated with an accuracy of about 1.3 μs by measuring the echo at a low signal-to-noise ratio (SNR). The range accuracy was dependent not only on the SNR but also the Doppler shift, which was dependent on the movements. However, it was unclear whether this model could estimate the moving object range at each timepoint. In this study, echoes were measured from the rotating pole at two receiving points by intermittently emitting LFM sounds. The model was shown to localize moving objects in two-dimensional space by accurately estimating the object's range at each timepoint.

  14. Eigenspace perturbations for uncertainty estimation of single-point turbulence closures

    NASA Astrophysics Data System (ADS)

    Iaccarino, Gianluca; Mishra, Aashwin Ananda; Ghili, Saman

    2017-02-01

    Reynolds-averaged Navier-Stokes (RANS) models represent the workhorse for predicting turbulent flows in complex industrial applications. However, RANS closures introduce a significant degree of epistemic uncertainty in predictions due to the potential lack of validity of the assumptions utilized in model formulation. Estimating this uncertainty is a fundamental requirement for building confidence in such predictions. We outline a methodology to estimate this structural uncertainty, incorporating perturbations to both the eigenvalues and the eigenvectors of the modeled Reynolds stress tensor. The mathematical foundations of this framework are derived and explicated. The framework is then applied to a set of separated turbulent flows, compared against numerical and experimental data, and contrasted with the predictions of the eigenvalue-only perturbation methodology. For separated flows, this framework is shown to yield a significant improvement over the established eigenvalue perturbation methodology in explaining the discrepancy against experimental observations and high-fidelity simulations. Furthermore, uncertainty bounds of potential engineering utility can be estimated by performing five specific RANS simulations, limiting the computational expenditure of such an exercise.
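
    A minimal sketch of the eigenvalue-perturbation ingredient (the eigenvector perturbation that is this paper's main extension is omitted): decompose the modeled Reynolds stress into its anisotropy eigenframe, nudge the eigenvalues toward a limiting state of turbulence anisotropy, and reassemble. All values are illustrative.

        import numpy as np

        def perturb_reynolds_stress(R, k, delta, corner):
            """Perturb the eigenvalues of a modeled Reynolds stress R (3x3)
            toward limiting-state eigenvalues `corner` of the anisotropy
            tensor; k is turbulent kinetic energy, delta in [0, 1]."""
            a = R / (2.0 * k) - np.eye(3) / 3.0      # anisotropy tensor
            lam, V = np.linalg.eigh(a)               # ascending eigenvalues
            lam_pert = (1.0 - delta) * lam + delta * np.asarray(corner)
            a_pert = V @ np.diag(lam_pert) @ V.T
            return 2.0 * k * (a_pert + np.eye(3) / 3.0)

        # Limiting states (trace-free anisotropy eigenvalues, ascending):
        ONE_COMP = np.array([-1/3, -1/3, 2/3])   # one-component turbulence
        THREE_COMP = np.zeros(3)                 # isotropic (three-component)

        R = np.array([[0.8, 0.1, 0.0], [0.1, 0.5, 0.0], [0.0, 0.0, 0.3]])
        k = 0.5 * np.trace(R)
        print(perturb_reynolds_stress(R, k, 0.5, ONE_COMP))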

  15. Use of capillary blood glucose for screening for gestational diabetes mellitus in resource-constrained settings.

    PubMed

    Bhavadharini, Balaji; Mahalakshmi, Manni Mohanraj; Maheswari, Kumar; Kalaiyarasi, Gunasekaran; Anjana, Ranjit Mohan; Deepa, Mohan; Ranjani, Harish; Priya, Miranda; Uma, Ram; Usha, Sriram; Pastakia, Sonak D; Malanda, Belma; Belton, Anne; Unnikrishnan, Ranjit; Kayal, Arivudainambi; Mohan, Viswanathan

    2016-02-01

    The aim of the study was to evaluate the usefulness of capillary blood glucose (CBG) for the diagnosis of gestational diabetes mellitus (GDM) in resource-constrained settings where venous plasma glucose (VPG) estimation may be impossible. Consecutive pregnant women (n = 1031) attending antenatal clinics in southern India underwent a 75-g oral glucose tolerance test (OGTT). Fasting, 1-h and 2-h VPG (AU2700, Beckman, Fullerton, CA) and CBG (One Touch Ultra-II, LifeScan) were measured simultaneously. Sensitivity and specificity were estimated for different CBG cut points using the International Association of Diabetes in Pregnancy Study Groups (IADPSG) criteria for the diagnosis of GDM as the gold standard. Bland-Altman plots were drawn to assess the agreement between CBG and VPG. Correlation and regression equations were also derived for the CBG values. Pearson's correlation between VPG and CBG was r = 0.433 for fasting [intraclass correlation coefficient (ICC) = 0.596, p < 0.001], r = 0.653 for 1 h (ICC = 0.776, p < 0.001), and r = 0.784 for 2 h (ICC = 0.834, p < 0.001). Comparing a single 2-h CBG cut point of 140 mg/dl (7.8 mmol/l) with the IADPSG criteria, the sensitivity and specificity were 62.3% and 80.7%, respectively. If CBG cut points of 120 mg/dl (6.6 mmol/l) or 110 mg/dl (6.1 mmol/l) were used, the sensitivity improved to 78.3% and 92.5%, respectively. In settings where VPG estimation is not possible, CBG can be used as an initial screening test for GDM, using lower 2-h CBG cut points to maximize sensitivity. Those who screen positive can be referred to higher centers for definitive testing using VPG.

  16. Development and Validation of Limited-Sampling Strategies for Predicting Amoxicillin Pharmacokinetic and Pharmacodynamic Parameters

    PubMed Central

    Suarez-Kurtz, Guilherme; Ribeiro, Frederico Mota; Vicente, Flávio L.; Struchiner, Claudio J.

    2001-01-01

    Amoxicillin plasma concentrations (n = 1,152) obtained from 48 healthy subjects in two bioequivalence studies were used to develop limited-sampling strategy (LSS) models for estimating the area under the concentration-time curve (AUC), the maximum concentration of drug in plasma (Cmax), and the time interval of concentration above MIC susceptibility breakpoints in plasma (T>MIC). Each subject received 500-mg amoxicillin, as reference and test capsules or suspensions, and plasma concentrations were measured by a validated microbiological assay. Linear regression analysis and a “jack-knife” procedure revealed that three-point LSS models accurately estimated (R2, 0.92; precision, <5.8%) the AUC from 0 h to infinity (AUC0-∞) of amoxicillin for the four formulations tested. Validation tests indicated that a three-point LSS model (1, 2, and 5 h) developed for the reference capsule formulation predicts the following accurately (R2, 0.94 to 0.99): (i) the individual AUC0-∞ for the test capsule formulation in the same subjects, (ii) the individual AUC0-∞ for both reference and test suspensions in 24 other subjects, and (iii) the average AUC0-∞ following single oral doses (250 to 1,000 mg) of various amoxicillin formulations in 11 previously published studies. A linear regression equation was derived, using the same sampling time points of the LSS model for the AUC0-∞, but using different coefficients and intercept, for estimating Cmax. Bioequivalence assessments based on LSS-derived AUC0-∞'s and Cmax's provided results similar to those obtained using the original values for these parameters. Finally, two-point LSS models (R2 = 0.86 to 0.95) were developed for T>MICs of 0.25 or 2.0 μg/ml, which are representative of microorganisms susceptible and resistant to amoxicillin. PMID:11600352
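
    The LSS idea can be sketched with a toy one-compartment model: regress a reference trapezoidal AUC on the concentrations at the three LSS sampling times (1, 2 and 5 h) plus an intercept. All pharmacokinetic parameter values below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(11)
        times = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0])  # h
        n = 48
        ka = 1.6                               # absorption rate (1/h)
        ke = rng.normal(0.55, 0.08, (n, 1))    # elimination rate (1/h)
        scale = rng.normal(20.0, 3.0, (n, 1))  # dose/volume term
        conc = scale * (np.exp(-ke * times) - np.exp(-ka * times))

        # Reference AUC from all nine samples (trapezoidal rule)
        auc = np.sum((conc[:, 1:] + conc[:, :-1]) / 2 * np.diff(times), axis=1)

        # Three-point LSS model: AUC ~ a1*C(1h) + a2*C(2h) + a3*C(5h) + a0
        idx = [1, 3, 6]                        # columns for 1, 2 and 5 h
        X = np.hstack([conc[:, idx], np.ones((n, 1))])
        coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
        pred = X @ coef
        r2 = 1 - np.sum((auc - pred) ** 2) / np.sum((auc - auc.mean()) ** 2)
        print(f"R^2 = {r2:.3f}")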

  17. Purification and characterization of ornithine transcarbamylase from pea (Pisum sativum L.)

    NASA Technical Reports Server (NTRS)

    Slocum, R. D.; Richardson, D. P.

    1991-01-01

    Pea (Pisum sativum) ornithine transcarbamylase (OTC) was purified to homogeneity from leaf homogenates in a single-step procedure, using delta-N-(phosphonacetyl)-L-ornithine-Sepharose 6B affinity chromatography. The 1581-fold purified OTC enzyme exhibited a specific activity of 139 micromoles citrulline per minute per milligram of protein at 37 degrees C, pH 8.5. Pea OTC represents approximately 0.05% of the total soluble protein in the leaf. The molecular weight of the native enzyme was approximately 108,200, as estimated by Sephacryl S-200 gel filtration chromatography. The purified protein ran as a single molecular weight band of 36,500 in sodium dodecyl sulfate-polyacrylamide gel electrophoresis. These results suggest that the pea OTC is a trimer of identical subunits. The overall amino acid composition of pea OTC is similar to that found in other eukaryotic and prokaryotic OTCs, but the number of arginine residues is approximately twofold higher. The increased number of arginine residues probably accounts for the observed isoelectric point of 7.6 for the pea enzyme, which is considerably more basic than isoelectric point values that have been reported for other OTCs.

  18. Quantitative assessment of dynamic PET imaging data in cancer imaging.

    PubMed

    Muzi, Mark; O'Sullivan, Finbarr; Mankoff, David A; Doot, Robert K; Pierce, Larry A; Kurland, Brenda F; Linden, Hannah M; Kinahan, Paul E

    2012-11-01

    Clinical imaging in positron emission tomography (PET) is often performed using single-time-point estimates of tracer uptake, or static imaging, which provides a spatial map of regional tracer concentration. However, dynamic tracer imaging can provide considerably more information about in vivo biology by delineating both the temporal and the spatial pattern of tracer uptake. In addition, several potential sources of error that occur in static imaging can be mitigated. This review focuses on the application of dynamic PET imaging to measuring regional cancer biologic features, and especially on using dynamic PET imaging for quantitative therapeutic response monitoring in cancer clinical trials. Dynamic PET imaging output parameters, particularly transport (flow) and overall metabolic rate, have provided imaging end points for clinical trials at single-center institutions for years. However, dynamic imaging poses many challenges for multicenter clinical trial implementation, from cross-center calibration to the inadequacy of a common informatics infrastructure. The underlying principles and methodology of dynamic PET imaging are first reviewed, followed by an examination of current approaches to dynamic PET image analysis, with a specific case example of dynamic fluorothymidine imaging to illustrate the approach. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach to fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible, however, owing to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only performs well in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we studied the quasi-likelihood-type estimating function that is optimal within a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. By using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general settings than the original quasi-likelihood method.

  20. Human variability in mercury toxicokinetics and steady state biomarker ratios.

    PubMed

    Bartell, S M; Ponce, R A; Sanga, R N; Faustman, E M

    2000-10-01

    Regulatory guidelines regarding methylmercury exposure depend on dose-response models relating observed mercury concentrations in maternal blood, cord blood, and maternal hair to developmental neurobehavioral endpoints. Generalized estimates of the maternal blood-to-hair, blood-to-intake, or hair-to-intake ratios are necessary for linking exposure to biomarker-based dose-response models. Most assessments have used point estimates for these ratios; however, significant interindividual and interstudy variability has been reported. For example, a maternal ratio of 250 ppm in hair per mg/L in blood is commonly used in models, but a 1990 WHO review reports mean ratios ranging from 140 to 370 ppm per mg/L. To account for interindividual and interstudy variation in applying these ratios to risk and safety assessment, some researchers have proposed representing the ratios with probability distributions and conducting probabilistic assessments. Such assessments would allow regulators to consider the range and likelihood of mercury exposures in a population, rather than limiting the evaluation to an estimate of the average exposure or a single conservative exposure estimate. However, no consensus exists on the most appropriate distributions for representing these parameters. We discuss published reviews of blood-to-hair and blood-to-intake steady-state ratios for mercury and suggest statistical approaches for combining existing datasets to form generalized probability distributions for mercury distribution ratios. Although generalized distributions may not be applicable to all populations, they allow a more informative assessment than point estimates where individual biokinetic information is unavailable. Whereas the development and use of these distributions will improve existing exposure and risk models, additional efforts in data generation and model development are required.
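
    The proposed probabilistic treatment can be sketched with a Monte Carlo draw from a generalized ratio distribution. The lognormal shape and parameters below are illustrative placeholders centered on the commonly used 250 ppm per mg/L, not values fitted to the reviewed datasets.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Illustrative hair-to-blood ratio distribution (ppm per mg/L),
        # wide enough to span the reported inter-study means of 140-370
        hair_to_blood = rng.lognormal(mean=np.log(250.0), sigma=0.25, size=n)

        hair_hg = 1.2                          # observed hair level, ppm
        blood_hg = hair_hg / hair_to_blood     # implied blood level, mg/L

        print("median blood Hg: %.4f mg/L" % np.median(blood_hg))
        print("95%% interval: %.4f-%.4f mg/L"
              % tuple(np.percentile(blood_hg, [2.5, 97.5])))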

  1. What is a meaningful change in physical performance? Findings from a clinical trial in older adults (the LIFE-P study).

    PubMed

    Kwon, S; Perera, S; Pahor, M; Katula, J A; King, A C; Groessl, E J; Studenski, S A

    2009-06-01

    Performance measures provide important information, but the meaning of change in these measures is not well known. The purpose of this research was to 1) examine the effect of treatment assignment on the relationship between self-report and performance, 2) estimate the magnitude of meaningful change in 400-meter walk time (400MWT), 4-meter gait speed (4MGS), and Short Physical Performance Battery (SPPB), and 3) evaluate the effect of the direction of change on estimates of magnitude. This is a secondary analysis of data from the LIFE-P study, a single-blinded randomized clinical trial conducted at four university-based clinical research sites. Participants were sedentary adults aged 70-89 whose SPPB scores were less than 10 and who were able to complete a 400-meter walk at baseline (n = 424); the intervention was a structured exercise program versus health education. Using change over one year, we applied distribution-based and anchor-based methods for self-reported mobility to estimate minimally important and substantial change in the 400MWT, 4MGS and SPPB. Minimally important change estimates were 400MWT: 20-30 seconds, 4MGS: 0.03-0.05 m/s, and SPPB: 0.3-0.8 points. Substantial changes were 400MWT: 50-60 seconds, 4MGS: 0.08 m/s, and SPPB: 0.4-1.5 points. Magnitudes of change for improvement and decline were not significantly different. The magnitude of clinically important change in physical performance measures is reasonably consistent across several analytic techniques and appears to be achievable in clinical trials of exercise. Due to limited power, the effect of the direction of change on estimates of magnitude remains uncertain.

  2. Simultaneous, accurate measurement of the 3D position and orientation of single molecules

    PubMed Central

    Backlund, Mikael P.; Lew, Matthew D.; Backer, Adam S.; Sahl, Steffen J.; Grover, Ginni; Agrawal, Anurag; Piestun, Rafael; Moerner, W. E.

    2012-01-01

    Recently, single molecule-based superresolution fluorescence microscopy has surpassed the diffraction limit to improve resolution to the order of 20 nm or better. These methods typically use image fitting that assumes an isotropic emission pattern from the single emitters as well as control of the emitter concentration. However, anisotropic single-molecule emission patterns arise from the transition dipole when it is rotationally immobile, depending highly on the molecule's 3D orientation and z position. Failure to account for this fact can lead to significant lateral (x, y) mislocalizations (up to ∼50–200 nm). This systematic error can cause distortions in the reconstructed images, which can translate into degraded resolution. Using parameters uniquely inherent in the double-lobed nature of the Double-Helix Point Spread Function, we account for such mislocalizations and simultaneously measure 3D molecular orientation and 3D position. Mislocalizations during an axial scan of a single molecule manifest themselves as an apparent lateral shift in its position, which causes the standard deviation (SD) of its lateral position to appear larger than the SD expected from photon shot noise. By correcting each localization based on an estimated orientation, we are able to improve SDs in lateral localization from ∼2× worse than photon-limited precision (48 vs. 25 nm) to within 5 nm of photon-limited precision. Furthermore, by averaging many estimations of orientation over different depths, we are able to improve from a lateral SD of 116 nm (∼4× worse than the photon-limited precision of 28 nm) to 34 nm (within 6 nm of the photon limit). PMID:23129640

  3. Investigating the Small-Scale Spatial Variabilty of Precipitable Water Vapor by Adding Single-Frequency Receivers into an Existing Dual-Frequency Receiver Network

    NASA Astrophysics Data System (ADS)

    Krietemeyer, Andreas; ten Veldhuis, Marie-claire; van de Giesen, Nick

    2017-04-01

    Exploiting GNSS signal delays is one way to obtain Precipitable Water Vapor (PWV) estimates in the atmosphere. The technique has been well known since the early 1990s and is by now an established method in the meteorological community. The data are crucial for weather forecasting, and their assimilation into numerical weather forecasting models is a topic of ongoing research. However, the spatial resolution of ground-based GNSS receivers is usually low, on the order of tens of kilometres. Since severe weather events such as convective storms can be limited in spatial extent, existing GNSS networks are often not sufficient to retrieve small-scale PWV fluctuations and need to be densified. For economic reasons, the use of low-cost single-frequency receivers is a promising solution. In this study, we deploy a network of single-frequency receivers to densify an existing dual-frequency network in order to investigate spatial and temporal PWV variations. We demonstrate a test network consisting of four single-frequency receivers in the Rotterdam area (Netherlands). To eliminate the delay caused by the ionosphere, the Satellite-specific Epoch-differenced Ionospheric Delay model (SEID) is applied, using a surrounding dual-frequency network distributed over a radius of approximately 25 km. With the synthesized L2 frequency, the tropospheric delays are estimated using the Precise Point Positioning (PPP) strategy and International GNSS Service (IGS) final orbits. The PWV time series are validated by comparing a collocated single-frequency and dual-frequency receiver. The time series themselves form the basis for potential further studies such as data assimilation into numerical weather models and GNSS tomography, to study the impact of the increased spatial resolution on local heavy-rain forecasting.

  4. PACIC Instrument: disentangling dimensions using published validation models.

    PubMed

    Iglesias, K; Burnand, B; Peytremann-Bridevaux, I

    2014-06-01

    To better understand the structure of the Patient Assessment of Chronic Illness Care (PACIC) instrument; more specifically, to test all published validation models using a single data set and appropriate statistical tools. Validation study using data from a cross-sectional survey. A population-based sample of non-institutionalized adults with diabetes residing in Switzerland (canton of Vaud). French version of the 20-item PACIC instrument (5-point response scale). We conducted validation analyses using confirmatory factor analysis (CFA). The original five-dimension model and other published models were tested with three types of CFA, based on: (i) a Pearson estimator of the variance-covariance matrix, (ii) a polychoric correlation matrix and (iii) a likelihood estimation with a multinomial distribution for the manifest variables. All models were assessed using loadings and goodness-of-fit measures. The analytical sample included 406 patients. Mean age was 64.4 years and 59% were men. Medians of item responses varied between 1 and 4 (range 1-5), and the proportion of missing values ranged between 5.7 and 12.3%. Strong floor and ceiling effects were present. Even though loadings of the tested models were relatively high, the only model showing acceptable fit was the 11-item single-dimension model. PACIC was associated with the expected variables of the field. Our results showed that the model considering 11 items in a single dimension exhibited the best fit for our data. A single score, complementing the consideration of single-item results, might be used instead of the five dimensions usually described. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  5. Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan

    2014-05-01

    Soil moisture (SM) exhibits high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil, and vegetation characteristics. This large variability does not allow reliable estimation of SM in the surface layer from ground point measurements, especially at large spatial scales. Remote sensing measurements allow the spatial distribution of SM in the surface layer to be estimated better than point measurements; however, they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, abundances and distributions of measuring points at the scales of an arable field, a wetland and a commune (areas of 0.01, 1 and 140 km2, respectively), under different SM conditions. Mean values of SM were only weakly sensitive to changes in the number and arrangement of sampling points, but parameters describing the dispersion responded more significantly. Spatial analysis showed autocorrelations of the SM whose lengths depended on the number and distribution of points within the adopted grids. Directional analysis revealed differentiated anisotropy of SM for different grids and numbers of measuring points. It can therefore be concluded that both the number of samples and their layout over the experimental area were reflected in the parameters characterizing the SM distribution. This suggests the need to use at least two sampling variants, differing in the number and positioning of the measurement points, where the number of points must be at least 20. This follows from the standard error and the range of spatial variability, which change little as the number of samples increases above this figure. The gravimetric method gives a more varied distribution of SM than TDR measurements. It should be noted that reducing the number of samples in the measuring grid flattens the SM distribution from both methods and increases the estimation error at the same time. A grid of sensors for permanent measurement points should include points that have similar distributions of SM in their vicinity. The results of the analysis, including the number of points, the maximum correlation ranges and the acceptable estimation error, should be taken into account when choosing the measurement points. Adoption or adjustment of the distribution of the measurement points should be verified by additional measuring campaigns during dry and wet periods. The presented approach seems appropriate for the creation of regional-scale test (super) sites to validate products of satellites equipped with SAR (Synthetic Aperture Radar) operating in C-band, with spatial resolution suited to the single-field scale, such as ERS-1, ERS-2, Radarsat and Sentinel-1, which is to be launched in the next few months. The work was partially funded by the Government of Poland through an ESA Contract under the PECS ELBARA_PD project No. 4000107897/13/NL/KML.
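
    A minimal sketch of the spatial-autocorrelation analysis mentioned above — an isotropic empirical semivariogram computed from point samples — with invented coordinates, soil moisture values and lag bins standing in for the field data:

    ```python
    import numpy as np

    def empirical_semivariogram(coords, values, bin_edges):
        """coords: (n, 2) positions in m; values: (n,) soil moisture."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = 0.5 * (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)   # unique pairs only
        d, sq = d[iu], sq[iu]
        return np.array([sq[(d >= lo) & (d < hi)].mean()
                         for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])

    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, size=(40, 2))      # 40 points in a 100 m field
    sm = 0.25 + 0.02 * rng.standard_normal(40)   # volumetric soil moisture
    print(empirical_semivariogram(pts, sm, np.array([0, 10, 20, 40, 80])))
    ```

    The lag at which the semivariance levels off is the correlation range; comparing ranges across sampling variants of different density is the kind of check the abstract recommends.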

  6. Estimation of single-year-of-age counts of live births, fetal losses, abortions, and pregnant women for counties of Texas.

    PubMed

    Singh, Bismark; Meyers, Lauren Ancel

    2017-05-08

    We provide a methodology for estimating counts of single-year-of-age live births, fetal losses, abortions, and pregnant women from aggregated age-group counts. As a case study, we estimate counts for the 254 counties of Texas for the year 2010. We use interpolation to estimate counts of live births, fetal losses, and abortions by women of each single year of age for all Texas counties. We then use these counts to estimate the numbers of pregnant women for each single year of age, which were previously available only in aggregate. To support public health policy and planning, we provide single-year-of-age estimates of live births, fetal losses, abortions, and pregnant women for all Texas counties in the year 2010, as well as the estimation method source code.
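
    One plausible reading of the interpolation step — monotone interpolation of cumulative age-group counts, differenced back to single years — is sketched below with SciPy's PCHIP interpolant. The age groups and counts are invented, and this is not necessarily the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    group_edges = np.array([15, 20, 25, 30, 35, 40, 45])    # group boundaries
    group_counts = np.array([120, 480, 610, 450, 210, 60])  # e.g., live births

    cum = np.concatenate([[0], np.cumsum(group_counts)])
    f = PchipInterpolator(group_edges, cum)          # monotone cumulative curve

    single_counts = np.diff(f(np.arange(15, 46)))    # counts per single year
    assert np.isclose(single_counts.sum(), group_counts.sum())  # total preserved
    print(dict(zip(np.arange(15, 45), np.round(single_counts, 1))))
    ```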

  7. 40 CFR 29.9 - How does the Administrator receive and respond to comments?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... State office or official is designated to act as a single point of contact between a State process and... program selected under § 29.6. (b) The single point of contact is not obligated to transmit comments from.... However, if a State process recommendation is transmitted by a single point of contact, all comments from...

  8. Grid mapping: a novel method of signal quality evaluation on a single lead electrocardiogram.

    PubMed

    Li, Yanjun; Tang, Xiaoying

    2017-12-01

    Diagnosis of long-term electrocardiogram (ECG) recordings calls for automatic and accurate methods of ECG signal quality estimation, not only to lighten the burden on doctors but also to avoid misdiagnoses. In this paper, a novel waveform-based method of phase-space reconstruction for signal quality estimation on a single-lead ECG was proposed by projecting the amplitude of the ECG and its first-order difference into grid cells. The waveform of a single-lead ECG was divided into non-overlapping episodes (T_s = 10, 20, 30 s), and the number of grid cells in both the width and the height of each map is in the range [20, 100] (N_X = N_Y = 20, 30, 40, ..., 90, 100). The blank pane ratio (BPR) and the entropy were calculated from the distribution of the ECG sampling points projected into the grid cells. The signal quality indices (SQIs) bSQI and eSQI were calculated from the BPR and the entropy, respectively. The MIT-BIH Noise Stress Test Database was used to test the performance of bSQI and eSQI for ECG signal quality estimation. The signal-to-noise ratio (SNR) during the noisy segments of the ECG records in the database is 24, 18, 12, 6, 0 and -6 dB, respectively. For the quantitative SQI analysis, the records were divided into three groups: a good quality group (24, 18 dB), a moderate group (12, 6 dB) and a bad quality group (0, -6 dB). The classification among the good, moderate and bad quality groups was performed by a linear support-vector machine with the combination of the BPR, the entropy, the bSQI and the eSQI. The classification accuracy was 82.4% and Cohen's kappa coefficient was 0.74 with N_X = 40 and T_s = 20 s. In conclusion, the novel grid mapping offers an intuitive and simple approach to signal quality estimation on a single-lead ECG.
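
    A minimal sketch of the grid-mapping features: project (amplitude, first difference) pairs of an episode into an N_X x N_Y grid, then compute the blank pane ratio and the occupancy entropy. The toy signal and the omission of the bSQI/eSQI thresholds are simplifications.

    ```python
    import numpy as np

    def grid_quality_features(ecg, n_x=40, n_y=40):
        x = ecg[:-1]              # amplitude
        y = np.diff(ecg)          # first-order difference
        h, _, _ = np.histogram2d(x, y, bins=[n_x, n_y])
        bpr = np.mean(h == 0)                   # fraction of empty cells
        p = h[h > 0] / h.sum()
        entropy = -np.sum(p * np.log2(p))       # occupancy entropy, bits
        return bpr, entropy

    fs = 250
    t = np.arange(0, 20, 1 / fs)                # a 20 s episode
    clean = np.sin(2 * np.pi * 1.2 * t)         # toy "ECG"
    noisy = clean + 0.8 * np.random.default_rng(1).standard_normal(t.size)
    print(grid_quality_features(clean), grid_quality_features(noisy))
    ```

    Noise scatters the phase-space trajectory over many more cells, lowering the BPR and raising the entropy, which is what the two indices pick up.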

  9. Background-Free 3D Nanometric Localization and Sub-nm Asymmetry Detection of Single Plasmonic Nanoparticles by Four-Wave Mixing Interferometry with Optical Vortices

    NASA Astrophysics Data System (ADS)

    Zoriniants, George; Masia, Francesco; Giannakopoulou, Naya; Langbein, Wolfgang; Borri, Paola

    2017-10-01

    Single nanoparticle tracking using optical microscopy is a powerful technique with many applications in biology, chemistry, and material sciences. Despite significant advances, localizing objects with nanometric position precision in a scattering environment remains challenging. Applied methods to achieve contrast are dominantly fluorescence based, with fundamental limits in the emitted photon fluxes arising from the excited-state lifetime as well as photobleaching. Here, we show a new four-wave-mixing interferometry technique, whereby the position of a single nonfluorescing gold nanoparticle of 25-nm radius is determined with 16 nm precision in plane and 3 nm axially from rapid single-point measurements at 1-ms acquisition time by exploiting optical vortices. The precision in plane is consistent with the photon shot-noise, while axially it is limited by the nano-positioning sample stage, with an estimated photon shot-noise limit of 0.5 nm. The detection is background-free even inside biological cells. The technique is also uniquely sensitive to particle asymmetries of only 0.5% ellipticity, corresponding to a single atomic layer of gold, as well as particle orientation. This method opens new ways of unraveling single-particle trafficking within complex 3D architectures.

  10. Temperature-dependent thermal conductivity and diffusivity of a Mg-doped insulating β-Ga2O3 single crystal along [100], [010] and [001]

    NASA Astrophysics Data System (ADS)

    Handwerg, M.; Mitdank, R.; Galazka, Z.; Fischer, S. F.

    2016-12-01

    The monoclinic crystal structure of β-Ga2O3 leads to significant anisotropy of the thermal properties. The 2ω-method is used to measure the thermal diffusivity D in the [010] and [001] directions, respectively, and to determine the thermal conductivity values λ along the [100], [010] and [001] directions from the same insulating Mg-doped β-Ga2O3 single crystal. We detect a temperature-independent anisotropy factor of both the thermal diffusivity and conductivity values of D_[010]/D_[001] = λ_[010]/λ_[001] = 1.4 ± 0.1. The temperature dependence is in accord with phonon-phonon Umklapp-scattering processes from 300 K down to 150 K. Below 150 K, point-defect scattering lowers the estimated phonon-phonon Umklapp-scattering values.

  11. Single Station System and Method of Locating Lightning Strikes

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Starr, Stanley O. (Inventor)

    2003-01-01

    An embodiment of the present invention uses a single detection system to approximate the location of lightning strikes. The system is triggered by a broadband RF detector and measures the time until the arrival of the leading edge of the thunder acoustic pulse. This time difference is used to determine a slant range R from the detector to the closest approach of the lightning. The azimuth and elevation are determined by an array of acoustic sensors. The leading edge of the thunder waveform is cross-correlated between the various acoustic sensors in the array to determine the differences in time of arrival, ΔT. The set of ΔTs is used to determine the direction of arrival, AZ and EL. The three estimated variables (R, AZ, EL) are used to locate a probable point of the lightning strike.
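
    A sketch of the last step, turning the three estimated variables (R, AZ, EL) into a local strike position; the 343 m/s sound speed and the east-north-up convention are assumptions:

    ```python
    import numpy as np

    def strike_location(dt_thunder_s, az_deg, el_deg, c_sound=343.0):
        r = c_sound * dt_thunder_s   # slant range, m (RF travel is ~instant)
        az, el = np.radians(az_deg), np.radians(el_deg)
        east = r * np.cos(el) * np.sin(az)
        north = r * np.cos(el) * np.cos(az)
        up = r * np.sin(el)
        return np.array([east, north, up])

    # Thunder arriving 9.3 s after the RF trigger from azimuth 120 deg,
    # elevation 15 deg -> a strike point roughly 3.2 km away.
    print(strike_location(9.3, 120.0, 15.0))
    ```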

  12. Multi-species genetic connectivity in a terrestrial habitat network.

    PubMed

    Marrotte, Robby R; Bowman, Jeff; Brown, Michael G C; Cordes, Chad; Morris, Kimberley Y; Prentice, Melanie B; Wilson, Paul J

    2017-01-01

    Habitat fragmentation reduces genetic connectivity for multiple species, yet conservation efforts tend to rely heavily on single-species connectivity estimates to inform land-use planning. Such conservation activities may benefit from multi-species connectivity estimates, which provide a simple and practical means to mitigate the effects of habitat fragmentation for a larger number of species. To test the validity of a multi-species connectivity model, we used neutral microsatellite genetic datasets of Canada lynx (Lynx canadensis), American marten (Martes americana), fisher (Pekania pennanti), and southern flying squirrel (Glaucomys volans) to evaluate multi-species genetic connectivity across Ontario, Canada. We used linear models to compare node-based estimates of genetic connectivity for each species to point-based estimates of landscape connectivity (current density) derived from circuit theory. To our knowledge, we are the first to evaluate current density as a measure of genetic connectivity. Our results depended on landscape context: habitat amount was more important than current density in explaining multi-species genetic connectivity in the northern part of our study area, where habitat was abundant and fragmentation was low. In the south, however, where fragmentation was prevalent, genetic connectivity was correlated with current density. Contrary to our expectations, however, locations with a high probability of movement as reflected by high current density were negatively associated with gene flow. Subsequent analyses of circuit theory outputs showed that high current density was also associated with high effective resistance, underscoring that the presence of pinch points is not necessarily indicative of gene flow. Overall, our study appears to provide support for the hypothesis that landscape pattern is important when habitat amount is low. We also conclude that while current density is proportional to the probability of movement per unit area, this does not imply increased gene flow, since high current density tends to result from neighbouring pixels with a high cost of movement (e.g., low habitat amount). In other words, pinch points with high current density appear to constrict gene flow.
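
    The effective-resistance quantity invoked above comes from the graph Laplacian; a small sketch on an invented four-node landscape graph (conductances are inverse movement costs):

    ```python
    import numpy as np

    def effective_resistance(weights, i, j):
        """weights: symmetric (n, n) conductance matrix, zero diagonal."""
        lap = np.diag(weights.sum(axis=1)) - weights   # graph Laplacian
        lp = np.linalg.pinv(lap)                       # Moore-Penrose inverse
        return lp[i, i] + lp[j, j] - 2 * lp[i, j]

    w = np.array([[0, 2, 1, 0],
                  [2, 0, 1, 0],
                  [1, 1, 0, 3],
                  [0, 0, 3, 0]], dtype=float)
    # A high effective resistance between nodes 0 and 3 flags a pinch point
    # even where the intermediate node carries high current density.
    print(effective_resistance(w, 0, 3))
    ```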

  13. Rapid assessment of forest canopy and light regime using smartphone hemispherical photography.

    PubMed

    Bianchi, Simone; Cahalan, Christine; Hale, Sophie; Gibbons, James Michael

    2017-12-01

    Hemispherical photography (HP), implemented with cameras equipped with "fisheye" lenses, is a widely used method for describing forest canopies and light regimes. A promising technological advance is the availability of low-cost fisheye lenses for smartphone cameras. However, smartphone camera sensors cannot record a full hemisphere. We investigate whether smartphone HP is a cheaper and faster but still adequate operational alternative to traditional cameras for describing forest canopies and light regimes. We collected hemispherical pictures with both smartphone and traditional cameras at 223 forest sample points, across different overstory species and canopy densities. The smartphone image acquisition followed a faster and simpler protocol than that for the traditional camera. We automatically thresholded all images. We processed the traditional camera images for Canopy Openness (CO) and Site Factor estimation. For smartphone images, we took two pictures with different orientations per point and used two processing protocols: (i) we estimated and averaged total canopy gap from the two single pictures, and (ii) merging the two pictures together to form images closer to full hemispheres, we estimated CO and Site Factors from them. We compared the same parameters obtained from the different cameras and fitted generalized linear mixed models (GLMMs) relating them. Total canopy gap estimated with the first processing protocol for smartphone pictures was on average significantly higher than CO estimated from traditional camera images, although with a consistent bias. Canopy Openness and Site Factors estimated from the merged smartphone pictures of the second processing protocol were on average significantly higher than those from traditional camera images, although with relatively small absolute differences and scatter. Smartphone HP is an acceptable alternative to HP using traditional cameras, providing similar results with a faster and cheaper methodology. Smartphone outputs can be used directly for ecological studies, or converted with specific models for better comparison to traditional cameras.
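
    A minimal sketch of the thresholding step behind a total canopy gap estimate; the fixed global threshold stands in for the automatic thresholding used in the study, and the random image is a placeholder for a real fisheye photograph:

    ```python
    import numpy as np

    def canopy_gap_fraction(gray_img, threshold=0.6):
        """gray_img: 2D array scaled to [0, 1]; sky pixels are bright."""
        h, w = gray_img.shape
        yy, xx = np.mgrid[:h, :w]
        r = min(h, w) / 2
        mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= r ** 2  # fisheye circle
        return (gray_img[mask] > threshold).mean()   # fraction of sky pixels

    img = np.random.default_rng(2).random((480, 480))
    print(canopy_gap_fraction(img))
    ```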

  14. Bayesian structural inference for hidden processes.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ε-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ε-machines, irrespective of estimated transition probabilities. Properties of ε-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.
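
    For a single uHMM state, the analytic estimates alluded to above reduce to Dirichlet-multinomial updates; a sketch contrasting the posterior mean with the single MAP point estimate, assuming a uniform Dirichlet prior and invented symbol counts:

    ```python
    import numpy as np

    counts = np.array([37.0, 12.0, 3.0])   # observed transitions on symbols a, b, c
    alpha = np.ones_like(counts)           # uniform Dirichlet prior (assumed)

    posterior_mean = (counts + alpha) / (counts + alpha).sum()
    map_estimate = (counts + alpha - 1) / (counts + alpha - 1).sum()

    # The posterior also carries uncertainty, which a point estimate hides:
    a0 = (counts + alpha).sum()
    posterior_sd = np.sqrt(posterior_mean * (1 - posterior_mean) / (a0 + 1))
    print(posterior_mean, map_estimate, posterior_sd)
    ```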

  15. Bayesian structural inference for hidden processes

    NASA Astrophysics Data System (ADS)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  16. Trackline and point detection probabilities for acoustic surveys of Cuvier's and Blainville's beaked whales.

    PubMed

    Barlow, Jay; Tyack, Peter L; Johnson, Mark P; Baird, Robin W; Schorr, Gregory S; Andrews, Russel D; Aguilar de Soto, Natacha

    2013-09-01

    Acoustic survey methods can be used to estimate density and abundance using sounds produced by cetaceans and detected using hydrophones if the probability of detection can be estimated. For passive acoustic surveys, probability of detection at zero horizontal distance from a sensor, commonly called g(0), depends on the temporal patterns of vocalizations. Methods to estimate g(0) are developed based on the assumption that a beaked whale will be detected if it is producing regular echolocation clicks directly under or above a hydrophone. Data from acoustic recording tags placed on two species of beaked whales (Cuvier's beaked whale-Ziphius cavirostris and Blainville's beaked whale-Mesoplodon densirostris) are used to directly estimate the percentage of time they produce echolocation clicks. A model of vocal behavior for these species as a function of their diving behavior is applied to other types of dive data (from time-depth recorders and time-depth-transmitting satellite tags) to indirectly determine g(0) in other locations for low ambient noise conditions. Estimates of g(0) for a single instant in time are 0.28 [standard deviation (s.d.) = 0.05] for Cuvier's beaked whale and 0.19 (s.d. = 0.01) for Blainville's beaked whale.

  17. Using frequency response functions to manage image degradation from equipment vibration in the Daniel K. Inouye Solar Telescope

    NASA Astrophysics Data System (ADS)

    McBride, William R.; McBride, Daniel R.

    2016-08-01

    The Daniel K. Inouye Solar Telescope (DKIST) will be the largest solar telescope in the world, providing a significant increase in the resolution of solar data available to the scientific community. Vibration mitigation is critical in long focal-length telescopes such as the Inouye Solar Telescope, especially when adaptive optics are employed to correct for atmospheric seeing. For this reason, a vibration error budget has been implemented. Initially, the frequency response functions (FRFs) for the various mounting points of ancillary equipment were estimated using finite element analysis (FEA) of the telescope structures. FEA is well documented and understood; the focus of this paper is on the methods involved in estimating a set of experimental (measured) transfer functions of the as-built telescope structure for the purpose of vibration management. Techniques to measure low-frequency single-input-single-output (SISO) FRFs between vibration source locations and image motion on the focal plane are described. The measurement equipment includes an instrumented inertial-mass shaker capable of operation down to 4 Hz along with seismic accelerometers. The measurement of vibration at frequencies below 10 Hz with a good signal-to-noise ratio (SNR) requires several noise reduction techniques, including high-performance windows, noise averaging, tracking filters, and spectral estimation. These signal-processing techniques are described in detail.
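
    SISO FRF measurements of this kind are commonly computed with Welch-type spectral estimates (the H1 estimator S_xy/S_xx, plus the coherence as an SNR check); a sketch on a toy second-order plant, which is an assumption standing in for telescope data:

    ```python
    import numpy as np
    from scipy import signal

    fs = 200.0
    t = np.arange(0, 120, 1 / fs)
    rng = np.random.default_rng(3)
    x = rng.standard_normal(t.size)                    # broadband shaker drive
    b, a = signal.butter(2, 12.0, btype="low", fs=fs)  # stand-in structure
    y = signal.lfilter(b, a, x) + 0.05 * rng.standard_normal(t.size)

    f, s_xy = signal.csd(x, y, fs=fs, nperseg=4096)    # cross spectrum
    _, s_xx = signal.welch(x, fs=fs, nperseg=4096)     # input auto spectrum
    h1 = s_xy / s_xx                                   # H1 FRF estimate
    _, coh = signal.coherence(x, y, fs=fs, nperseg=4096)
    print(np.abs(h1[:5]), coh[:5])
    ```

    Long records with heavy segment averaging are what buy low-frequency resolution and noise rejection, mirroring the windowing and averaging techniques listed in the abstract.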

  18. On the estimation of the current density in space plasmas: Multi- versus single-point techniques

    NASA Astrophysics Data System (ADS)

    Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco

    2017-06-01

    Thanks to multi-spacecraft missions, it has recently become possible to directly estimate the current density in space plasmas by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique developed, commonly called the "curlometer", permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separations. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high resolution measurements with inter-spacecraft separations up to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in "typical" solar wind conditions, while the latter to sub-proton scales. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton-scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft with respect to multi-spacecraft missions in the current density evaluation.
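
    A least-squares variant of the curlometer can be sketched in a few lines: estimate the field gradient tensor from the four-point differences under the linear-variation assumption and take J = curl(B)/mu0. Positions and fields are invented, and this is a simplified stand-in for the reciprocal-vector formulation used in practice.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi

    def curlometer(r, b):
        """r, b: (4, 3) arrays of spacecraft positions (m) and fields (T)."""
        dr = r[1:] - r[0]                 # (3, 3) baselines
        db = b[1:] - b[0]                 # field differences
        # Linear model db_k = G @ dr_k -> solve dr @ G.T = db for G.
        g = np.linalg.lstsq(dr, db, rcond=None)[0].T
        curl = np.array([g[2, 1] - g[1, 2],
                         g[0, 2] - g[2, 0],
                         g[1, 0] - g[0, 1]])
        return curl / MU0                 # current density, A/m^2

    r = np.array([[0, 0, 0], [100e3, 0, 0], [0, 100e3, 0], [0, 0, 100e3]])
    b = 1e-9 * np.array([[5, 0, 0], [5, 2, 0], [5, 0, 0], [5, 0, 0]])
    print(curlometer(r, b))   # ~16 nA/m^2 along z for this synthetic shear
    ```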

  19. Responsiveness and MCID Estimates for CAT, CCQ, and HADS in Patients With COPD Undergoing Pulmonary Rehabilitation: A Prospective Analysis.

    PubMed

    Smid, Dionne E; Franssen, Frits M E; Houben-Wilke, Sarah; Vanfleteren, Lowie E G W; Janssen, Daisy J A; Wouters, Emiel F M; Spruit, Martijn A

    2017-01-01

    Pulmonary rehabilitation enhances health status and mood status in patients with chronic obstructive pulmonary disease (COPD). The aim was to determine the responsiveness of the St. George's Respiratory Questionnaire (SGRQ), COPD Assessment Test (CAT), COPD Clinical Questionnaire (CCQ), and Hospital Anxiety and Depression Scale (HADS) to pulmonary rehabilitation in patients with COPD, and to estimate minimum clinically important differences (MCIDs) for CAT, CCQ, and HADS. A prospective analysis. MCIDs were estimated with anchor-based (anchor: SGRQ) and distribution-based methods in patients treated in pulmonary rehabilitation, and compared to known MCID estimates from a systematic literature search. A subsample of 419 individuals with COPD (55.4% male, mean age 64.3 ± 8.8 years) was included from the Chance study. Health status was measured with the SGRQ, CAT, and CCQ, before and after pulmonary rehabilitation. Mood status was assessed using the HADS. 419 patients with COPD (forced expiratory volume in the first second 37.3% ± 12.1% predicted) completed pulmonary rehabilitation. SGRQ (-9.1 ± 14.0 points), CAT (-3.0 ± 6.8 points), CCQ (-0.6 ± 0.9 points), HADS-Anxiety (-1.7 ± 3.7 points), and HADS-Depression (-2.1 ± 3.7 points) improved significantly. New MCIDs were estimated for CAT (range: -3.8 to -1.0 points), CCQ (range: -0.8 to -0.2 points), HADS-Anxiety (range: -2.0 to -1.1 points), and HADS-Depression (range: -1.8 to -1.4 points). The SGRQ, CAT, CCQ, and HADS are responsive to pulmonary rehabilitation in patients with COPD. We propose MCID estimates ranging between -3.0 and -2.0 points for CAT, -0.5 and -0.3 points for CCQ, -1.8 and -1.3 points for HADS-Anxiety, and -1.7 and -1.5 points for HADS-Depression. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  20. An estimation of the number of cells in the human body.

    PubMed

    Bianconi, Eva; Piovesan, Allison; Facchin, Federica; Beraudi, Alina; Casadei, Raffaella; Frabetti, Flavia; Vitale, Lorenza; Pelleri, Maria Chiara; Tassani, Simone; Piva, Francesco; Perez-Amodio, Soledad; Strippoli, Pierluigi; Canaider, Silvia

    2013-01-01

    All living organisms are made of individual and identifiable cells, whose number, together with their size and type, ultimately defines the structure and functions of an organism. While the total cell number of lower organisms is often known, it has not yet been defined in higher organisms. In particular, the reported total cell number of a human being ranges between 10^12 and 10^16 and it is widely mentioned without a proper reference. To study and discuss the theoretical issue of the total number of cells that compose the standard human adult organism. A systematic calculation of the total cell number of the whole human body and of the single organs was carried out using bibliographical and/or mathematical approaches. A current estimation of human total cell number calculated for a variety of organs and cell types is presented. These partial data correspond to a total number of 3.72 × 10^13. Knowing the total cell number of the human body as well as of individual organs is important from a cultural, biological, medical and comparative modelling point of view. The presented cell count could be a starting point for a common effort to complete the total calculation.

  1. Characterizing subcritical assemblies with time of flight fixed by energy estimation distributions

    NASA Astrophysics Data System (ADS)

    Monterial, Mateusz; Marleau, Peter; Pozzi, Sara

    2018-04-01

    We present the Time of Flight Fixed by Energy Estimation (TOFFEE) as a measure of the fission chain dynamics in subcritical assemblies. TOFFEE is the time between correlated gamma rays and neutrons, subtracted by the estimated travel time of the incident neutron from its proton recoil. The measured subcritical assembly was the BeRP ball, a 4.482 kg sphere of α-phase weapons-grade plutonium metal, which came in five configurations: bare and with close-fitting shell reflectors of 0.5, 1 and 1.5 inches of iron and 1 inch of nickel. We extend the measurement with MCNPX-PoliMi simulations of shells ranging up to 6 inches in thickness, and two new reflector materials: aluminum and tungsten. We also simulated the BeRP ball with different masses ranging from 1 to 8 kg. Two-region and single-region point kinetics models were used to model the behavior of the positive side of the TOFFEE distribution from 0 to 100 ns. The single-region model of the bare cases gave positive linear correlations between estimated and expected neutron decay constants and leakage multiplications. The two-region model provided a way to estimate neutron multiplication for the reflected cases, which correlated positively with expected multiplication, but the nature of the correlation (sub- or superlinear) changed between material types. Finally, we found that the areal density of the reflector shells had a linear correlation with the integral of the two-region model fit. Therefore, we expect that with knowledge of the reflector composition, one could determine the shell thickness, or vice versa. Furthermore, up to a certain reflector amount and thickness, the two-region model provides a way of distinguishing bare and reflected plutonium assemblies.
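
    The decay-constant extraction can be illustrated by fitting a single exponential to the positive side of a synthetic TOFFEE-like histogram over 0-100 ns; the invented histogram and the single-exponential form are simplifications of the paper's point kinetics models.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, amplitude, decay_rate, background):
        return amplitude * np.exp(-decay_rate * t) + background

    rng = np.random.default_rng(4)
    t = np.linspace(0, 100, 101)                    # ns
    counts = rng.poisson(model(t, 500.0, 0.045, 20.0)).astype(float)

    popt, pcov = curve_fit(model, t, counts, p0=(400.0, 0.03, 10.0))
    print(f"decay constant ~ {popt[1]:.4f} 1/ns, "
          f"sigma ~ {np.sqrt(pcov[1, 1]):.4f}")
    ```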

  2. Loop transfer recovery for general nonminimum phase discrete time systems. I - Analysis

    NASA Technical Reports Server (NTRS)

    Chen, Ben M.; Saberi, Ali; Sannuti, Peddapullaiah; Shamash, Yacov

    1992-01-01

    A complete analysis of loop transfer recovery (LTR) for general nonstrictly proper, not necessarily minimum phase discrete time systems is presented. Three different observer-based controllers, namely "prediction estimator" based controllers and full- or reduced-order "current estimator" based controllers, are used. The analysis corresponding to all three controllers is unified into a single mathematical framework. The LTR analysis given here focuses on three fundamental issues: (1) the recoverability of a target loop when it is arbitrarily given, (2) the recoverability of a target loop while taking into account its specific characteristics, and (3) the establishment of necessary and sufficient conditions on the given system so that it has at least one recoverable target loop transfer function or sensitivity function. Various differences that arise in the LTR analysis of continuous and discrete systems are pointed out.

  3. Multivariate survivorship analysis using two cross-sectional samples.

    PubMed

    Hill, M E

    1999-11-01

    As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.
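
    A sketch of the idea on invented cohort cells: each covariate cell appears in both cross-sections, the cell survival ratio N2/N1 is modeled as log S = X @ beta, and the fit is by weighted least squares. This illustrates the approach rather than reproducing the paper's exact estimator.

    ```python
    import numpy as np

    n1 = np.array([900.0, 650.0, 800.0, 500.0])   # cell counts, first sample
    n2 = np.array([700.0, 480.0, 560.0, 330.0])   # same cohort, later sample
    X = np.array([[1, 0, 0],                      # intercept + two dummies
                  [1, 1, 0],
                  [1, 0, 1],
                  [1, 1, 1]], dtype=float)

    log_s = np.log(n2 / n1)                       # cell log survival
    w = np.sqrt(n1)                               # weight cells by exposure
    beta = np.linalg.lstsq(X * w[:, None], log_s * w, rcond=None)[0]
    print(beta, np.exp(X @ beta))                 # fitted survival probabilities
    ```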

  4. A spatially explicit capture-recapture estimator for single-catch traps.

    PubMed

    Distiller, Greg; Borchers, David L

    2015-11-01

    Single-catch traps are frequently used in live-trapping studies of small mammals. Thus far, a likelihood for single-catch traps has proven elusive and usually the likelihood for multicatch traps is used for spatially explicit capture-recapture (SECR) analyses of such data. Previous work found the multicatch likelihood to provide a robust estimator of average density. We build on a recently developed continuous-time model for SECR to derive a likelihood for single-catch traps. We use this to develop an estimator based on observed capture times and compare its performance by simulation to that of the multicatch estimator for various scenarios with nonconstant density surfaces. While the multicatch estimator is found to be a surprisingly robust estimator of average density, its performance deteriorates with high trap saturation and increasing density gradients. Moreover, it is found to be a poor estimator of the height of the detection function. By contrast, the single-catch estimators of density, distribution, and detection function parameters are found to be unbiased or nearly unbiased in all scenarios considered. This gain comes at the cost of higher variance. If there is no interest in interpreting the detection function parameters themselves, and if density is expected to be fairly constant over the survey region, then the multicatch estimator performs well with single-catch traps. However if accurate estimation of the detection function is of interest, or if density is expected to vary substantially in space, then there is merit in using the single-catch estimator when trap saturation is above about 60%. The estimator's performance is improved if care is taken to place traps so as to span the range of variables that affect animal distribution. As a single-catch likelihood with unknown capture times remains intractable for now, researchers using single-catch traps should aim to incorporate timing devices with their traps.

  5. Unbinding slave spins in the Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Guerci, Daniele; Fabrizio, Michele

    2017-11-01

    We show that a generic single-orbital Anderson impurity model, lacking, for instance, any kind of particle-hole symmetry, can be exactly mapped without any constraint onto a resonant level model coupled to two Ising variables, which reduce to one if the hybridization is particle-hole symmetric. The mean-field solution of this model is found to be stable against unphysical spontaneous magnetization of the impurity, unlike the saddle-point solution in the standard slave-boson representation. Remarkably, the mean-field estimate of the Wilson ratio approaches the exact value R_W = 2 in the Kondo regime.

  6. Comparison of dew point temperature estimation methods in Southwestern Georgia

    Treesearch

    Marcus D. Williams; Scott L. Goodrick; Andrew Grundstein; Marshall Shepherd

    2015-01-01

    Recent upward trends in acres irrigated have been linked to increasing near-surface moisture. Unfortunately, stations with dew point data for monitoring near-surface moisture are sparse. Thus, models that estimate dew points from more readily observed data sources are useful. Daily average dew point temperatures were estimated and evaluated at 14 stations in...
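
    One widely used estimator of this kind is the Magnus approximation, which recovers dew point from the more commonly observed air temperature and relative humidity; the coefficients below are standard Magnus constants, and this is just one of the methods such comparisons cover:

    ```python
    import numpy as np

    def dew_point_c(temp_c, rh_percent, b=17.625, c=243.04):
        """Magnus approximation; temperature in deg C, RH in percent."""
        gamma = np.log(rh_percent / 100.0) + b * temp_c / (c + temp_c)
        return c * gamma / (b - gamma)

    # 30 C at 70 % RH -> dew point near 24 C, a humid Georgia summer day.
    print(dew_point_c(30.0, 70.0))
    ```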

  7. Estimation of global snow cover using passive microwave data

    NASA Astrophysics Data System (ADS)

    Chang, Alfred T. C.; Kelly, Richard E.; Foster, James L.; Hall, Dorothy K.

    2003-04-01

    This paper describes an approach to estimating global snow cover using satellite passive microwave data. Snow cover is detected using the high-frequency scattering signal from natural microwave radiation, which is observed by passive microwave instruments. Developed for the retrieval of global snow depth and snow water equivalent using the Advanced Microwave Scanning Radiometer EOS (AMSR-E), the algorithm uses passive microwave radiation along with a microwave emission model and a snow grain growth model to estimate snow depth. The microwave emission model is based on the Dense Media Radiative Transfer (DMRT) model, which uses the quasi-crystalline approach and sticky particle theory to predict the brightness temperature from a single-layered snowpack. The grain growth model is a generic single-layer model based on an empirical approach to predict snow grain size evolution with time. Gridded to the 25 km EASE-Grid projection, a daily record of Special Sensor Microwave Imager (SSM/I) snow depth estimates was generated for December 2000 to March 2001. The estimates are tested using ground measurements from two continental-scale river catchments (the Nelson River and the Ob River in Russia). This regional-scale testing of the algorithm shows that, for passive microwave estimates, the average daily snow depth retrieval standard error between estimated and measured snow depths ranges from 0 to 40 cm relative to point observations. Bias characteristics are different for each basin. A fraction of the error is related to uncertainties about the grain-growth initialization states and about grain size changes through the winter season, which directly affect the parameterization of the snow depth estimation in the DMRT model. Also, the algorithm does not include a correction for forest cover, and this effect is clearly observed in the retrievals. Finally, error is also related to scale differences between in situ ground measurements and area-integrated satellite estimates. With AMSR-E data, improvements to snow depth and water equivalent estimates are expected, since AMSR-E will have twice the spatial resolution of the SSM/I and will be able to better characterize the subnivean snow environment from an expanded range of microwave frequencies.
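
    For orientation, the classic static brightness-temperature-difference retrieval that algorithms like this one build on fits in two lines; the DMRT-based algorithm described above effectively replaces the fixed coefficient below with emission and grain-growth modeling. The 1.59 cm/K value is the legacy Chang-style coefficient and is quoted here only as an illustrative assumption.

    ```python
    import numpy as np

    def snow_depth_cm(tb_19h_k, tb_37h_k, coeff_cm_per_k=1.59):
        """Deeper snow scatters more at 37 GHz, widening the 19H-37H gap."""
        diff = np.asarray(tb_19h_k) - np.asarray(tb_37h_k)
        return np.clip(coeff_cm_per_k * diff, 0.0, None)   # negative -> no snow

    print(snow_depth_cm([250.0, 240.0, 260.0], [235.0, 210.0, 262.0]))
    ```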

  8. Prediction of Therapy Tumor-Absorbed Dose Estimates in I-131 Radioimmunotherapy Using Tracer Data Via a Mixed-Model Fit to Time Activity

    PubMed Central

    Koral, Kenneth F.; Avram, Anca M.; Kaminski, Mark S.; Dewaraja, Yuni K.

    2012-01-01

    Background: For individualized treatment planning in radioimmunotherapy (RIT), correlations must be established between tracer-predicted and therapy-delivered absorbed doses. The focus of this work was to investigate this correlation for tumors. Methods: The study analyzed 57 tumors in 19 follicular lymphoma patients treated with I-131 tositumomab and imaged with SPECT/CT multiple times after tracer and therapy administrations. Instead of the typical least-squares fit to a single tumor's measured time-activity data, estimation was accomplished via a biexponential mixed model in which the curves from multiple subjects were jointly estimated. The tumor-absorbed dose estimates were determined by patient-specific Monte Carlo calculation. Results: The mixed model gave realistic tumor time-activity fits that showed the expected uptake and clearance phases even with noisy data or missing time points. Correlation between tracer and therapy tumor-residence times (r=0.98; p<0.0001) and correlation between tracer-predicted and therapy-delivered mean tumor-absorbed doses (r=0.86; p<0.0001) were very high. The predicted and delivered absorbed doses were within ±25% (or within ±75 cGy) for 80% of tumors. Conclusions: The mixed-model approach is feasible for fitting tumor time-activity data in RIT treatment planning when individual least-squares fitting is not possible due to inadequate sampling points. The good correlation between predicted and delivered tumor doses demonstrates the potential of using a pretherapy tracer study for tumor dosimetry-based treatment planning in RIT. PMID:22947086
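
    The individual biexponential fit that the mixed model generalizes can be sketched as follows, with invented time-activity data; the joint estimation across subjects — the paper's actual contribution — is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(t, a, lam_clear, lam_uptake):
        """Uptake-and-clearance curve: a * (exp(-lc*t) - exp(-lu*t))."""
        return a * (np.exp(-lam_clear * t) - np.exp(-lam_uptake * t))

    t_h = np.array([1.0, 24.0, 48.0, 96.0, 144.0])   # hours post-injection
    act = np.array([0.8, 4.3, 3.4, 1.9, 1.1])        # percent injected activity

    popt, _ = curve_fit(biexp, t_h, act, p0=(5.0, 0.02, 0.2), maxfev=10000)
    a, lam_c, lam_u = popt
    residence_h = a * (1.0 / lam_c - 1.0 / lam_u) / 100.0  # integral of fit / A0
    print(popt, residence_h)
    ```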

  9. Storage flux uncertainty impact on eddy covariance net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Nicolini, Giacomo; Aubinet, Marc; Feigenwinter, Christian; Heinesch, Bernard; Lindroth, Anders; Mamadou, Ossénatou; Moderow, Uta; Mölder, Meelis; Montagnani, Leonardo; Rebmann, Corinna; Papale, Dario

    2017-04-01

    Complying with several assumptions and simplifications, most carbon budget studies based on eddy covariance (EC) measurements quantify the net ecosystem exchange (NEE) by summing the flux obtained by EC (Fc) and the storage flux (Sc). Sc is the rate of change of CO2 within the so-called control volume below the EC measurement level, given by the difference between the instantaneous concentration profiles at the beginning and end of the EC averaging period, divided by the averaging period. While Sc sums to approximately zero when cumulated over time, it can be significant over short periods. The approaches used to estimate Sc vary widely, from measurements based on a single sampling point (usually located at the EC measurement height) to measurements based on several sampling profiles distributed within the control volume. Furthermore, the number of sampling points within each profile varies according to their height and the ecosystem typology. It follows that measurement accuracy increases with the sampling intensity within the control volume. In this work we use the experimental dataset collected during the ADVEX campaign, in which the Sc flux was measured at three similar forest sites using 5 sampling profiles (towers). Our main objective is to quantify the impact of Sc measurement uncertainty on NEE estimates. Results show that different methods may produce substantially different Sc flux estimates, with problematic consequences where high-frequency (half-hourly) data are needed for the analysis. However, the uncertainty on long-term estimates may be tolerable.
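
    A sketch of a profile-based Sc estimate from concentration profiles at the start and end of an averaging period; heights and concentrations are invented, and molar density converts ppm to micromoles per cubic meter:

    ```python
    import numpy as np

    def storage_flux(z_m, ppm_start, ppm_end, dt_s=1800.0,
                     pressure_pa=101325.0, temp_k=293.0):
        R = 8.314                                   # J/(mol K)
        mol_m3 = pressure_pa / (R * temp_k)         # moles of air per m^3
        dc = (np.asarray(ppm_end) - np.asarray(ppm_start)) * mol_m3  # umol/m^3
        dz = np.diff(np.concatenate([[0.0], np.asarray(z_m)]))  # layer depths
        return np.sum(dc * dz) / dt_s               # umol m^-2 s^-1

    z = [1.0, 4.0, 10.0, 20.0, 30.0]                # sampling heights, m
    c0 = [420.0, 415.0, 410.0, 405.0, 402.0]        # profile at period start
    c1 = [424.0, 418.0, 412.0, 406.0, 402.5]        # profile at period end
    print(storage_flux(z, c0, c1))                  # ~0.9 umol m^-2 s^-1
    ```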

  10. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

    In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability in an underwater network. Robust point-to-point communication in underwater networks has become increasingly critical in several military and civilian applications. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, we consider a simplified underwater channel model in which the time-scale distortion on each path is assumed to be the same (a single-scale channel model, in contrast to a more general multi-scale model). A novel technique, called Partial FFT Demodulation, is derived by exploiting the nature of OFDM signaling and the time-scale distortion. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi-scale model for the underwater channel and assume that single-scale processing is performed at the receiver. We then derive optimized front-end pre-processing techniques to reduce the interference caused during single-scale processing of signals transmitted on a multi-scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of cooperation among rational sensor nodes whose objective is to improve their individual data rates. We first consider the problem of transmitter cooperation in a multiple access channel and investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, showing that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Toward studying the problem of receiver cooperation for a broadcast channel, we propose a game-theoretic model for the broadcast channel, derive a game-theoretic duality between the multiple access and the broadcast channel, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.

  11. Fusion of multi-temporal Airborne Snow Observatory (ASO) lidar data for mountainous vegetation ecosystems studies.

    NASA Astrophysics Data System (ADS)

    Ferraz, A.; Painter, T. H.; Saatchi, S.; Bormann, K. J.

    2016-12-01

    The NASA Jet Propulsion Laboratory developed the Airborne Snow Observatory (ASO), a coupled scanning lidar system and imaging spectrometer, to quantify the spatial distribution of snow volume and dynamics over mountain watersheds (Painter et al., 2015). To do this, ASO flies weekly over mountainous areas during the snowfall and snowmelt seasons. In addition, there are flights in snow-off conditions to calculate Digital Terrain Models (DTMs). In this study, we focus on the reliability of ASO lidar data for characterizing the 3D forest vegetation structure. The density of a single point cloud acquisition is nearly 1 pt/m2, which is not optimal to properly characterize vegetation. However, ASO covers a given study site up to 14 times a year, which enables computing a high-resolution point cloud by merging single acquisitions. In this study, we present a method to automatically register ASO multi-temporal lidar 3D point clouds. Although flight specifications do not change between acquisition dates, lidar datasets might have significant planimetric shifts due to inaccuracies in platform trajectory estimation introduced by the GPS system and drifts of the IMU. There is a large number of methodologies that address the problem of 3D data registration (Gressin et al., 2013). Briefly, they look for common primitive features in both datasets, such as building corners, structures like electric poles, DTM breaklines or deformations. However, they are not suited to our experiment. First, single-acquisition point clouds have low density, which makes the extraction of primitive features difficult. Second, the landscape changes significantly between flights due to snowfall and snowmelt. Therefore, we developed a method to automatically register point clouds using tree apexes as keypoints, because they are features expected to experience little change during the winter season. We applied the method to 14 lidar datasets (12 snow-on and 2 snow-off) acquired over the Tuolumne River Basin (California) in 2014. To assess the reliability of the merged point cloud, we analyze the quality of vegetation-related products such as canopy height models (CHMs) and vertical vegetation profiles.
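
    The apex-based registration can be sketched as nearest-neighbour pairing of apex coordinates followed by a robust median offset; apex extraction itself (local maxima of the canopy surface) is not shown, and the coordinates are invented:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_shift(apex_ref, apex_mov, max_pair_dist=3.0):
        """apex_*: (n, 2) x/y apex coordinates in m. Returns (dx, dy)."""
        d, idx = cKDTree(apex_ref).query(apex_mov, k=1)
        ok = d < max_pair_dist                   # drop unmatched apexes
        offsets = apex_ref[idx[ok]] - apex_mov[ok]
        return np.median(offsets, axis=0)        # robust to mismatches

    rng = np.random.default_rng(5)
    ref = rng.uniform(0, 200, size=(150, 2))
    mov = ref + np.array([1.4, -0.8]) + 0.1 * rng.standard_normal((150, 2))
    print(estimate_shift(ref, mov))   # correction to apply to the moving cloud
    ```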

  12. 47 CFR 68.105 - Minimum point of entry (MPOE) and demarcation point.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... be either the closest practicable point to where the wiring crosses a property line or the closest practicable point to where the wiring enters a multiunit building or buildings. The reasonable and... situations. (c) Single unit installations. For single unit installations existing as of August 13, 1990, and...

  13. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    PubMed

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
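
    A minimal ML-flavored sketch: gradient (Landweber) iterations on the summed squared residuals of several shifted, decimated 1D frames. Integer shifts and 2x decimation are stand-ins for the general warp, blur and sampling operators the paper treats.

    ```python
    import numpy as np

    def degrade(x, shift, factor=2):
        return np.roll(x, shift)[::factor]          # shift, then decimate

    def degrade_adjoint(r, shift, n, factor=2):
        up = np.zeros(n)
        up[::factor] = r                            # adjoint of decimation
        return np.roll(up, -shift)                  # adjoint of the shift

    rng = np.random.default_rng(7)
    n = 64
    truth = np.sin(2 * np.pi * np.arange(n) / 16.0)
    shifts = [0, 1, 2, 3]
    frames = [degrade(truth, s) + 0.02 * rng.standard_normal(n // 2)
              for s in shifts]

    x = np.zeros(n)
    for _ in range(300):                            # Landweber iterations
        grad = sum(degrade_adjoint(degrade(x, s) - y, s, n)
                   for s, y in zip(shifts, frames))
        x -= 0.2 * grad
    print(np.abs(x - truth).max())                  # close to the noise level
    ```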

  14. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection and image processing. Realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed to deal with source number estimation in array signal processing, which involves multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. Using a delay process, the single-sensor data are converted to a multi-dimensional form, and the data covariance matrix is constructed. Then the estimation algorithms used in array signal processing can be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although it performs poorly at low SNR, is able to accurately estimate the number of sources under colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
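
    The delay-embedding plus eigenvalue-criterion idea can be sketched with the standard MDL rule applied to the covariance of delay vectors; the embedding dimension and toy signal are assumptions, and the smoothing step is omitted:

    ```python
    import numpy as np

    def mdl_source_count(x, dim=8):
        y = np.lib.stride_tricks.sliding_window_view(x, dim)  # delay embedding
        n = y.shape[0]
        lam = np.sort(np.linalg.eigvalsh(np.cov(y.T)))[::-1]
        best_k, best_mdl = 0, np.inf
        for k in range(dim):
            tail = lam[k:]
            geo, arith = np.exp(np.mean(np.log(tail))), np.mean(tail)
            mdl = (-n * (dim - k) * np.log(geo / arith)
                   + 0.5 * k * (2 * dim - k) * np.log(n))
            if mdl < best_mdl:
                best_k, best_mdl = k, mdl
        return best_k

    t = np.arange(4000) / 1000.0
    rng = np.random.default_rng(6)
    x = (np.sin(2 * np.pi * 50 * t) + 0.7 * np.sin(2 * np.pi * 120 * t)
         + 0.1 * rng.standard_normal(t.size))
    print(mdl_source_count(x))   # two real tones span a rank-4 signal subspace
    ```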

  15. Reverse engineering gene regulatory networks from measurement with missing values.

    PubMed

    Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong

    2016-12-01

    Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: either the expression values of some genes at some time points, or the entire expression values of a single time point or of sets of consecutive time points. This significantly affects the performance of many gene expression analysis algorithms that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm. This approach produces better inference of the model parameters and hence more accurate prediction of the underlying GRN than conventional Gaussian approximation (GA) filters that ignore the missing data points.
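
    The unscented transform named above is easy to show in isolation: build sigma points from the current mean and covariance, push them through a nonlinearity, and recover the output moments. Standard scaling parameters are assumed; the GRN state-space model itself is not shown.

    ```python
    import numpy as np

    def sigma_points(mean, cov, alpha=0.5, beta=2.0, kappa=0.0):
        n = mean.size
        lam = alpha ** 2 * (n + kappa) - n
        sqrt_cov = np.linalg.cholesky((n + lam) * cov)
        pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
        w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        w_c = w_m.copy()
        w_m[0] = lam / (n + lam)
        w_c[0] = w_m[0] + (1 - alpha ** 2 + beta)
        return pts, w_m, w_c

    m, p = np.array([1.0, 0.5]), np.diag([0.04, 0.09])
    pts, w_m, w_c = sigma_points(m, p)
    y = np.sin(pts)                      # toy nonlinear measurement function
    y_mean = w_m @ y
    y_cov = (w_c[:, None] * (y - y_mean)).T @ (y - y_mean)
    print(y_mean, y_cov)
    ```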

  16. Woodland Mapping at Single-Tree Levels Using Object-Oriented Classification of Unmanned Aerial Vehicle (uav) Images

    NASA Astrophysics Data System (ADS)

    Chenari, A.; Erfanifard, Y.; Dehghani, M.; Pourghasemi, H. R.

    2017-09-01

    Remotely sensed datasets offer a reliable means to precisely estimate biophysical characteristics of individual species sparsely distributed in open woodlands. Moreover, object-oriented classification has exhibited significant advantages over other classification methods for the delineation of tree crowns and the recognition of species in various types of ecosystems. However, it is still unclear whether this widely used classification method retains its advantages on unmanned aerial vehicle (UAV) digital images for mapping vegetation cover at single-tree levels. In this study, UAV orthoimagery was classified using the object-oriented classification method to map part of a wild pistachio nature reserve in the Zagros open woodlands, Fars Province, Iran. This research focused on recognizing the two main species of the study area (i.e., wild pistachio and wild almond) and estimating their mean crown areas. The orthoimage of the study area consisted of 1,076 images with a spatial resolution of 3.47 cm and was georeferenced using 12 ground control points (RMSE = 8 cm) gathered by the real-time kinematic (RTK) method. The results showed that the UAV orthoimagery classified by the object-oriented method efficiently estimated the mean crown area of wild pistachios (52.09 ± 24.67 m2) and wild almonds (3.97 ± 1.69 m2), with no significant difference from the observed values (α = 0.05). In addition, the results showed that wild pistachios (accuracy of 0.90 and precision of 0.92) and wild almonds (accuracy of 0.90 and precision of 0.89) were well recognized by image segmentation. In general, we conclude that UAV orthoimagery can efficiently produce precise biophysical data on vegetation stands at single-tree levels, and is therefore suitable for the assessment and monitoring of open woodlands.

  17. The low single nucleotide polymorphism heritability of plasma and saliva cortisol levels.

    PubMed

    Neumann, Alexander; Direk, Nese; Crawford, Andrew A; Mirza, Saira; Adams, Hieab; Bolton, Jennifer; Hayward, Caroline; Strachan, David P; Payne, Erin K; Smith, Jennifer A; Milaneschi, Yuri; Penninx, Brenda; Hottenga, Jouke J; de Geus, Eco; Oldehinkel, Albertine J; van der Most, Peter J; de Rijke, Yolanda; Walker, Brian R; Tiemeier, Henning

    2017-11-01

    Cortisol is an important stress hormone affected by a variety of biological and environmental factors, such as the circadian rhythm, exercise and psychological stress. Cortisol is mostly measured using blood or saliva samples. A number of genetic variants have been found to contribute to cortisol levels with these methods. While the effects of several specific single genetic variants are known, the joint genome-wide contribution to cortisol levels is unclear. Our aim was to estimate the amount of cortisol variance explained by common single nucleotide polymorphisms, i.e., the SNP heritability, using a variety of cortisol measures, cohorts and analysis approaches. We analyzed morning plasma (n=5705) and saliva levels (n=1717), as well as diurnal saliva levels (n=1541), in the Rotterdam Study using genomic restricted maximum likelihood estimation. Additionally, linkage disequilibrium score regression was fitted on the results of genome-wide association studies (GWAS) performed by the CORNET consortium on morning plasma cortisol (n=12,597) and saliva cortisol (n=7703). No significant SNP heritability was detected for any cortisol measure, sample or analysis approach. Point estimates ranged from 0% to 9%. Morning plasma cortisol in the CORNET cohorts, the sample with the most power, had a 6% [95%CI: 0-13%] SNP heritability. The results consistently suggest a low SNP heritability of these acute and short-term measures of cortisol. The low SNP heritability may reflect the substantial environmental and, in particular, situational component of these cortisol measures. Future GWAS will require very large sample sizes. Alternatively, more long-term cortisol measures such as hair cortisol samples are needed to discover further genetic pathways regulating cortisol concentrations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Estimating vehicle height using homographic projections

    DOEpatents

    Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter

    2013-07-16

    Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
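
    As a sketch of the idea claimed above, and not the patented implementation, one can map the tracked salient points through each candidate homography and keep the height whose mapping makes pairwise distances most stable over time; the data structures here are assumptions:

    ```python
    import numpy as np

    def apply_h(H, pts):
        """Apply a 3x3 homography to an (N, 2) array of image points."""
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
        return p[:, :2] / p[:, 2:3]

    def estimate_height(salient_pts_per_frame, homographies_by_height):
        """Pick the candidate height whose homography keeps inter-point
        distances most nearly constant across video frames.
        `salient_pts_per_frame`: list of (N, 2) arrays, the same N points
        tracked over time. `homographies_by_height`: height -> 3x3 matrix."""
        best_h, best_var = None, np.inf
        for h, H in homographies_by_height.items():
            # Pairwise distances between mapped salient points, per frame.
            dists = []
            for pts in salient_pts_per_frame:
                q = apply_h(H, pts)
                diff = q[:, None, :] - q[None, :, :]
                d = np.sqrt((diff ** 2).sum(-1))
                dists.append(d[np.triu_indices(len(q), 1)])
            # A rigid group at the correct height yields near-constant distances.
            var = np.var(np.stack(dists), axis=0).mean()
            if var < best_var:
                best_h, best_var = h, var
        return best_h
    ```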

  19. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Computer-aided detection (CADe) systems are typically designed to work at a given operating point: the device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which turns the comparison of two CADe systems into a multiple-comparison problem. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated, and the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods in terms of both FWER and power.
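
    The paper's correlation-adjusted step-up procedure is not spelled out in the abstract, but the two baselines it is compared against are standard. Below is a sketch of Bonferroni and of Hochberg's step-up procedure, the latter used as a common stand-in for a generic step-up method:

    ```python
    import numpy as np

    def bonferroni_reject(pvals, alpha=0.05):
        """Reject H_i when p_i <= alpha / m. Controls the FWER but is
        conservative when the test statistics are correlated."""
        p = np.asarray(pvals)
        return p <= alpha / p.size

    def hochberg_step_up(pvals, alpha=0.05):
        """Hochberg's step-up procedure: scan the sorted p-values from
        largest to smallest, find the largest k (1-based) with
        p_(k) <= alpha / (m - k + 1), and reject H_(1..k)."""
        p = np.asarray(pvals)
        m = p.size
        order = np.argsort(p)            # ascending
        reject = np.zeros(m, dtype=bool)
        for k in range(m - 1, -1, -1):   # k is the 0-based sorted index
            if p[order[k]] <= alpha / (m - k):
                reject[order[: k + 1]] = True
                break
        return reject
    ```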

  20. Interpretation of hydraulic conductivity in a fractured-rock aquifer over increasingly larger length dimensions

    USGS Publications Warehouse

    Shapiro, Allen M.; Ladderud, Jeffery; Yager, Richard M.

    2015-01-01

    A comparison of the hydraulic conductivity over increasingly larger volumes of crystalline rock was conducted in the Piedmont physiographic region near Bethesda, Maryland, USA. Fluid-injection tests were conducted on intervals of boreholes isolating closely spaced fractures. Single-hole tests were conducted by pumping in open boreholes for approximately 30 min, and an interference test was conducted by pumping a single borehole over 3 days while monitoring nearby boreholes. An estimate of the hydraulic conductivity of the rock over hundreds of meters was inferred from simulating groundwater inflow into a kilometer-long section of a Washington Metropolitan Area Transit Authority tunnel in the study area, and a groundwater modeling investigation over the Rock Creek watershed provided an estimate of the hydraulic conductivity over kilometers. The majority of groundwater flow is confined to relatively few fractures at a given location. Boreholes installed to depths of approximately 50 m have one or two highly transmissive fractures; the transmissivity of the remaining fractures ranges over five orders of magnitude. Estimates of hydraulic conductivity over increasingly larger rock volumes varied by less than half an order of magnitude. While many investigations point to increasing hydraulic conductivity as a function of the measurement scale, a comparison with selected investigations shows that the effective hydraulic conductivity estimated over larger volumes of rock can either increase, decrease, or remain stable as a function of the measurement scale. Caution must be exercised in characterizing effective hydraulic properties in fractured rock for the purposes of groundwater management.

  1. Predicting the Risk of Developing New Cerebral Lesions After Stereotactic Radiosurgery or Fractionated Stereotactic Radiotherapy for Brain Metastases from Renal Cell Carcinoma.

    PubMed

    Rades, Dirk; Dziggel, Liesa; Blanck, Oliver; Gebauer, Niklas; Bartscht, Tobias; Schild, Steven E

    2018-05-01

    To create an instrument for estimating the risk of new brain metastases after stereotactic radiosurgery (SRS) or fractionated stereotactic radiotherapy (FSRT) alone in patients with renal cell carcinoma (RCC). In 45 patients with 1-3 brain metastases, seven characteristics were analyzed for association with freedom from new brain metastases (age, gender, performance score, number and sites of brain metastases, extra-cerebral metastasis, interval from RCC diagnosis to SRS/FSRT). Lower risk of subsequent brain lesions after RT was associated with single metastasis (p=0.043) and supratentorial involvement only (p=0.018). Scoring points were: One metastasis=1, 2-3 metastases=0, supratentorial alone=1, infratentorial with/without supratentorial=0. Scores of 0, 1 and 2 points were associated with 6-month rates of freedom from subsequent brain lesions of 25%, 74% and 92% (p=0.008). After combining groups with 1 and 2 points, 6-month rates were 25% for those with 0 points and 83% for those with 1-2 points (p=0.002). Two groups were identified with different risks of new brain metastases after SRS or FSRT alone. High-risk patients may benefit from additional whole-brain irradiation. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
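
    The scoring rule above is simple enough to state as code. A minimal sketch, where the function name and return format are illustrative rather than from the paper:

    ```python
    def rcc_brain_met_score(n_metastases, infratentorial_involved):
        """Score from the two significant factors reported above:
        single metastasis = 1 point, 2-3 metastases = 0 points;
        supratentorial only = 1 point, any infratentorial = 0 points."""
        points = (1 if n_metastases == 1 else 0) \
                 + (0 if infratentorial_involved else 1)
        # Combined risk groups from the abstract: 0 points -> high risk of
        # new lesions (25% 6-month freedom), 1-2 points -> low risk (83%).
        return points, ("high risk" if points == 0 else "low risk")

    print(rcc_brain_met_score(1, False))  # (2, 'low risk')
    ```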

  2. Scanning the skeleton of the 4D F-theory landscape

    NASA Astrophysics Data System (ADS)

    Taylor, Washington; Wang, Yi-Nan

    2018-01-01

    Using a one-way Monte Carlo algorithm from several different starting points, we obtain an approximation to the distribution of toric threefold bases that can be used in four-dimensional F-theory compactification. We separate the threefold bases into "resolvable" ones, where the Weierstrass polynomials (f, g) can vanish to order (4, 6) or higher on codimension-two loci, and the "good" bases, where these (4, 6) loci are not allowed. A simple estimate suggests that the number of distinct resolvable base geometries exceeds 10^3000, with over 10^250 "good" bases, though the actual numbers are likely much larger. We find that the good bases are concentrated at specific "end points" with special isolated values of h^{1,1} that are bigger than 1,000. These end point bases give Calabi-Yau fourfolds with specific Hodge numbers mirror to elliptic fibrations over simple threefolds. The non-Higgsable gauge groups on the end point bases are almost entirely made of products of E_8, F_4, G_2 and SU(2). Nonetheless, we find a large class of good bases with a single non-Higgsable SU(3). Moreover, by randomly contracting the end point bases, we find many resolvable bases with h^{1,1}(B) ~ 50-200 that cannot be contracted to another smooth threefold base.

  3. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.

    PubMed

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet

    2018-04-23

    In recent years, there has been significant progress in the field of indoor robot localization. To precisely recover its position, a robot usually relies on multiple on-board sensors, which raises the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To reduce the computational effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. In contrast to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Points labeled as corners are then matched only with corner candidates, and points labeled as lines only with line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the number of ICP iterations. Finally, based on well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method significantly reduces the computational effort while preserving localization precision.

  4. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm

    PubMed Central

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A.; Ravankar, Abhijeet

    2018-01-01

    In recent years, there has been significant progress in the field of indoor robot localization. To precisely recover its position, a robot usually relies on multiple on-board sensors, which raises the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To reduce the computational effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. In contrast to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Points labeled as corners are then matched only with corner candidates, and points labeled as lines only with line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the number of ICP iterations. Finally, based on well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method significantly reduces the computational effort while preserving localization precision. PMID:29690624

  5. Estimating the Triple-Point Isotope Effect and the Corresponding Uncertainties for Cryogenic Fixed Points

    NASA Astrophysics Data System (ADS)

    Tew, W. L.

    2008-02-01

    The sensitivities of melting temperatures to isotopic variations in monatomic and diatomic atmospheric gases are estimated using both theoretical and semi-empirical methods. The current state of knowledge of the vapor-pressure isotope effects (VPIE) and triple-point isotope effects (TPIE) is briefly summarized for the noble gases (except He) and for selected diatomic molecules including oxygen. An approximate expression is derived to estimate the relative shift in the melting temperature with isotopic substitution. In general, the magnitude of the effects diminishes with increasing molecular mass and increasing temperature. Knowledge of the VPIE, the molar volumes, and the heat of fusion is sufficient to estimate the temperature shift or isotopic sensitivity coefficient via the derived expression. The usefulness of this approach is demonstrated by estimating isotopic sensitivities and uncertainties for the triple points of xenon and molecular oxygen, for which few documented estimates were previously available. The calculated sensitivities from this study are considerably higher than previous estimates for Xe, and lower than other estimates in the case of oxygen. In both cases, the predicted sensitivities are small, and the resulting variations in triple-point temperatures due to mass fractionation effects are less than 20 μK.

  6. Estimating the physicochemical properties of polyhalogenated aromatic and aliphatic compounds using UPPER: part 1. Boiling point and melting point.

    PubMed

    Admire, Brittany; Lian, Bo; Yalkowsky, Samuel H

    2015-01-01

    The UPPER (Unified Physicochemical Property Estimation Relationships) model uses enthalpic and entropic parameters to estimate 20 biologically relevant properties of organic compounds. The model has been validated by Lian and Yalkowsky on a data set of 700 hydrocarbons. The aim of this work is to expand the UPPER model to estimate the boiling and melting points of polyhalogenated compounds. In this work, 19 new group descriptors are defined and used to predict the transition temperatures of an additional 1288 compounds. The boiling points of 808 and the melting points of 742 polyhalogenated compounds are predicted with average absolute errors of 13.56 K and 25.85 K, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. A modified split Hopkinson pressure bar for toughness tests

    NASA Astrophysics Data System (ADS)

    Granier, N.; Grunenwald, T.

    2006-08-01

    In order to characterize material toughness and to study crack arrest under dynamic loading conditions, a new testing device has been developed at CEA/Valduc. A Split Hopkinson Pressure Bar (SHPB) has been modified: it is now composed of a single incident bar and a double transmitter bar. With this facility, a notched specimen can be loaded under three-point bending conditions. Qualification tests with titanium and steel notched samples are presented. Data treatment software has been adapted to estimate the sample deflection as a function of time and to compute the energy balance. These results are compared with classical Charpy experiments. The effects of various contact areas between the specimen and the bars are studied to determine their influence on the measurements; the advantage of a "knife" contact over a plane one is clearly demonstrated. All results obtained with this new testing device are in good agreement and show reduced scatter.

  8. Configuration study for a 30 GHz monolithic receive array, volume 1

    NASA Technical Reports Server (NTRS)

    Nester, W. H.; Cleaveland, B.; Edward, B.; Gotkis, S.; Hesserbacker, G.; Loh, J.; Mitchell, B.

    1984-01-01

    Gregorian, Cassegrain, and single reflector systems were analyzed in configuration studies for communications satellite receive antennas. Parametric design and performance curves were generated. A preliminary design of each reflector/feed system was derived including radiating elements, beam-former network, beamsteering system, and MMIC module architecture. Performance estimates and component requirements were developed for each design. A recommended design was selected for both the scanning beam and the fixed beam case. Detailed design and performance analysis results are presented for the selected Cassegrain configurations. The final design point is characterized in detail and performance measures evaluated in terms of gain, sidelobe level, noise figure, carrier-to-interference ratio, prime power, and beamsteering. The effects of mutual coupling and excitation errors (including phase and amplitude quantization errors) are evaluated. Mechanical assembly drawings are given for the final design point. Thermal design requirements are addressed in the mechanical design.

  9. Cluster Analysis and Gaussian Mixture Estimation of Correlated Time-Series by Means of Multi-dimensional Scaling

    NASA Astrophysics Data System (ADS)

    Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi

    We investigate cross-correlations between typical Japanese stocks collected through the Yahoo!Japan website (http://finance.yahoo.co.jp/). By applying multi-dimensional scaling (MDS) to the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit the data set with a mixture of Gaussian densities. By minimizing the Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. It might naturally be assumed that the two-dimensional data points of all stocks shrink into a single small region when an economic crisis takes place. This assumption is checked numerically against the empirical Japanese stock data, for instance around 11 March 2011.
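
    For reference, the fit-and-select step described above can be reproduced with an off-the-shelf Gaussian mixture implementation; the two-dimensional data below are a hypothetical stand-in for the MDS coordinates of the stocks:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Stand-in for the 2-D MDS coordinates of the stocks (hypothetical data).
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.5, (30, 2))])

    # Fit mixtures with increasing numbers of components and keep the one
    # minimizing AIC, mirroring the model-selection step described above.
    models = [GaussianMixture(n_components=k, random_state=0).fit(X)
              for k in range(1, 7)]
    best = min(models, key=lambda m: m.aic(X))
    print("components chosen by AIC:", best.n_components)
    ```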

  10. Point Counts of Birds: What Are We Estimating?

    Treesearch

    Douglas H. Johnson

    1995-01-01

    Point counts of birds are made for many reasons, including estimating local densities, determining population trends, assessing habitat preferences, and exploiting the activities of recreational birdwatchers. Problems arise unless there is a clear understanding of what point counts mean in terms of actual populations of birds. Criteria for conducting point counts...

  11. Explicit hydration of ammonium ion by correlated methods employing molecular tailoring approach

    NASA Astrophysics Data System (ADS)

    Singh, Gurmeet; Verma, Rahul; Wagle, Swapnil; Gadre, Shridhar R.

    2017-11-01

    Explicit hydration studies of ions require accurate estimation of interaction energies. This work explores the explicit hydration of the ammonium ion (NH4+) employing Møller-Plesset second-order (MP2) perturbation theory, an accurate yet relatively inexpensive correlated method. Several initial geometries of NH4+(H2O)n (n = 4 to 13) clusters are subjected to MP2-level geometry optimisation with the correlation-consistent aug-cc-pVDZ (aVDZ) basis set. For large clusters (viz. n > 8), the molecular tailoring approach (MTA) is used for single point energy evaluation at the MP2/aVTZ level to estimate MP2-level binding energies (BEs) at the complete basis set (CBS) limit. The minimal nature of the clusters up to n ≤ 8 is confirmed by vibrational frequency calculations at the MP2/aVDZ level of theory, whereas for larger clusters (9 ≤ n ≤ 13) such calculations are effected via the grafted MTA (GMTA) method. Zero point energy (ZPE) corrections are applied to all isomers lying within 1 kcal/mol of the lowest-energy one. The resulting frequencies in the N-H region (2900-3500 cm-1) and the O-H stretching region (3300-3900 cm-1) are found to be in excellent agreement with the available experimental findings for 4 ≤ n ≤ 13. Furthermore, GMTA is also applied to calculate the BEs of these clusters at the coupled cluster singles and doubles with perturbative triples (CCSD(T)) level of theory with the aVDZ basis set. This work thus demonstrates what is currently possible on contemporary multi-core computers for studying explicit molecular hydration at correlated levels of theory.

  12. The Effect of Capital Gains Taxation on Home Sales: Evidence from the Taxpayer Relief Act of 1997

    PubMed Central

    Shan, Hui

    2010-01-01

    The Taxpayer Relief Act of 1997 (TRA97) significantly changed the tax treatment of housing capital gains in the United States. Before 1997, homeowners were subject to capital gains taxation when they sold their houses unless they purchased replacement homes of equal or greater value. Since 1997, homeowners can exclude capital gains of $500,000 (or $250,000 for single filers) when they sell their houses. Such dramatic changes provide a good opportunity to study the lock-in effect of capital gains taxation on home sales. Using 1982–2008 transaction data on single-family houses in 16 affluent towns within the Boston metropolitan area, I find that TRA97 reversed the lock-in effect of capital gains taxes on houses with low and moderate capital gains. Specifically, the semiannual sales rate of houses with positive gains up to $500,000 increased by 0.40–0.62 percentage points after TRA97, representing a 19–24 percent increase from the pre-TRA97 baseline sales rate. In contrast, I do not find TRA97 to have a significant effect on houses with gains above $500,000. Moreover, the short-term effect of TRA97 is much larger than the long-term effect, suggesting that many previously locked-in homeowners took advantage of the exclusions immediately after TRA97. In addition, I exploit the 2001 and 2003 legislative changes in the capital gains tax rate to estimate the tax elasticity of home sales during the post-TRA97 period. The estimation results suggest that a $10,000 increase in capital gains taxes reduces the semiannual home sales rate by about 0.1–0.2 percentage points, or 6–13 percent from the post-TRA97 average sales rate. PMID:21170145

  13. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in obtaining accurate measures, especially three-dimensional tunnel positions. This is largely due to the variability of individual knee joint pose relative to the X-ray plates. Accurate results have been reported using postoperative CT; however, its extensive use in clinical routine is hampered by the requirement of a CT scan of the individual patient, which is not available for most ACL reconstructions. These difficulties are addressed by the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it with an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43°±1.19°, 0.45°±2.17°, 0.23°±1.05°) and (0.03±0.55, -0.03±0.54, -2.73±1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53±0.30 mm distance error.

  14. An adhered-particle analysis system based on concave points

    NASA Astrophysics Data System (ADS)

    Wang, Wencheng; Guan, Fengnian; Feng, Lin

    2018-04-01

    Particles that adhere together interfere with image analysis in computer vision systems. In this paper, a method based on concave points is designed. First, a corner detection algorithm is applied after image segmentation to obtain a rough set of potential concave points. The method then computes the area ratio of the candidates to accurately localize the final separation points. Finally, it uses the separation points of each particle and the neighboring pixels to estimate the original particles before adhesion and provides estimated profile images. The experimental results show that this approach provides good results that match human visual perception.
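
    The abstract names corner detection plus an area-ratio test but gives no formulas, so the sketch below substitutes OpenCV convexity defects, a common alternative way to locate concave candidate points on particle contours; it should be read as a stand-in rather than the authors' method:

    ```python
    import cv2
    import numpy as np

    def concave_points(binary_mask, min_depth=3.0):
        """Find candidate separation (concave) points on particle contours
        via convexity defects. `min_depth` (pixels) is a tuning assumption."""
        contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        points = []
        for cnt in contours:
            hull = cv2.convexHull(cnt, returnPoints=False)
            defects = cv2.convexityDefects(cnt, hull)
            if defects is None:
                continue
            for start, end, far, depth in defects[:, 0]:
                # OpenCV reports the defect depth in 1/256-pixel units.
                if depth / 256.0 >= min_depth:
                    points.append(tuple(cnt[far][0]))
        return points
    ```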

  15. Marital status and colon cancer outcomes in US Surveillance, Epidemiology and End Results registries: does marriage affect cancer survival by gender and stage?

    PubMed

    Wang, Li; Wilson, Sven E; Stewart, David B; Hollenbeak, Christopher S

    2011-10-01

    Marital status has been associated with outcomes in several cancer sites in the literature, including breast cancer, but little is known about colon cancer, the fourth most common cancer in the US. A total of 127,753 patients with colon cancer diagnosed between 1992 and 2006 were identified in the US Surveillance, Epidemiology and End Results (SEER) Program. Marital status was categorized as married, single, separated/divorced, or widowed. Chi-square tests were used to examine the association between marital status and other variables. The Kaplan-Meier method was used to estimate survival curves, and Cox proportional hazards models were fit to estimate the effect of marital status on survival. Married patients were more likely to be diagnosed at an earlier stage (and, for men, also at an older age) than single and separated/divorced patients, and more likely to receive surgical treatment than all other marital groups (all p<0.0001). The five-year survival rate for single patients was six percentage points lower than for married patients, for both men and women. After controlling for age, race, cancer stage and receipt of surgery, married patients had a significantly lower risk of death from cancer than single patients (for men, HR: 0.86, CI: 0.82-0.90; for women, HR: 0.87, CI: 0.83-0.91). Within the same cancer stage, the survival differences between single and married patients were strongest for localized and regional stages, which have intermediate survival rates compared with in situ and distant stages, so that support from marriage could make a substantial difference. Marriage was associated with better colon cancer outcomes for both men and women, and being single was associated with a lower survival rate from colon cancer. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Comparative MR study of hepatic fat quantification using single-voxel proton spectroscopy, two-point dixon and three-point IDEAL.

    PubMed

    Kim, Hyeonjin; Taksali, Sara E; Dufour, Sylvie; Befroy, Douglas; Goodman, T Robin; Petersen, Kitt Falk; Shulman, Gerald I; Caprio, Sonia; Constable, R Todd

    2008-03-01

    Hepatic fat fraction (HFF) was measured in 28 lean/obese humans by single-voxel proton spectroscopy (MRS), a two-point Dixon (2PD), and a three-point iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) method (3PI). For the lean, obese, and total subject groups, the range of HFF measured by MRS was 0.3-3.5% (1.1 ± 1.4%), 0.3-41.5% (11.7 ± 12.1%), and 0.3-41.5% (10.1 ± 11.6%), respectively. For the same groups, the HFF measured by 2PD was -6.3-2.2% (-2.0 ± 3.7%), -2.4-42.9% (12.9 ± 13.8%), and -6.3-42.9% (10.5 ± 13.7%), respectively, and for 3PI they were 7.9-12.8% (10.1 ± 2.0%), 11.1-49.3% (22.0 ± 12.2%), and 7.9-49.3% (20.0 ± 11.8%), respectively. The HFF measured by MRS was highly correlated with those measured by 2PD (r = 0.954, P < 0.001) and 3PI (r = 0.973, P < 0.001). With the MRS data as a reference, the percentages of correct differentiation between normal and fatty liver with the MRI methods ranged from 68-93% for 2PD and 64-89% for 3PI. Our study demonstrates that the apparent HFF measured by the MRI methods can vary significantly depending on the choice of water-fat separation methods and sequences. Such variability may limit the clinical application of the MRI methods, particularly when a diagnosis of early fatty liver needs to be made. Therefore, protocol-specific establishment of cutoffs for liver fat content may be necessary. (c) 2008 Wiley-Liss, Inc.
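
    For background, the two-point Dixon calculation referred to above combines in-phase and out-of-phase magnitude images into water and fat maps. A simplified sketch of that arithmetic, ignoring the T2* and field-inhomogeneity corrections a clinical implementation needs:

    ```python
    import numpy as np

    def dixon_fat_fraction(in_phase, out_phase):
        """Two-point Dixon water-fat separation on magnitude images:
        water and fat are the half-sum and half-difference of the in-phase
        and out-of-phase images, and the fat fraction is F / (W + F)."""
        water = (in_phase + out_phase) / 2.0
        fat = (in_phase - out_phase) / 2.0
        return fat / np.maximum(water + fat, 1e-9)  # guard against division by zero
    ```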

  17. Effects of Ayurvedic Oil-Dripping Treatment with Sesame Oil vs. with Warm Water on Sleep: A Randomized Single-Blinded Crossover Pilot Study.

    PubMed

    Tokinobu, Akiko; Yorifuji, Takashi; Tsuda, Toshihide; Doi, Hiroyuki

    2016-01-01

    Ayurvedic oil-dripping treatment (Shirodhara) is often used for treating sleep problems. However, few properly designed studies have been conducted, and the quantitative effect of Shirodhara is unclear. This study sought to quantitatively evaluate the effect of sesame oil Shirodhara (SOS) against warm water Shirodhara (WWS) on improving sleep quality and quality of life (QOL) among persons reporting sleep problems. This randomized, single-blinded, crossover study recruited 20 participants. Each participant received seven 30-minute sessions within 2 weeks with either liquid. The washout period was at least 2 months. The Shirodhara procedure was conducted by a robotic oil-drip system. The outcomes were assessed by the Pittsburgh Sleep Quality Index (PSQI) for sleep quality, Epworth Sleepiness Scale (ESS) for daytime sleepiness, World Health Organization Quality of Life 26 (WHO-QOL26) for QOL, and a sleep monitor instrument for objective sleep measures. Changes between baseline and follow-up periods were compared between the two types of Shirodhara. Analysis was performed with generalized estimating equations. Of 20 participants, 15 completed the study. SOS improved sleep quality, as measured by PSQI. The SOS score was 1.83 points lower (95% confidence interval [CI], -3.37 to -0.30) at 2-week follow-up and 1.73 points lower (95% CI, -3.84 to 0.38) than WWS at 6-week follow-up. Although marginally significant, SOS also improved QOL by 0.22 points at 2-week follow-up and 0.19 points at 6-week follow-up compared with WWS. After SOS, no beneficial effects were observed on daytime sleepiness or objective sleep measures. This pilot study demonstrated that SOS may be a safe potential treatment to improve sleep quality and QOL in persons with sleep problems.

  18. The relevance of time series in molecular ecology and conservation biology.

    PubMed

    Habel, Jan C; Husemann, Martin; Finger, Aline; Danley, Patrick D; Zachos, Frank E

    2014-05-01

    The genetic structure of a species is shaped by the interaction of contemporary and historical factors. Analyses of individuals from the same population sampled at different points in time can help to disentangle the effects of current and historical forces and facilitate the understanding of the forces driving the differentiation of populations. The use of such time series allows for the exploration of changes at the population and intraspecific levels over time. Material from museum collections plays a key role in understanding and evaluating observed population structures, especially if large numbers of individuals have been sampled from the same locations at multiple time points. In these cases, changes in population structure can be assessed empirically. The development of new molecular markers relying on short DNA fragments (such as microsatellites or single nucleotide polymorphisms) allows for the analysis of long-preserved and partially degraded samples. Recently developed techniques to construct genome libraries with a reduced complexity and next generation sequencing and their associated analysis pipelines have the potential to facilitate marker development and genotyping in non-model species. In this review, we discuss the problems with sampling and available marker systems for historical specimens and demonstrate that temporal comparative studies are crucial for the estimation of important population genetic parameters and to measure empirically the effects of recent habitat alteration. While many of these analyses can be performed with samples taken at a single point in time, the measurements are more robust if multiple points in time are studied. Furthermore, examining the effects of habitat alteration, population declines, and population bottlenecks is only possible if samples before and after the respective events are included. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.

  19. Silicon displacement threshold energy determined by electron paramagnetic resonance and positron annihilation spectroscopy in cubic and hexagonal polytypes of silicon carbide

    NASA Astrophysics Data System (ADS)

    Kerbiriou, X.; Barthe, M.-F.; Esnouf, S.; Desgardin, P.; Blondiaux, G.; Petite, G.

    2007-05-01

    For both electronic and nuclear applications, it is of major interest to understand the properties of point defects in silicon carbide (SiC). Low-energy electron irradiation is expected to create primary defects in materials. SiC single crystals were irradiated with electrons at two beam energies in order to investigate the silicon displacement threshold energy in SiC. This paper presents the characterization of the electron-irradiation-induced point defects in both the hexagonal (6H) and cubic (3C) SiC single-crystal polytypes using both positron annihilation spectroscopy (PAS) and electron paramagnetic resonance (EPR). The nature and concentration of the generated point defects depend on the energy of the electron beam and on the polytype. After electron irradiation at an energy of 800 keV, V_Si mono-vacancies and V_Si-V_C di-vacancies are detected in both the 3C and 6H-SiC polytypes. In contrast, the nature of the point defects detected after electron irradiation at 190 keV depends strongly on the polytype. In 6H-SiC crystals, silicon Frenkel pairs (V_Si-Si) are detected, whereas only carbon-vacancy-related defects are detected in 3C-SiC crystals. The difference in the distribution of defects detected in the two polytypes can be explained by different values of the silicon displacement threshold energy for 3C and 6H-SiC. By comparing the calculated theoretical numbers of displaced atoms with the defect numbers measured using EPR, the silicon displacement threshold energy is estimated to be slightly lower than 20 eV in the 6H polytype and close to 25 eV in the 3C polytype.

  20. Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme

    NASA Astrophysics Data System (ADS)

    Hsin, Cheng-Ho; Inigo, Rafael M.

    1990-03-01

    The detection and estimation of motion generally involve computing a velocity field from time-varying images. A new modified spatio-temporal gradient scheme to determine motion is proposed, derived from gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations: the second directional derivatives of image intensity at an edge point in the smoothed image are required to be constant at times t and t+L. The scheme has two stages: spatio-temporal filtering and velocity estimation. First, image sequences are processed by a set of oriented spatio-temporal filters designed using a Gaussian derivative model. The velocity is then estimated from these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage over traditional spatio-temporal gradient schemes is that an infinite number of motion constraint equations are derived instead of only one; the scheme therefore solves the aperture problem without requiring any additional assumptions and remains a purely local process. The second advantage is that, because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided, so the error in gradient measurement is reduced significantly. The third advantage is that, during motion detection and estimation, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by the parameters of the oriented spatio-temporal filters. Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a large range of velocities can be detected. The scheme has been tested on both synthetic and real images, and the simulation results are very satisfactory.

  1. Estimating average annual per cent change in trend analysis

    PubMed Central

    Clegg, Limin X; Hankey, Benjamin F; Tiwari, Ram; Feuer, Eric J; Edwards, Brenda K

    2009-01-01

    Trends in incidence or mortality rates over a specified time interval are usually described by the conventional annual per cent change (cAPC), under the assumption of a constant rate of change. When this assumption does not hold over the entire time interval, the trend may be characterized using the annual per cent changes from segmented analysis (sAPCs). This approach assumes that the change in rates is constant over each time partition defined by the transition points, but varies among different time partitions. Different groups (e.g. racial subgroups), however, may have different transition points and thus different time partitions over which they have constant rates of change, making comparison of sAPCs problematic across groups over a common time interval of interest (e.g. the past 10 years). We propose a new measure, the average annual per cent change (AAPC), which uses sAPCs to summarize and compare trends for a specific time period. The advantage of the proposed AAPC is that it takes into account the trend transitions, whereas the cAPC does not and can lead to erroneous conclusions. In addition, when the trend is constant over the entire time interval of interest, the AAPC has the advantage of reducing to both the cAPC and the sAPC. Moreover, because the estimated AAPC is based on the segmented analysis over the entire data series, any selected subinterval within a single time partition will yield the same AAPC estimate; that is, it will be equal to the estimated sAPC for that time partition. The cAPC, however, is re-estimated using data only from the selected subinterval; thus, its estimate may be sensitive to the subinterval selected. The AAPC estimation has been incorporated into the free segmented regression software Joinpoint, which is used by many registries throughout the world for characterizing trends in cancer rates. Copyright © 2009 John Wiley & Sons, Ltd. PMID:19856324
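
    The AAPC described above is the length-weighted average of the segment slopes on the log-rate scale, back-transformed to a per cent change. A small sketch of that calculation (variable names are illustrative; the weighting by segment length follows the description above):

    ```python
    import numpy as np

    def aapc(slopes, segment_lengths):
        """Average annual per cent change from segmented (Joinpoint) fits.
        `slopes` are the regression slopes b_i of log(rate) vs. year in each
        segment; `segment_lengths` are the numbers of years each segment
        covers. AAPC back-transforms the length-weighted mean slope:
        100 * (exp(sum(w_i * b_i) / sum(w_i)) - 1)."""
        b = np.asarray(slopes, dtype=float)
        w = np.asarray(segment_lengths, dtype=float)
        return 100.0 * (np.exp((w * b).sum() / w.sum()) - 1.0)

    # Example: 4 years at +3%/yr and 6 years at -1%/yr (slopes on the log
    # scale) give an AAPC of about +0.6% per year.
    print(aapc([np.log(1.03), np.log(0.99)], [4, 6]))
    ```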

  2. Effect of Chlorine Substitution on Sulfide Reactivity with OH Radicals

    DTIC Science & Technology

    2008-09-01

    Single point energy: MP2/6-311+G(3df,2p) (LRG) • Zero point energy from a vibrational frequency analysis: MP2/6-31++G** (ZPE) • Extrapolated energy: E = E(QCI) + E(LARG) - E(SML) + ZPE • Characterize the TS • Use a three-point fit methodology: fit a harmonic potential to three CCSD single point energies

  3. 49 CFR 172.315 - Packages containing limited quantities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... applicable, for the entry as shown in the § 172.101 Table, and placed within a square-on-point border in... to the package as to be readily visible. The width of line forming the square-on-point must be at... square-on-points bearing a single ID number, or a single square-on-point large enough to include each...

  4. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-07-01

    We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained with both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We assess the quality of our results by using a mock catalogue split into samples by cuts in the r band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm give mildly better results at deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we analyze the impact of using photometric redshifts obtained with a non-representative training set on the detection of galaxy clusters via the density of galaxies in a field.
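
    The cumulative-distribution plus Monte Carlo estimator mentioned above amounts to inverse-CDF sampling from each galaxy's photometric redshift PDF. A minimal sketch, with the redshift grid and the toy PDF as assumptions:

    ```python
    import numpy as np

    def sample_photoz(z_grid, pdf, n_draws=1, rng=None):
        """Draw redshift point estimates from a per-galaxy photo-z PDF by
        inverse-CDF (Monte Carlo) sampling."""
        rng = rng or np.random.default_rng()
        cdf = np.cumsum(pdf)
        cdf /= cdf[-1]                      # normalize to [0, 1]
        u = rng.uniform(size=n_draws)
        return np.interp(u, cdf, z_grid)    # invert the CDF by interpolation

    # Toy usage: a bimodal photo-z PDF on a coarse grid.
    z = np.linspace(0.0, 2.0, 201)
    pdf = (np.exp(-0.5 * ((z - 0.5) / 0.05) ** 2)
           + 0.4 * np.exp(-0.5 * ((z - 1.2) / 0.1) ** 2))
    print(sample_photoz(z, pdf, n_draws=5))
    ```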

  5. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-04-01

    We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained with both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We assess the quality of our results by using a mock catalogue split into samples by cuts in the r band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm give mildly better results at deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we analyze the impact of using photometric redshifts obtained with a non-representative training set on the detection of galaxy clusters via the density of galaxies in a field.

  6. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues must be carefully addressed, including the false discovery rate for multiple statistical tests and the widely varying read counts and dispersions across genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference: read counts and their dispersions are estimated from the reference's distribution, and that information is used to estimate and summarize power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
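
    The abstract does not give the package's internals, so the sketch below illustrates only the general principle of simulation-based power estimation for negative-binomial counts; it is not the RnaSeqSampleSize algorithm or API, and the t-test on log counts is a deliberately crude stand-in for the exact negative-binomial tests and FDR control the package uses:

    ```python
    import numpy as np
    from scipy import stats

    def nb_power_sim(lambda0, fold_change, dispersion, n_per_group,
                     alpha=0.05, n_sim=2000, rng=None):
        """Crude simulation-based power estimate for one gene: draw
        negative-binomial counts for two groups and count the fraction of
        replicates in which a t-test on log2(count+1) is significant."""
        rng = rng or np.random.default_rng(0)

        def draw(mean, size):
            # numpy's negative_binomial takes (n, p); convert from a
            # (mean, dispersion) parameterization with n = 1/phi.
            n = 1.0 / dispersion
            return rng.negative_binomial(n, n / (n + mean), size)

        hits = 0
        for _ in range(n_sim):
            a = np.log2(draw(lambda0, n_per_group) + 1.0)
            b = np.log2(draw(lambda0 * fold_change, n_per_group) + 1.0)
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
        return hits / n_sim

    print(nb_power_sim(lambda0=5, fold_change=2, dispersion=0.5, n_per_group=10))
    ```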

  7. THE DISTRIBUTION OF COOK’S D STATISTIC

    PubMed Central

    Muller, Keith E.; Mok, Mario Chen

    2013-01-01

    Cook (1977) proposed a diagnostic to quantify the impact of deleting an observation on the estimated regression coefficients of a General Linear Univariate Model (GLUM). Simulations of models with Gaussian response and predictors demonstrate that his suggestion of comparing the diagnostic to the median of the F for overall regression captures an erratically varying proportion of the values. We describe the exact distribution of Cook’s statistic for a GLUM with Gaussian predictors and response. We also present computational forms, simple approximations, and asymptotic results. A simulation supports the accuracy of the results. The methods allow accurate evaluation of a single value or the maximum value from a regression analysis. The approximations work well for a single value, but less well for the maximum. In contrast, the cut-point suggested by Cook provides widely varying tail probabilities. As with all diagnostics, the data analyst must use scientific judgment in deciding how to treat highlighted observations. PMID:24363487
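
    For context, Cook's statistic for observation i in a general linear univariate model can be computed from the residuals and the hat-matrix leverages. A standard formula in a short sketch (the data here are a toy example, not from the paper):

    ```python
    import numpy as np

    def cooks_distance(X, y):
        """Cook's D for each observation of a linear model y = X b + e:
        D_i = (e_i^2 / (p * s^2)) * h_ii / (1 - h_ii)^2,
        with h_ii the hat-matrix leverages and p the number of predictors
        (columns of X, including the intercept)."""
        n, p = X.shape
        H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
        h = np.diag(H)
        e = y - H @ y                           # residuals
        s2 = (e @ e) / (n - p)                  # residual variance estimate
        return (e ** 2 / (p * s2)) * h / (1 - h) ** 2

    # Toy usage with an intercept column (hypothetical data).
    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(30), rng.normal(size=30)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(size=30)
    print(cooks_distance(X, y).max())
    ```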

  8. Single- and Dual-Process Models of Biased Contingency Detection.

    PubMed

    Vadillo, Miguel A; Blanco, Fernando; Yarritu, Ion; Matute, Helena

    2016-01-01

    Decades of research in causal and contingency learning show that people's estimations of the degree of contingency between two events are easily biased by the relative probabilities of those two events. If two events co-occur frequently, then people tend to overestimate the strength of the contingency between them. Traditionally, these biases have been explained in terms of relatively simple single-process models of learning and reasoning. However, more recently some authors have found that these biases do not appear in all dependent variables and have proposed dual-process models to explain these dissociations between variables. In the present paper we review the evidence for dissociations supporting dual-process models and we point out important shortcomings of this literature. Some dissociations seem to be difficult to replicate or poorly generalizable and others can be attributed to methodological artifacts. Overall, we conclude that support for dual-process models of biased contingency detection is scarce and inconclusive.

  9. In Vitro Evaluation and Mechanism Analysis of the Fiber Shedding Property of Textile Pile Debridement Materials

    PubMed Central

    Fu, Yijun; Xie, Qixue; Lao, Jihong; Wang, Lu

    2016-01-01

    Fiber shedding is a critical problem in biomedical textile debridement materials; shed fibers can cause infection and impair wound healing. In this work, a single-fiber pull-out test is proposed as an in vitro evaluation of the fiber shedding property of a textile pile debridement material. Samples with different structural designs (pile densities, numbers of ground yarns and coating times) were prepared and evaluated with this testing method. The results show that the single-fiber pull-out test offers an appropriate in vitro evaluation of the fiber shedding property of textile pile debridement materials. The pull-out force for samples without back-coating increased slightly with pile density and the number of ground yarn plies, while the back-coating process significantly raised the single-fiber pull-out force. For the fiber shedding mechanism analysis, the typical pull-out behavior and failure modes of the single-fiber pull-out test were analyzed in detail. Three failure modes were found in this study, i.e., fiber slippage, coating point rupture and fiber breakage. In summary, to obtain samples with a desirable fiber shedding property, fabric structural design, the preparation process and raw material selection should all be taken into full consideration. PMID:28773428

  10. A D-Estimator for Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Hedges, Larry; Pustejovsky, James; Rindskopf, David

    2012-01-01

    Over the last 10 years, numerous authors have proposed effect size estimators for single-case designs. None, however, has been shown to be equivalent to the usual between-groups standardized mean difference statistic, sometimes called d. The present paper remedies that omission. Most effect size estimators for single-case designs use the…

  11. Real-time volcano monitoring using GNSS single-frequency receivers

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Woo; Yun, Sung-Hyo; Kim, Do Hyeong; Lee, Dukkee; Lee, Young J.; Schutz, Bob E.

    2015-12-01

    We present a real-time volcano monitoring strategy that uses the Global Navigation Satellite System (GNSS), and we examine the performance of the strategy by processing simulated and real data and comparing the results with published solutions. The cost of implementing the strategy is reduced greatly by using single-frequency GNSS receivers except for one dual-frequency receiver that serves as a base receiver. Positions of the single-frequency receivers are computed relative to the base receiver on an epoch-by-epoch basis using the high-rate double-difference (DD) GNSS technique, while the position of the base station is fixed to the values obtained with a deferred-time precise point positioning technique and updated on a regular basis. Since the performance of the single-frequency high-rate DD technique depends on the conditions of the ionosphere over the monitoring area, the ionospheric total electron content is monitored using the dual-frequency data from the base receiver. The surface deformation obtained with the high-rate DD technique is eventually processed by a real-time inversion filter based on the Mogi point source model. The performance of the real-time volcano monitoring strategy is assessed through a set of tests and case studies, in which the data recorded during the 2007 eruption of Kilauea and the 2005 eruption of Augustine are processed in a simulated real-time mode. The case studies show that the displacement time series obtained with the strategy seem to agree with those obtained with deferred-time, dual-frequency approaches at the level of 10-15 mm. Differences in the estimated volume change of the Mogi source between the real-time inversion filter and previously reported works were in the range of 11 to 13% of the maximum volume changes of the cases examined.
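
    The Mogi point-source model used in the inversion filter has a closed-form surface response. A sketch of the forward model only (the inversion step and all parameter values are not shown, and the east/north decomposition is a standard convention, not taken from the paper):

    ```python
    import numpy as np

    def mogi_displacement(x, y, depth, dvol, x0=0.0, y0=0.0, nu=0.25):
        """Surface displacement of a Mogi point source in an elastic
        half-space. For a volume change dV at depth d, with r the radial
        distance from the source axis and R^2 = r^2 + d^2:
        u_r = (1 - nu) * dV * r / (pi * R^3),
        u_z = (1 - nu) * dV * d / (pi * R^3)."""
        dx, dy = x - x0, y - y0
        r = np.hypot(dx, dy)
        R3 = (r ** 2 + depth ** 2) ** 1.5
        c = (1.0 - nu) * dvol / np.pi
        ur = c * r / R3
        uz = c * depth / R3
        # Resolve the radial component into east/north components.
        with np.errstate(invalid="ignore", divide="ignore"):
            ux = np.where(r > 0, ur * dx / r, 0.0)
            uy = np.where(r > 0, ur * dy / r, 0.0)
        return ux, uy, uz
    ```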

  12. Combined EEG/MEG Can Outperform Single Modality EEG or MEG Source Reconstruction in Presurgical Epilepsy Diagnosis

    PubMed Central

    Aydin, Ümit; Vorwerk, Johannes; Dümpelmann, Matthias; Küpper, Philipp; Kugel, Harald; Heers, Marcel; Wellmer, Jörg; Kellinghaus, Christoph; Haueisen, Jens; Rampp, Stefan; Stefan, Hermann; Wolters, Carsten H.

    2015-01-01

    We investigated two important means for improving source reconstruction in presurgical epilepsy diagnosis. The first investigation is about the optimal choice of the number of epileptic spikes in averaging to (1) sufficiently reduce the noise bias for an accurate determination of the center of gravity of the epileptic activity and (2) still get an estimation of the extent of the irritative zone. The second study focuses on the differences in single modality EEG (80-electrodes) or MEG (275-gradiometers) and especially on the benefits of combined EEG/MEG (EMEG) source analysis. Both investigations were validated with simultaneous stereo-EEG (sEEG) (167-contacts) and low-density EEG (ldEEG) (21-electrodes). To account for the different sensitivity profiles of EEG and MEG, we constructed a six-compartment finite element head model with anisotropic white matter conductivity, and calibrated the skull conductivity via somatosensory evoked responses. Our results show that, unlike single modality EEG or MEG, combined EMEG uses the complementary information of both modalities and thereby allows accurate source reconstructions also at early instants in time (epileptic spike onset), i.e., time points with low SNR, which are not yet subject to propagation and thus supposed to be closer to the origin of the epileptic activity. EMEG is furthermore able to reveal the propagation pathway at later time points in agreement with sEEG, while EEG or MEG alone reconstructed only parts of it. Subaveraging provides important and accurate information about both the center of gravity and the extent of the epileptogenic tissue that neither single nor grand-averaged spike localizations can supply. PMID:25761059

  13. Determining Geometric Parameters of Agricultural Trees from Laser Scanning Data Obtained with Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Hadas, E.; Jozkow, G.; Walicka, A.; Borkowski, A.

    2018-05-01

    The estimation of dendrometric parameters has become an important issue for agricultural planning and the efficient management of orchards. Airborne Laser Scanning (ALS) data are widely used in forestry, and many algorithms have been developed for the automatic estimation of dendrometric parameters of individual forest trees. Unfortunately, due to significant differences between forest and fruit trees, the achievements of forestry science cannot be adopted indiscriminately in agricultural studies. In this study we present a methodology to identify individual trees in an apple orchard and estimate their heights, using high-density LiDAR data (3200 points/m2) obtained with an Unmanned Aerial Vehicle (UAV) equipped with a Velodyne HDL32-E sensor. The processing strategy combines the alpha-shape algorithm, principal component analysis (PCA) and detection of local minima. The alpha-shape algorithm is used to separate tree rows. In order to separate trees within a single row, we detect local minima on the canopy profile and slice polygons from the alpha-shape results. We successfully separated 92% of the trees in the test area; 6% of the trees in the orchard were not separated from each other, and 2% were sliced into two polygons. The RMSE of tree heights determined from the point clouds compared to field measurements was 0.09 m, and the correlation coefficient was 0.96. The results confirm the usefulness of LiDAR data from a UAV platform in orchard inventories.
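
    The within-row separation step described above reduces to finding local minima of a canopy-height profile. A sketch of that step; the tuning thresholds are assumptions, as the paper does not give them:

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def split_row_at_minima(profile_x, profile_z, min_gap=0.5, prominence=0.2):
        """Locate cut positions between neighboring crowns in one tree row:
        local minima of the canopy-height profile are found as peaks of the
        negated profile. `min_gap` (m) and `prominence` (m) are hypothetical
        tuning parameters."""
        spacing = np.median(np.diff(profile_x))
        idx, _ = find_peaks(-np.asarray(profile_z),
                            distance=max(1, int(min_gap / spacing)),
                            prominence=prominence)
        return np.asarray(profile_x)[idx]  # x-coordinates of separation cuts

    # Toy profile: two crowns with a dip between them.
    x = np.linspace(0, 8, 161)
    z = 2.5 * np.exp(-(x - 2) ** 2) + 2.8 * np.exp(-(x - 6) ** 2)
    print(split_row_at_minima(x, z))   # one cut near x = 4
    ```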

  14. A First Estimate of the X-Ray Binary Frequency as a Function of Star Cluster Mass in a Single Galactic System

    NASA Astrophysics Data System (ADS)

    Clark, D. M.; Eikenberry, S. S.; Brandl, B. R.; Wilson, J. C.; Carson, J. C.; Henderson, C. P.; Hayward, T. L.; Barry, D. J.; Ptak, A. F.; Colbert, E. J. M.

    2008-05-01

    We use the previously identified 15 infrared star cluster counterparts to X-ray point sources in the interacting galaxies NGC 4038/4039 (the Antennae) to study the relationship between total cluster mass and X-ray binary number. This significant population of X-ray/IR associations allows us to perform, for the first time, a statistical study of X-ray point sources and their environments. We define a quantity, η, relating the fraction of X-ray sources per unit mass as a function of cluster mass in the Antennae. We compute cluster mass by fitting spectral evolutionary models to Ks luminosity. Considering that this method depends on cluster age, we use four different age distributions to explore the effects of cluster age on the value of η and find it varies by less than a factor of 4. We find a mean value of η for these different distributions of η = 1.7 × 10⁻⁸ M⊙⁻¹ with ση = 1.2 × 10⁻⁸ M⊙⁻¹. Performing a χ² test, we demonstrate η could exhibit a positive slope, but that it depends on the assumed distribution in cluster ages. While the estimated uncertainties in η are factors of a few, we believe this is the first estimate made of this quantity to "order of magnitude" accuracy. We also compare our findings to theoretical models of open and globular cluster evolution, incorporating the X-ray binary fraction per cluster.

  15. A Novel Objective Method of Estimating the Age of Mandibles from African Elephants (Loxodonta africana Africana)

    PubMed Central

    Stansfield, Fiona J.

    2015-01-01

    The importance of assigning an accurate estimate of age and sex to elephant carcasses found in the wild has increased in recent years with the escalation in levels of poaching throughout Africa. Irregularities identified in current ageing techniques prompted the development of a new method to describe molar progression throughout life. Elephant mandibles (n = 323) were studied and a point near the distal dental alveolus was identified as being most useful in ranking each jaw according to molar progression. These ‘Age Reference Lines’ were then associated with an age scale based on previous studies and Zimbabwean mandibles of known age. The new ranking produced a single age scale that proved useful for both male and female mandibles up to the maximum lifespan age of 70–75 years. Methods to aid in molar identification and the sexing of found jaws were also identified. PMID:25970428

  16. NGSLR's Measurement of the Retro-Reflector Array Response of Various LEO to GNSS Satellites

    NASA Technical Reports Server (NTRS)

    McGarry, Jan; Clarke, Christopher; Degnan, John; Donovan, Howard; Hall, Benjamin; Hovarth, Julie; Zagwodzki, Thomas

    2012-01-01

    "NASA's Next Generation Satellite Laser Ranging System (NGSLR) has successfully demonstrated daylight and nighttime tracking this year to s atellites from LEO to GNSS orbits, using a 7-8 arcsecond beam divergence, a 43% QE Hamamatsu MCP-PMT with single photon detection, a narrow field of view (11 arcseconds), and a 1 mJ per pulse 2kHz repetition rate laser. We have compared the actual return rates we are getting against the theoretical link calculations, using the known system confi guration parameters, an estimate of the sky transmission using locall y measured visibility, and signal processing to extract the signal from the background noise. We can achieve good agreement between theory and measurement in most passes by using an estimated pOinting error. We will s.()w the results of this comparison along with our conclusio ns."

  17. An elevated neutrophil-lymphocyte ratio is associated with adverse outcomes following single time-point paracetamol (acetaminophen) overdose: a time-course analysis.

    PubMed

    Craig, Darren G; Kitto, Laura; Zafar, Sara; Reid, Thomas W D J; Martin, Kirsty G; Davidson, Janice S; Hayes, Peter C; Simpson, Kenneth J

    2014-09-01

    The innate immune system is profoundly dysregulated in paracetamol (acetaminophen)-induced liver injury. The neutrophil-lymphocyte ratio (NLR) is a simple bedside index with prognostic value in a number of inflammatory conditions. We aimed to evaluate the prognostic accuracy of the NLR in patients with significant liver injury following single time-point and staggered paracetamol overdoses. We performed a time-course analysis of 100 single time-point and 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Timed laboratory samples were correlated with time elapsed after overdose or admission, respectively, and the NLR was calculated. A total of 49/100 single time-point patients developed hepatic encephalopathy (HE). Median NLRs were higher at both 72 (P=0.0047) and 96 h after overdose (P=0.0041) in single time-point patients who died or were transplanted. Maximum NLR values by 96 h were associated with increasing HE grade (P=0.0005). An NLR of more than 16.7 during the first 96 h following overdose was independently associated with the development of HE [odds ratio 5.65 (95% confidence interval 1.67-19.13), P=0.005]. Maximum NLR values by 96 h were strongly associated with the requirement for intracranial pressure monitoring (P<0.0001), renal replacement therapy (P=0.0002) and inotropic support (P=0.0005). In contrast, in the staggered overdose cohort, the NLR was not associated with adverse outcomes or death/transplantation either at admission or subsequently. The NLR is a simple test which is strongly associated with adverse outcomes following single time-point, but not staggered, paracetamol overdoses. Future studies should assess the value of incorporating the NLR into existing prognostic and triage indices of single time-point paracetamol overdose.
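
    For concreteness, the index itself is a one-line calculation. The sketch below applies the 16.7 cut-off reported above to hypothetical timed counts; the data and field layout are invented.

    ```python
    # Minimal sketch: compute the neutrophil-lymphocyte ratio (NLR) from
    # timed blood counts and flag the >16.7 threshold the study associated
    # with hepatic encephalopathy. All values are hypothetical.
    def nlr(neutrophils, lymphocytes):
        """NLR from absolute counts (e.g., x10^9 cells/L)."""
        if lymphocytes <= 0:
            raise ValueError("lymphocyte count must be positive")
        return neutrophils / lymphocytes

    samples_96h = [(12.4, 0.6), (8.1, 0.9), (15.0, 0.8)]  # (neut, lymph)
    max_nlr = max(nlr(n, l) for n, l in samples_96h)
    high_risk = max_nlr > 16.7  # threshold from the time-course analysis
    print(f"max NLR by 96 h = {max_nlr:.1f}, high risk: {high_risk}")
    ```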

  18. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
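
    The homogeneous transformation at the core of the modeling process can be sketched in a few lines. The example below is a generic illustration of the standard 4x4 matrix form, not the paper's simulation code, and the pose values are arbitrary.

    ```python
    # Minimal sketch: apply a 4x4 homogeneous transform (rotation R,
    # translation t) to a simulated (N, 3) point cloud.
    import numpy as np

    def homogeneous(R, t):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and 3-vector."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def transform_points(T, pts):
        """Apply T to an (N, 3) point cloud."""
        homog = np.hstack([pts, np.ones((pts.shape[0], 1))])
        return (homog @ T.T)[:, :3]

    theta = np.deg2rad(10.0)                      # rotate 10 deg about z
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    T = homogeneous(Rz, t=np.array([0.5, 0.0, 2.0]))

    cloud = np.random.default_rng(1).uniform(-1, 1, size=(100, 3))
    print(transform_points(T, cloud).shape)       # (100, 3)
    ```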

  19. How Affiliation Disclosure and Control Over User-Generated Comments Affects Consumer Health Knowledge and Behavior: A Randomized Controlled Experiment of Pharmaceutical Direct-to-Consumer Advertising on Social Media.

    PubMed

    DeAndrea, David Christopher; Vendemia, Megan Ashley

    2016-07-19

    More people are seeking health information online than ever before and pharmaceutical companies are increasingly marketing their drugs through social media. The aim was to examine two major concerns related to online direct-to-consumer pharmaceutical advertising: (1) how disclosing an affiliation with a pharmaceutical company affects how people respond to drug information produced by both health organizations and online commenters, and (2) how knowledge that health organizations control the display of user-generated comments affects consumer health knowledge and behavior. We conducted a 2×2×2 between-subjects experiment (N=674). All participants viewed an infographic posted to Facebook by a health organization about a prescription allergy drug. Across conditions, the infographic varied in the degree to which the health organization and commenters appeared to be affiliated with a drug manufacturer, and the display of user-generated comments appeared to be controlled. Affiliation disclosure statements on a health organization's Facebook post increased perceptions of an organization-drug manufacturer connection, which reduced trust in the organization (point estimate -0.45, 95% CI -0.69 to -0.24) and other users who posted comments about the drug (point estimate -0.44, 95% CI -0.68 to -0.22). Furthermore, increased perceptions of an organization-manufacturer connection reduced the likelihood that people would recommend the drug to important others (point estimate -0.35, 95% CI -0.59 to -0.15), and share the drug post with others on Facebook (point estimate -0.37, 95% CI -0.64 to -0.16). An affiliation cue next to the commenters' names increased perceptions that the commenters were affiliated with the drug manufacturer, which reduced trust in the comments (point estimate -0.81, 95% CI -1.04 to -0.59), the organization that made the post (point estimate -0.68, 95% CI -0.90 to -0.49), the likelihood of participants recommending the drug (point estimate -0.61, 95% CI -0.82 to -0.43), and sharing the post with others on Facebook (point estimate -0.63, 95% CI -0.87 to -0.43). Cues indicating that a health organization removed user-generated comments from a post increased perceptions that the drug manufacturer influenced the display of the comments, which negatively affected trust in the comments (point estimate -0.35, 95% CI -0.53 to -0.20), the organization (point estimate -0.31, 95% CI -0.47 to -0.17), the likelihood of recommending the drug (point estimate -0.26, 95% CI -0.41 to -0.14), and the likelihood of sharing the post with others on Facebook (point estimate -0.28, 95% CI -0.45 to -0.15). (All estimates are unstandardized indirect effects and 95% bias-corrected bootstrap confidence intervals.) Concern over pharmaceutical companies hiding their affiliations and strategically controlling user-generated comments is well founded; these practices can greatly affect not only how viewers evaluate drug information online, but also how likely they are to propagate the information throughout their online and offline social networks.
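
    The reported quantities are indirect (mediation) effects with bootstrap intervals. As a rough illustration of how such an estimate is formed, the sketch below bootstraps a simple a-path times b-path indirect effect on simulated data; it uses a percentile interval rather than the bias-corrected interval the study reports, includes no covariates, and all variables are synthetic.

    ```python
    # Sketch of a bootstrap CI for an indirect effect a*b (X -> M -> Y).
    # Synthetic data; percentile (not bias-corrected) interval for brevity.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    X = rng.integers(0, 2, n).astype(float)   # e.g., disclosure condition
    M = 0.6 * X + rng.normal(0, 1, n)         # e.g., perceived connection
    Y = -0.5 * M + rng.normal(0, 1, n)        # e.g., trust in organization

    def slope(x, y):
        """OLS slope of y on x (with intercept)."""
        xc = x - x.mean()
        return (xc * (y - y.mean())).sum() / (xc * xc).sum()

    def indirect(X, M, Y):
        """a-path (M on X) times b-path (Y on M); no covariates for brevity."""
        return slope(X, M) * slope(M, Y)

    boot = np.array([indirect(X[idx], M[idx], Y[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(2000))])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect {indirect(X, M, Y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```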

  20. Not simply more of the same: distinguishing between patient heterogeneity and parameter uncertainty.

    PubMed

    Vemer, Pepijn; Goossens, Lucas M A; Rutten-van Mölken, Maureen P M H

    2014-11-01

    In cost-effectiveness (CE) Markov models, heterogeneity in the patient population is not automatically taken into account. We aimed to compare methods of dealing with heterogeneity on estimates of CE, using a case study in chronic obstructive pulmonary disease (COPD). We first present a probabilistic sensitivity analysis (PSA) in which we sampled only from distributions representing parameter uncertainty. This ignores any heterogeneity. Next, we explored heterogeneity by presenting results for subgroups, using a method that samples parameter uncertainty simultaneously with heterogeneity in a single-loop PSA. Finally, we distinguished parameter uncertainty from heterogeneity in a double-loop PSA by performing a nested simulation within each PSA iteration. Point estimates and uncertainty differed substantially between methods. The incremental CE ratio (ICER) ranged from € 4900 to € 13,800. The single-loop PSA led to a substantially different shape of the CE plane and an overestimation of the uncertainty compared with the other 3 methods. The CE plane for the double-loop PSA showed substantially less uncertainty and a stronger negative correlation between the difference in costs and the difference in effects compared with the other methods. This came at the cost of higher calculation times. Not accounting for heterogeneity, subgroup analysis and the double-loop PSA can be viable options, depending on the decision makers' information needs. The single-loop PSA should not be used in CE research. It disregards the fundamental differences between heterogeneity and sampling uncertainty and overestimates uncertainty as a result. © The Author(s) 2014.
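
    A minimal sketch of the double-loop idea follows: the outer loop draws parameter values, the inner loop draws patient profiles, and only the inner-loop mean contributes a point to the CE plane. The toy model and distributions are placeholders, not the COPD model from the study.

    ```python
    # Sketch of a double-loop PSA: outer loop = parameter uncertainty,
    # inner loop = patient heterogeneity. Model and distributions invented.
    import numpy as np

    rng = np.random.default_rng(0)

    def incrementals(params, profile):
        """Toy model: incremental (cost, effect) for one patient profile."""
        d_cost, d_eff = params
        return d_cost * profile, d_eff * profile

    n_outer, n_inner = 1000, 200
    plane = []
    for _ in range(n_outer):                          # parameter uncertainty
        params = (rng.normal(5000, 500), rng.normal(0.5, 0.05))
        profiles = rng.lognormal(0.0, 0.3, n_inner)   # patient heterogeneity
        dc, de = np.mean([incrementals(params, p) for p in profiles], axis=0)
        plane.append((dc, de))

    dc, de = np.array(plane).T
    print(f"ICER point estimate: {dc.mean() / de.mean():.0f} EUR per unit effect")
    ```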

  1. Effects of LiDAR point density and landscape context on the retrieval of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, K. K.; Chen, G.; McCarter, J. B.; Meentemeyer, R. K.

    2014-12-01

    Light Detection and Ranging (LiDAR), as an alternative to conventional optical remote sensing, is being increasingly used to accurately estimate aboveground forest biomass from the individual tree to the stand level. Recent advancements in LiDAR technology have resulted in higher point densities and better data accuracies, which however pose challenges to the procurement and processing of LiDAR data for large-area assessments. Reducing point density cuts data acquisition costs and overcomes computational challenges for broad-scale forest management. However, how does reduced density affect the accuracy of biomass estimation in an urban environment subject to a high level of anthropogenic disturbance? The main goal of this study is to evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing regions of Charlotte, North Carolina, USA. We used multiple linear regression to establish the statistical relationship between field-measured biomass and predictor variables (PVs) derived from LiDAR point clouds with varying densities. We compared the estimation accuracies between the general Urban Forest models (no discrimination of forest type) and the Forest Type models (evergreen, deciduous, and mixed), and then quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest models (adjusted R²) was fairly consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of the Forest Type biomass models outperformed the Urban Forest models at two representative point densities (100% and 40%). The Urban Forest biomass model with a development density of 125 m radius produced the highest adjusted R² (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, signifying the distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution for regional-scale forest biomass assessment without compromising the accuracy of estimation, which may further be improved using development density.
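
    The density-reduction experiment can be sketched as random thinning followed by refitting the regression. The example below is a simplified stand-in: it uses a single mean-height predictor and synthetic plots, whereas the study used multiple LiDAR-derived predictors and field data.

    ```python
    # Sketch: thin a point cloud to a target fraction, then refit a simple
    # biomass ~ mean-height regression. All data are synthetic.
    import numpy as np

    rng = np.random.default_rng(7)

    def thin(points, fraction):
        """Randomly retain `fraction` of the points (0 < fraction <= 1)."""
        keep = rng.random(points.shape[0]) < fraction
        return points[keep]

    def fit_biomass(heights, biomass):
        """OLS fit biomass ~ mean height; returns (slope, intercept, R2)."""
        A = np.vstack([heights, np.ones_like(heights)]).T
        coef, *_ = np.linalg.lstsq(A, biomass, rcond=None)
        pred = A @ coef
        r2 = 1 - ((biomass - pred) ** 2).sum() / ((biomass - biomass.mean()) ** 2).sum()
        return coef[0], coef[1], r2

    # Synthetic plots: per-plot height point clouds and field biomass.
    plots = [rng.normal(15, 4, size=(5000, 1)) for _ in range(30)]
    biomass = np.array([p.mean() * 12 + rng.normal(0, 10) for p in plots])

    for frac in (1.0, 0.4, 0.01):                 # 100%, 40%, 1% densities
        mh = np.array([thin(p, frac)[:, 0].mean() for p in plots])
        print(f"{frac:.2f}: R2 = {fit_biomass(mh, biomass)[2]:.3f}")
    ```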

  2. An Approach to Speed up Single-Frequency PPP Convergence with Quad-Constellation GNSS and GIM.

    PubMed

    Cai, Changsheng; Gong, Yangzhao; Gao, Yang; Kuang, Cuilin

    2017-06-06

    The single-frequency precise point positioning (PPP) technique has attracted increasing attention due to its high accuracy and low cost. However, a very long convergence time, normally a few hours, is required in order to achieve a positioning accuracy level of a few centimeters. In this study, an approach is proposed to accelerate the single-frequency PPP convergence by combining quad-constellation global navigation satellite system (GNSS) and global ionospheric map (GIM) data. In this proposed approach, the GPS, GLONASS, BeiDou, and Galileo observations are directly used in an uncombined observation model, and as a result the ionospheric and hardware delay (IHD) can be estimated together as a single unknown parameter. The IHD values acquired from the GIM product and the multi-GNSS differential code bias (DCB) product are then utilized as pseudo-observables of the IHD parameter in the observation model. A time-varying weight scheme is also proposed for the pseudo-observables to gradually decrease their contribution to the position solutions during the convergence period. To evaluate the proposed approach, datasets from twelve Multi-GNSS Experiment (MGEX) stations on seven consecutive days are processed and analyzed. The numerical results indicate that single-frequency PPP with quad-constellation GNSS and GIM data is able to reduce the convergence time by 56%, 47%, and 41% in the east, north, and up directions, respectively, compared to GPS-only single-frequency PPP.
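
    The abstract does not give the exact weighting law, so the sketch below should be read only as one plausible form of a time-varying weight: the standard deviation assigned to the GIM pseudo-observable grows with filter time, so its constraint gradually loosens as the float ionospheric estimate converges. The decay law and all constants are assumptions.

    ```python
    # Hedged sketch of a time-varying weight for GIM pseudo-observables:
    # inflate the assigned standard deviation over time so the constraint
    # relaxes. The exponential law and constants are assumptions, not the
    # scheme from the paper.
    import math

    def gim_pseudo_obs_std(t_sec, sigma0=0.3, sigma_max=3.0, tau=600.0):
        """Std dev (m) assigned to the GIM value t_sec after filter start."""
        return min(sigma_max, sigma0 * math.exp(t_sec / tau))

    for t in (0, 300, 600, 1200, 2400):
        print(t, "s ->", round(gim_pseudo_obs_std(t), 2), "m")
    ```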

  3. Experimental Estimating Deflection of a Simple Beam Bridge Model Using Grating Eddy Current Sensors

    PubMed Central

    Lü, Chunfeng; Liu, Weiwen; Zhang, Yongjie; Zhao, Hui

    2012-01-01

    A novel three-point method using a grating eddy current absolute position sensor (GECS) for bridge deflection estimation is proposed in this paper. Real spatial positions of the measuring points along the span axis are directly used as relative reference points of each other rather than using any other auxiliary static reference points for measuring devices in a conventional method. Every three adjacent measuring points are defined as a measuring unit and a straight connecting bar with a GECS fixed on the center section of it links the two endpoints. In each measuring unit, the displacement of the mid-measuring point relative to the connecting bar measured by the GECS is defined as the relative deflection. Absolute deflections of each measuring point can be calculated from the relative deflections of all the measuring units directly without any correcting approaches. Principles of the three-point method and displacement measurement of the GECS are introduced in detail. Both static and dynamic experiments have been carried out on a simple beam bridge model, which demonstrate that the three-point deflection estimation method using the GECS is effective and offers a reliable way for bridge deflection estimation, especially for long-term monitoring. PMID:23112583
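
    Under the extra assumption of equally spaced measuring points, reconstructing absolute deflections from the relative three-point readings reduces to a small linear system: each unit gives r_i = y_i - (y_{i-1} + y_{i+1})/2, and with the support deflections known, the interior y_i follow from a tridiagonal solve. The sketch below is our own reconstruction, not the authors' algorithm.

    ```python
    # Sketch: recover absolute deflections y_i from relative unit readings
    # r_i = y_i - (y_{i-1} + y_{i+1})/2, assuming equally spaced points and
    # known deflections at both supports.
    import numpy as np

    def absolute_deflections(relative, y_left=0.0, y_right=0.0):
        """Solve for interior deflections given per-unit GECS-style readings."""
        m = len(relative)                  # number of measuring units
        A = np.zeros((m, m))
        b = np.array(relative, dtype=float)
        for i in range(m):
            A[i, i] = 1.0
            if i > 0:
                A[i, i - 1] = -0.5
            if i < m - 1:
                A[i, i + 1] = -0.5
        b[0] += 0.5 * y_left               # move known boundary terms to RHS
        b[-1] += 0.5 * y_right
        return np.linalg.solve(A, b)

    # Check against a known parabolic deflection curve sampled at 7 points.
    x = np.linspace(0, 1, 7)
    y_true = -4 * x * (1 - x)             # zero at both supports
    r = y_true[1:-1] - 0.5 * (y_true[:-2] + y_true[2:])
    print(np.allclose(absolute_deflections(r), y_true[1:-1]))  # True
    ```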

  4. Experimental estimating deflection of a simple beam bridge model using grating eddy current sensors.

    PubMed

    Lü, Chunfeng; Liu, Weiwen; Zhang, Yongjie; Zhao, Hui

    2012-01-01

    A novel three-point method using a grating eddy current absolute position sensor (GECS) for bridge deflection estimation is proposed in this paper. Real spatial positions of the measuring points along the span axis are directly used as relative reference points of each other rather than using any other auxiliary static reference points for measuring devices in a conventional method. Every three adjacent measuring points are defined as a measuring unit and a straight connecting bar with a GECS fixed on the center section of it links the two endpoints. In each measuring unit, the displacement of the mid-measuring point relative to the connecting bar measured by the GECS is defined as the relative deflection. Absolute deflections of each measuring point can be calculated from the relative deflections of all the measuring units directly without any correcting approaches. Principles of the three-point method and displacement measurement of the GECS are introduced in detail. Both static and dynamic experiments have been carried out on a simple beam bridge model, which demonstrate that the three-point deflection estimation method using the GECS is effective and offers a reliable way for bridge deflection estimation, especially for long-term monitoring.

  5. Multi-beam and single-chip LIDAR with discrete beam steering by digital micromirror device

    NASA Astrophysics Data System (ADS)

    Rodriguez, Joshua; Smith, Braden; Hellman, Brandon; Gin, Adley; Espinoza, Alonzo; Takashima, Yuzuru

    2018-02-01

    Novel Digital Micromirror Device (DMD)-based beam steering enables a single-chip Light Detection and Ranging (LIDAR) system with discrete scanning points. We present an increase in the number of scanning points achieved by using multiple laser diodes in a multi-beam, single-chip DMD-based LIDAR.

  6. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    PubMed Central

    Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej

    2014-01-01

    High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements, defined at the antenna, which is typically placed behind the skier's neck. A key issue is how to estimate other, more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and ski trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492

  7. Developing stochastic epidemiological models to quantify the dynamics of infectious diseases in domestic livestock.

    PubMed

    MacKenzie, K; Bishop, S C

    2001-08-01

    A stochastic model describing disease transmission dynamics for a microparasitic infection in a structured domestic animal population is developed and applied to hypothetical epidemics on a pig farm. Rational decision making regarding appropriate control strategies for infectious diseases in domestic livestock requires an understanding of the disease dynamics and risk profiles for different groups of animals. This is best achieved by means of stochastic epidemic models. Methodologies are presented for 1) estimating the probability of an epidemic, given the presence of an infected animal, whether this epidemic is major (requires intervention) or minor (dies out without intervention), and how the location of the infected animal on the farm influences the epidemic probabilities; 2) estimating the basic reproductive ratio, R0 (i.e., the expected number of secondary cases on the introduction of a single infected animal) and the variability of the estimate of this parameter; and 3) estimating the total proportion of animals infected during an epidemic and the total proportion infected at any point in time. The model can be used for assessing impact of altering farm structure on disease dynamics, as well as disease control strategies, including altering farm structure, vaccination, culling, and genetic selection.
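
    A bare-bones version of methodology 1) can be sketched with a discrete-time stochastic SIR chain: seed one infected animal, simulate many epidemics, and report the fraction that exceed a "major" threshold. The rates, herd size, and threshold below are invented, and the farm structure of the real model is omitted.

    ```python
    # Sketch: estimate P(major epidemic) from repeated discrete-time SIR
    # simulations seeded with one infected animal. All rates are invented.
    import random

    def epidemic_size(n=500, beta=0.004, gamma=0.5, rng=random.Random(1)):
        """Final number infected, starting from a single case."""
        s, i, total = n - 1, 1, 1
        while i > 0:
            # Each susceptible escapes all i infecteds with prob (1-beta)^i.
            new = sum(rng.random() < 1 - (1 - beta) ** i for _ in range(s))
            recov = sum(rng.random() < gamma for _ in range(i))
            s, i, total = s - new, i + new - recov, total + new
        return total

    runs = [epidemic_size() for _ in range(500)]
    major = sum(size > 50 for size in runs) / len(runs)   # crude threshold
    print(f"P(major epidemic) ~ {major:.2f}; mean final size {sum(runs)/500:.0f}")
    ```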

  8. DeepVel: Deep learning for the estimation of horizontal velocities at the solar surface

    NASA Astrophysics Data System (ADS)

    Asensio Ramos, A.; Requerey, I. S.; Vitas, N.

    2017-07-01

    Many phenomena taking place in the solar photosphere are controlled by plasma motions. Although the line-of-sight component of the velocity can be estimated using the Doppler effect, we do not have direct spectroscopic access to the components that are perpendicular to the line of sight. These components are typically estimated using methods based on local correlation tracking. We have designed DeepVel, an end-to-end deep neural network that produces an estimation of the velocity at every single pixel, every time step, and at three different heights in the atmosphere from just two consecutive continuum images. We confront DeepVel with local correlation tracking, pointing out that they give very similar results in the time and spatially averaged cases. We use the network to study the evolution in height of the horizontal velocity field in fragmenting granules, supporting the buoyancy-braking mechanism for the formation of integranular lanes in these granules. We also show that DeepVel can capture very small vortices, so that we can potentially expand the scaling cascade of vortices to very small sizes and durations. The movie attached to Fig. 3 is available at http://www.aanda.org

  9. On the behavior of the leading eigenvalue of Eigen's evolutionary matrices.

    PubMed

    Semenov, Yuri S; Bratus, Alexander S; Novozhilov, Artem S

    2014-12-01

    We study general properties of the leading eigenvalue w̄(q) of Eigen's evolutionary matrices depending on the replication fidelity q. This is a linear algebra problem that has various applications in theoretical biology, including such diverse fields as the origin of life, evolution of cancer progression, and virus evolution. We present the exact expressions for w̄(q), w̄′(q), w̄″(q) for q = 0, 0.5, 1 and prove that the absolute minimum of w̄(q), which always exists, belongs to the interval (0, 0.5]. For the specific case of a single-peaked landscape we also find lower and upper bounds on w̄(q), which are used to estimate the critical mutation rate, after which the distribution of the types of individuals in the population becomes almost uniform. This estimate is used as a starting point to conjecture another estimate, valid for any fitness landscape, and which is checked by numerical calculations. The last estimate stresses the fact that the inverse dependence of the critical mutation rate on the sequence length is not a generally valid fact. Copyright © 2014 Elsevier Inc. All rights reserved.
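
    For small sequence lengths, w̄(q) can be computed by brute force. The sketch below builds the full Eigen matrix for binary sequences on a single-peaked landscape and evaluates the leading eigenvalue at a few fidelities; the fitness values are illustrative.

    ```python
    # Sketch: leading eigenvalue of an Eigen evolutionary matrix
    # Q(q) @ diag(fitness) on a single-peaked binary landscape, length L.
    import numpy as np
    from itertools import product

    L = 8
    seqs = list(product((0, 1), repeat=L))
    fitness = np.ones(len(seqs))
    fitness[0] = 10.0                      # single peak at the all-zero sequence
    # Pairwise Hamming distances between all sequences.
    hd = np.array([[sum(a != b for a, b in zip(s, t)) for t in seqs] for s in seqs])

    def leading_eigenvalue(q):
        """Largest eigenvalue of Q(q) @ diag(fitness); q = per-site fidelity."""
        Q = q ** (L - hd) * (1 - q) ** hd  # mutation matrix
        return np.linalg.eigvals(Q * fitness).real.max()

    for q in (0.0, 0.5, 0.8, 1.0):
        print(f"q = {q}: leading eigenvalue = {leading_eigenvalue(q):.3f}")
    ```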

  10. Occupancy Estimation and Modeling : Inferring Patterns and Dynamics of Species Occurrence

    USGS Publications Warehouse

    MacKenzie, D.I.; Nichols, J.D.; Royle, J. Andrew; Pollock, K.H.; Bailey, L.L.; Hines, J.E.

    2006-01-01

    This is the first book to examine the latest methods for analyzing presence/absence survey data. Using four classes of models (single-species, single-season; single-species, multiple-season; multiple-species, single-season; and multiple-species, multiple-season), the authors discuss the practical sampling situation, present a likelihood-based model enabling direct estimation of the occupancy-related parameters while allowing for imperfect detectability, and make recommendations for designing studies using these models. It provides authoritative insights into the latest in estimation modeling; discusses multiple models which lay the groundwork for future study designs; addresses the critical issue of imperfect detectability and its effects on estimation; and explores in detail the role of probability in estimation.

  11. A Service Life Analysis of Coast Guard C-130 Aircraft

    DTIC Science & Technology

    2003-03-01

    the drawing tool bar. If you feel more comfortable with estimating at points, you could also use AutoShapes to identify your estimate at a particular year. If you prefer to

  12. The pharmacokinetics of cytarabine administered subcutaneously, combined with prednisone, in dogs with meningoencephalomyelitis of unknown etiology.

    PubMed

    Pastina, B; Early, P J; Bergman, R L; Nettifee, J; Maller, A; Bray, K Y; Waldron, R J; Castel, A M; Munana, K R; Papich, M G; Messenger, K M

    2018-05-15

    The objective of this study was to describe the pharmacokinetics (PK) of cytarabine (CA) after subcutaneous (SC) administration to dogs with meningoencephalomyelitis of unknown etiology (MUE). Twelve dogs received a single SC dose of CA at 50 mg/m² as part of treatment of MUE. A sparse sampling technique was used to collect four blood samples from each dog from 0 to 360 min after administration. All dogs were concurrently receiving prednisone (0.5–2 mg kg⁻¹ day⁻¹). Plasma CA concentrations were measured by HPLC, and pharmacokinetic parameters were estimated using nonlinear mixed-effects modeling (NLME). Plasma drug concentrations ranged from 0.05 to 2.8 μg/ml. The population estimates (CV%) for the elimination half-life and Tmax of cytarabine in dogs were 1.09 (21.93) hr and 0.55 (51.03) hr, respectively. The volume of distribution per fraction absorbed was 976.31 (10.85%) ml/kg. The mean plasma concentration of CA for all dogs was above 1.0 μg/ml at the 30-, 60-, 90-, and 120-min time points. In this study, the pharmacokinetics of CA in dogs with MUE after a single 50 mg/m² SC injection were similar to what has been previously reported in healthy beagles; there was moderate variability in the population estimates in this clinical population of dogs. © 2018 John Wiley & Sons Ltd.

  13. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.

  14. Magnetism of Minor Bodies in the Solar System: From 433 Eros, passing Braille, Steins, and Lutetia towards Churyumov-Gerasimenko and 1999 JU3.

    NASA Astrophysics Data System (ADS)

    Hercik, David; Auster, Hans-Ulrich; Heinisch, Philip; Richter, Ingo; Glassmeier, Karl-Heinz

    2015-04-01

    Minor bodies in the solar system, such as asteroids and comets, are important sources of information for our knowledge of solar system formation. Among other aspects, estimating the magnetization state of such bodies might prove important for understanding the early aggregation phases of the protoplanetary disk, showing how important magnetic forces were in the processes involved. Magnetization measurements of meteorites suggest that primitive bodies consist of magnetized material. However, space observations from various flybys have to date given diverse results for global magnetization estimates. The flybys of Braille and Gaspra indicate possibly higher magnetization (~10⁻³ Am²/kg), while the flybys of Steins and Lutetia show no significant global field change, indicating low global magnetization. Furthermore, the interpretation of remote (flyby) measurements is very difficult. For correct estimates of the local magnetization one needs, in the best case, multi-point surface measurements. A single-point observation was made by NEAR-Shoemaker at the asteroid 433 Eros, revealing no signature in the magnetic field that could originate in asteroid magnetization. Similar results, with no magnetization observed, have been provided by evaluation of recent data from the ROMAP (Philae lander) and RPC-MAG (Rosetta orbiter) instruments at comet 67P/Churyumov-Gerasimenko. The ROMAP instrument provided measurements at multiple points on the cometary surface, as well as data along the ballistic path between touchdowns, which support the conclusion of no global magnetization. However, even for in-situ surface observations the magnetization estimate has a limited spatial resolution that depends on the distance from the surface (~50 cm in the case of ROMAP). To obtain information about the possible distribution of smaller magnetized grains and their magnetization strength, the sensor must be placed as close as possible to the surface. For such observations the next ideal candidate mission is Hayabusa-II with its Mascot lander, which is equipped with a fluxgate magnetometer. The small lander will deliver the magnetometer to within centimeters of the surface, providing measurements at multiple points thanks to its hopping ability. The mission was launched in December 2014 and is aiming for the C-type asteroid 1999 JU3, which it will reach in 2018. The results will hopefully add another piece of information to the still-open question of the magnetization of minor solar system bodies.

  15. Use of results of microbiological analyses for risk-based control of Listeria monocytogenes in marinated broiler legs.

    PubMed

    Aarnisalo, Kaarina; Vihavainen, Elina; Rantala, Leila; Maijala, Riitta; Suihko, Maija-Liisa; Hielm, Sebastian; Tuominen, Pirkko; Ranta, Jukka; Raaska, Laura

    2008-02-10

    Microbial risk assessment provides a means of estimating consumer risks associated with food products. The methods can also be applied at the plant level. In this study results of microbiological analyses were used to develop a robust single plant level risk assessment. Furthermore, the prevalence and numbers of Listeria monocytogenes in marinated broiler legs in Finland were estimated. These estimates were based on information on the prevalence, numbers and genotypes of L. monocytogenes in 186 marinated broiler legs from 41 retail stores. The products were from three main Finnish producers, which produce 90% of all marinated broiler legs sold in Finland. The prevalence and numbers of L. monocytogenes were estimated by Monte Carlo simulation using WinBUGS, but the model is applicable to any software featuring standard probability distributions. The estimated mean annual number of L. monocytogenes-positive broiler legs sold in Finland was 7.2 × 10⁶ with a 95% credible interval (CI) of 6.7 × 10⁶–7.7 × 10⁶. That would be 34% ± 1% of the marinated broiler legs sold in Finland. The mean number of L. monocytogenes in marinated broiler legs estimated at the sell-by-date was 2 CFU/g, with a 95% CI of 0-14 CFU/g. Producer-specific L. monocytogenes strains were recovered from the products throughout the year, which emphasizes the importance of characterizing the isolates and identifying strains that may cause problems as part of risk assessment studies. As the levels of L. monocytogenes were low, the risk of acquiring listeriosis from these products proved to be insignificant. Consequently there was no need for a thorough national level risk assessment. However, an approach using worst-case and average point estimates was applied to produce an example of single producer level risk assessment based on limited data. This assessment also indicated that the risk from these products was low. The risk-based approach presented in this work can provide estimation of public health risk on which control measures at the plant level can be based.
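
    The authors note the model runs in any software with standard probability distributions. In that spirit, the sketch below reproduces the flavor of the Monte Carlo step in Python: a beta posterior for prevalence feeds a binomial for the annual number of positive legs. The positive count is chosen only to be consistent with the reported 34% prevalence, and the annual sales figure is a placeholder, not a value from the paper.

    ```python
    # Sketch of the Monte Carlo step: prevalence uncertainty propagated to
    # the annual count of positive legs. Inputs are placeholders.
    import numpy as np

    rng = np.random.default_rng(2024)
    n_iter = 100_000

    k_positive, n_sampled = 63, 186          # ~34% positive, hypothetical
    annual_sales = 21_000_000                # placeholder legs sold per year

    # Beta(k+1, n-k+1) posterior for prevalence under a uniform prior.
    prev = rng.beta(k_positive + 1, n_sampled - k_positive + 1, n_iter)
    positive_legs = rng.binomial(annual_sales, prev)

    lo, hi = np.percentile(positive_legs, [2.5, 97.5])
    print(f"positive legs/year: {positive_legs.mean():.2e} "
          f"(95% CI {lo:.2e}-{hi:.2e})")
    ```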

  16. A self-sensing active magnetic bearing based on a direct current measurement approach.

    PubMed

    Niemann, Andries C; van Schoor, George; du Rand, Carel P

    2013-09-11

    Active magnetic bearings (AMBs) have become a key technology in various industrial applications. Self-sensing AMBs provide an integrated sensorless solution for position estimation, consolidating the sensing and actuating functions into a single electromagnetic transducer. The approach aims to reduce possible hardware failure points, production costs, and system complexity. Despite these advantages, self-sensing methods must address various technical challenges to maximize the performance thereof. This paper presents the direct current measurement (DCM) approach for self-sensing AMBs, denoting the direct measurement of the current ripple component. In AMB systems, switching power amplifiers (PAs) modulate the rotor position information onto the current waveform. Demodulation self-sensing techniques then use bandpass and lowpass filters to estimate the rotor position from the voltage and current signals. However, the additional phase-shift introduced by these filters results in lower stability margins. The DCM approach utilizes a novel PA switching method that directly measures the current ripple to obtain duty-cycle invariant position estimates. Demodulation filters are largely excluded to minimize additional phase-shift in the position estimates. Basic functionality and performance of the proposed self-sensing approach are demonstrated via a transient simulation model as well as a high current (10 A) experimental system. A digital implementation of amplitude modulation self-sensing serves as a comparative estimator.

  17. Volume estimation using food specific shape templates in mobile image-based dietary assessment

    NASA Astrophysics Data System (ADS)

    Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.

    2011-03-01

    As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through food segmentation and classification of the food images, our system chooses a particular template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.

  18. Analysis of data from NASA B-57B gust gradient program

    NASA Technical Reports Server (NTRS)

    Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.

    1985-01-01

    Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982, included calculations of average turbulence parameters, integral length scales, probability density functions, single-point autocorrelation coefficients, two-point autocorrelation coefficients, normalized autospectra, normalized two-point autospectra, and two-point cross spectra for gust velocities. The single-point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions using two-point spatial turbulence correlations.

  19. Optoelectronic holographic otoscope for measurement of nano-displacements in tympanic membranes

    PubMed Central

    Hernández-Montes, Maria del Socorro; Furlong, Cosme; Rosowski, John J.; Hulli, Nesim; Harrington, Ellery; Cheng, Jeffrey Tao; Ravicz, Michael E.; Santoyo, Fernando Mendoza

    2009-01-01

    Current methodologies for characterizing tympanic membrane (TM) motion are usually limited to either average acoustic estimates (admittance or reflectance) or single-point mobility measurements, neither of which suffices to characterize the detailed mechanical response of the TM to sound. Furthermore, while acoustic and single-point measurements may aid in diagnosing some middle-ear disorders, they are not always useful. Measurements of the motion of the entire TM surface can provide more information than these other techniques and may be superior for diagnosing pathology. This paper presents advances in our development of a new compact optoelectronic holographic otoscope (OEHO) system for full-field-of-view characterization of nanometer scale sound-induced displacements of the surface of the TM at video rates. The OEHO system consists of a fiber optic subsystem, a compact otoscope head, and a high-speed image processing computer with advanced software for recording and processing holographic images coupled to a computer-controlled sound-stimulation and recording system. A prototype OEHO system is in use in a medical-research environment to address basic-science questions regarding TM function. The prototype provides real-time observation of sound-induced TM displacement patterns over a broad-frequency range. Representative time-averaged and stroboscopic holographic interferometry results in animals and cadaveric human samples are shown, and their potential utility discussed. PMID:19566316

  20. Optoelectronic holographic otoscope for measurement of nano-displacements in tympanic membranes

    NASA Astrophysics Data System (ADS)

    Del Socorro Hernández-Montes, Maria; Furlong, Cosme; Rosowski, John J.; Hulli, Nesim; Harrington, Ellery; Cheng, Jeffrey Tao; Ravicz, Michael E.; Santoyo, Fernando Mendoza

    2009-05-01

    Current methodologies for characterizing tympanic membrane (TM) motion are usually limited to either average acoustic estimates (admittance or reflectance) or single-point mobility measurements, neither of which suffices to characterize the detailed mechanical response of the TM to sound. Furthermore, while acoustic and single-point measurements may aid in diagnosing some middle-ear disorders, they are not always useful. Measurements of the motion of the entire TM surface can provide more information than these other techniques and may be superior for diagnosing pathology. We present advances in our development of a new compact optoelectronic holographic otoscope (OEHO) system for full field-of-view characterization of nanometer-scale sound-induced displacements of the TM surface at video rates. The OEHO system consists of a fiber optic subsystem, a compact otoscope head, and a high-speed image processing computer with advanced software for recording and processing holographic images coupled to a computer-controlled sound-stimulation and recording system. A prototype OEHO system is in use in a medical research environment to address basic science questions regarding TM function. The prototype provides real-time observation of sound-induced TM displacement patterns over a broad frequency range. Representative time-averaged and stroboscopic holographic interferometry results in animals and human cadaver samples are shown, and their potential utility is discussed.

  1. Adsorption-desorption and hysteresis phenomenon of tebuconazole in Colombian agricultural soils: Experimental assays and mathematical approaches.

    PubMed

    Mosquera-Vivas, Carmen S; Martinez, María J; García-Santos, Glenda; Guerrero-Dallos, Jairo A

    2018-01-01

    The adsorption-desorption, hysteresis phenomenon, and leachability of tebuconazole were studied for Inceptisol and Histosol soils at the surface (0-10 cm) and in the subsurface (40-50 cm) of an agricultural region of Colombia by the batch-equilibrium method and mathematical approaches. The experimental Kfa and Kd (L kg⁻¹) values (7.9-289.2) decreased with depth for the two Inceptisols and increased with depth for the Histosol due to the organic carbon content and the aryl and carbonyl carbon types. Single-point and desorption isotherms depended on adsorption reversibility and suggested that tebuconazole shows hysteresis, which can be adequately evaluated with the single-point desorption isotherm and the linear model using the hysteresis index HI. The most suitable mathematical approach for estimating the adsorption isotherms of tebuconazole at the surface and in the subsurface was the one combining the n-octanol-water partition coefficient, pesticide solubility, and the mass-balance concept. Tebuconazole showed a moderate mobility potential similar to the values reported in other studies conducted in temperate amended and unamended soils, but the risk of the fungicide polluting groundwater sources increased when the pesticide reached subsurface soil layers, particularly in the Inceptisols. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Optimal ciliary beating patterns

    NASA Astrophysics Data System (ADS)

    Vilfan, Andrej; Osterman, Natan

    2011-11-01

    We introduce a measure for energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q²/P, where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ)²/(ρP), with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.

  3. Double versus single stenting for coronary bifurcation lesions: a meta-analysis.

    PubMed

    Katritsis, Demosthenes G; Siontis, George C M; Ioannidis, John P A

    2009-10-01

    Several trials have addressed whether bifurcation lesions require stenting of both the main vessel and the side branch, but uncertainty remains on the benefits of such double stenting versus single stenting of the main vessel only. We conducted a meta-analysis of randomized trials including patients with coronary bifurcation lesions who were randomly assigned to undergo percutaneous coronary intervention by either double or single stenting. Six studies (n=1642 patients) were eligible. There was an increased risk of myocardial infarction with double stenting (risk ratio, 1.78; P=0.001 by fixed effects; risk ratio, 1.49 with Bayesian meta-analysis). The summary point estimate also suggested an increased risk of stent thrombosis with double stenting, but the difference was not nominally significant given the sparse data (risk ratio, 1.85; P=0.19). No obvious difference was seen for death (risk ratio, 0.81; P=0.66) or target lesion revascularization (risk ratio, 1.09; P=0.67). Stenting of both the main vessel and the side branch in bifurcation lesions may increase myocardial infarction and stent thrombosis risk compared with stenting of the main vessel only.
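
    For readers unfamiliar with the fixed-effects pooling quoted above, the sketch below shows the standard inverse-variance combination of per-trial log risk ratios. The six event tables are invented placeholders, not the trial data.

    ```python
    # Sketch: fixed-effect (inverse-variance) pooling of risk ratios.
    # Event tables are invented, not the trial data.
    import math

    # (events_double, n_double, events_single, n_single) per trial
    trials = [(12, 150, 7, 148), (9, 120, 5, 119), (15, 200, 8, 205),
              (6, 90, 4, 92), (11, 160, 6, 158), (8, 100, 4, 102)]

    num = den = 0.0
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2    # variance of log RR
        w = 1 / var                              # inverse-variance weight
        num += w * log_rr
        den += w

    pooled = math.exp(num / den)
    se = math.sqrt(1 / den)
    lo, hi = (math.exp(num / den + z * se) for z in (-1.96, 1.96))
    print(f"pooled RR = {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```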

  4. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of the uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided the coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.

  5. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of the uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided the coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
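
    A minimal sketch of the two-point estimate method, with a Monte Carlo reference, is given below: the model is evaluated at the mean ± one standard deviation of each uncertain input (2^n runs, equal weights for uncorrelated, symmetric inputs). The response function is a stand-in for the groundwater model, and the means and coefficients of variation are invented.

    ```python
    # Sketch of a Rosenblueth-type two-point estimate versus Monte Carlo.
    # The head() function is a placeholder, not the groundwater model.
    import itertools
    import numpy as np

    def head(K, S):
        """Placeholder water-table response to conductivity K and storage S."""
        return 100.0 / (K * (1.0 + 0.5 * S))

    means = {"K": 2.0, "S": 0.1}
    cvs = {"K": 0.2, "S": 0.2}                    # small CVs, per the paper

    # Two-point estimate: 2^n model runs at mean +/- one std dev.
    vals = []
    for signs in itertools.product((-1, 1), repeat=2):
        K = means["K"] * (1 + signs[0] * cvs["K"])
        S = means["S"] * (1 + signs[1] * cvs["S"])
        vals.append(head(K, S))
    tpe_mean, tpe_std = np.mean(vals), np.std(vals)

    # Monte Carlo reference.
    rng = np.random.default_rng(3)
    Ks = rng.normal(means["K"], means["K"] * cvs["K"], 100_000)
    Ss = rng.normal(means["S"], means["S"] * cvs["S"], 100_000)
    mc = head(Ks, Ss)
    print(f"TPE mean {tpe_mean:.2f}  std {tpe_std:.2f}")
    print(f"MC  mean {mc.mean():.2f}  std {mc.std():.2f}")
    ```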

  6. Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space

    PubMed Central

    Chen, Min; Hashimoto, Koichi

    2017-01-01

    Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system for measuring their motion states. To do this, we build a vision system that detects unknown fast-moving objects within a given space and calculates their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points in images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed from these interest points are then grouped and maintained for detected objects according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189

  7. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter.

    PubMed

    Alatise, Mary B; Hancke, Gerhard P

    2017-09-21

    Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of an inertial sensor with six degrees of freedom (6-DoF), comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision data to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection approach integrating the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms was used to recognize a sample object in several captured images. In contrast to conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data containing outliers. With SURF and RANSAC, improved accuracy is expected because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground truth data using root mean square errors (RMSEs).
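
    As a much-reduced illustration of the fusion step, the sketch below runs a one-dimensional linear Kalman filter that propagates position with IMU-derived velocity and corrects with occasional vision fixes. The actual system is a full EKF over position and orientation; all constants here are illustrative.

    ```python
    # Sketch: 1D Kalman filter fusing IMU-propagated position with sparse
    # vision position fixes. Constants are illustrative.
    import numpy as np

    x, P = 0.0, 1.0            # state (position) and its variance
    q, r = 0.01, 0.25          # process and vision measurement noise variances
    dt = 0.1

    def predict(x, P, v_imu):
        """Propagate with IMU-derived velocity."""
        return x + v_imu * dt, P + q

    def update(x, P, z_vision):
        """Correct with a vision position measurement."""
        K = P / (P + r)                  # Kalman gain
        return x + K * (z_vision - x), (1 - K) * P

    rng = np.random.default_rng(9)
    truth = 0.0
    for step in range(50):
        truth += 0.5 * dt                          # robot moves at 0.5 m/s
        x, P = predict(x, P, 0.5 + rng.normal(0, 0.05))
        if step % 10 == 0:                         # vision fix every 10 steps
            x, P = update(x, P, truth + rng.normal(0, 0.5))
    print(f"estimate {x:.2f} m vs truth {truth:.2f} m")
    ```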

  8. Risk-adjusted outcome measurement in pediatric allogeneic stem cell transplantation.

    PubMed

    Matthes-Martin, Susanne; Pötschger, Ulrike; Bergmann, Kirsten; Frommlet, Florian; Brannath, Werner; Bauer, Peter; Klingebiel, Thomas

    2008-03-01

    The purpose of the study was to define a risk score for 1-year treatment-related mortality (TRM) in children undergoing allogeneic stem cell transplantation as a basis for risk-adjusted outcome assessment. We analyzed 1364 consecutive stem cell transplants performed in 24 German and Austrian centers between 1998 and 2003. Five well-established risk factors were tested by multivariate logistic regression for predictive power: patient age, disease status, donor other than matched sibling donor, T cell depletion (TCD), and preceding stem cell transplantation. The risk score was defined by rounding the parameter estimates of the significant risk factors to the nearest integer. Crossvalidation was performed on the basis of 5 randomly extracted equal-sized parts from the database. Additionally, the score was validated for different disease entities and for single centers. Multivariate analysis revealed a significant correlation of TRM with 3 risk factors: age >10 years, advanced disease, and alternative donor. The parameter estimates were 0.76 for age, 0.73 for disease status, and 0.97 for donor type. Rounding the estimates resulted in a score with 1 point for each risk factor. One-year TRM (overall survival [OS]) were 5% (89%) with a score of 0, 18% (74%) with 1, 28% (54%) with 2, and 53% (27%) with 3 points. Crossvalidation showed stable results with a good correlation between predicted and observed mortality but moderate discrimination. The score seems to be a simple instrument to estimate the expected mortality for each risk group and for each center. Measuring TRM risk-adjusted and the comparison between expected and observed mortality may be an additional tool for outcome assessment in pediatric stem cell transplantation.
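
    The scoring rule itself is simple enough to state in code. The sketch below rounds the quoted coefficient estimates to integer points and looks up the observed 1-year TRM per score from the abstract; the patient-record format is invented.

    ```python
    # Sketch: integer risk score from rounded logistic-regression estimates.
    # Coefficients and TRM rates are quoted from the abstract; the patient
    # record format is hypothetical.
    coeffs = {"age_gt_10": 0.76, "advanced_disease": 0.73,
              "alternative_donor": 0.97}
    points = {k: round(v) for k, v in coeffs.items()}   # each rounds to 1

    def risk_score(patient):
        """Sum of points for the risk factors a patient carries."""
        return sum(points[f] for f, present in patient.items() if present)

    observed_trm = {0: 0.05, 1: 0.18, 2: 0.28, 3: 0.53}  # from the abstract

    patient = {"age_gt_10": True, "advanced_disease": False,
               "alternative_donor": True}
    s = risk_score(patient)
    print(f"score {s}: expected 1-year TRM ~ {observed_trm[s]:.0%}")
    ```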

  9. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter

    PubMed Central

    Hancke, Gerhard P.

    2017-01-01

    Using a single sensor to determine the pose estimation of a device cannot give accurate results. This paper presents a fusion of an inertial sensor of six degrees of freedom (6-DoF) which comprises the 3-axis of an accelerometer and the 3-axis of a gyroscope, and a vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular vision-based object detection algorithm speeded-up robust feature (SURF) and random sample consensus (RANSAC) algorithms were integrated and used to recognize a sample object in several images taken. As against the conventional method that depend on point-tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data which contains outliers. With SURF and RANSAC, improved accuracy is certain; this is because of their ability to find interest points (features) under different viewing conditions using a Hessain matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain an accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified by the ground truth data and root mean square errors (RMSEs). PMID:28934102

  10. Detection of kinetic change points in piece-wise linear single molecule motion

    NASA Astrophysics Data System (ADS)

    Hill, Flynn R.; van Oijen, Antoine M.; Duderstadt, Karl E.

    2018-03-01

    Single-molecule approaches present a powerful way to obtain detailed kinetic information at the molecular level. However, the identification of small rate changes is often hindered by the considerable noise present in such single-molecule kinetic data. We present a general method to detect such kinetic change points in trajectories of motion of processive single molecules having Gaussian noise, with a minimum number of parameters and without the need for an assumed kinetic model beyond piece-wise linearity of motion. Kinetic change points are detected using a likelihood ratio test in which the probability of no change is compared to the probability of a change occurring, given the experimental noise. A predetermined confidence interval minimizes the occurrence of false detections. Applying the method recursively to all sub-regions of a single molecule trajectory ensures that all kinetic change points are located. The algorithm presented allows rigorous and quantitative determination of kinetic change points in noisy single molecule observations without the need for filtering or binning, which reduce temporal resolution and obscure dynamics. The statistical framework for the approach and implementation details are discussed. The detection power of the algorithm is assessed using simulations with both single kinetic changes and multiple kinetic changes that typically arise in observations of single-molecule DNA-replication reactions. Implementations of the algorithm are provided in ImageJ plugin format written in Java and in the Julia language for numeric computing, with accompanying Jupyter Notebooks to allow reproduction of the analysis presented here.
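
    A stripped-down version of the core test is sketched below, under simplifying assumptions that the paper does not make (noise sigma known, one change per pass, and a crude z-like acceptance threshold standing in for the calibrated confidence interval):

    ```python
    import numpy as np

    def _sse_line(t: np.ndarray, y: np.ndarray) -> float:
        """Sum of squared residuals of an ordinary least-squares line fit."""
        A = np.vstack([t, np.ones_like(t)]).T
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
        return float(r @ r)

    def detect_change_point(t, y, sigma: float, z_crit: float = 3.0):
        """Most likely slope change point in a piece-wise linear trace.

        Compares two independent line fits against a single fit at every
        interior split; for Gaussian noise the log likelihood ratio is the
        fit improvement divided by sigma^2. Returns the split index, or
        None if the improvement stays below the threshold.
        """
        n = len(t)
        sse_one = _sse_line(t, y)
        best_i, best_gain = None, 0.0
        for i in range(3, n - 3):                     # >= 3 points per segment
            gain = sse_one - (_sse_line(t[:i], y[:i]) + _sse_line(t[i:], y[i:]))
            if gain > best_gain:
                best_i, best_gain = i, gain
        return best_i if best_gain / sigma**2 > z_crit**2 else None
    ```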

  11. Determination of the carbon, hydrogen and nitrogen contents of alanine and their uncertainties using the certified reference material L-alanine (NMIJ CRM 6011-a).

    PubMed

    Itoh, Nobuyasu; Sato, Ayako; Yamazaki, Taichi; Numata, Masahiko; Takatsu, Akiko

    2013-01-01

    The carbon, hydrogen, and nitrogen (CHN) contents of alanine and their uncertainties were estimated using a CHN analyzer and the certified reference material (CRM) L-alanine. The CHN contents and their uncertainties, as measured using the single-point calibration method, were 40.36 ± 0.20% for C, 7.86 ± 0.13% for H, and 15.66 ± 0.09% for N; the results obtained using the bracket calibration method were also comparable. The method described in this study is reasonable, convenient, and meets the general requirement of having uncertainties ≤ 0.4%.
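
    Single-point calibration of this kind reduces to one response factor taken from a single CRM run. A minimal sketch (hypothetical argument names; assumes a detector response that is linear through the origin, which is what single-point calibration presupposes):

    ```python
    def single_point_content(signal_sample, mass_sample,
                             signal_crm, mass_crm, content_crm):
        """Mass fraction of an element in a sample from one CRM run.

        content_crm is the certified mass fraction in the CRM (e.g. about
        0.40 for C in L-alanine); signals are detector responses and
        masses are the weighed amounts, in consistent units.
        """
        k = content_crm * mass_crm / signal_crm   # element mass per unit signal
        return k * signal_sample / mass_sample    # mass fraction in the sample
    ```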

  12. Model-OA wind turbine generator - Failure modes and effects analysis

    NASA Technical Reports Server (NTRS)

    Klein, William E.; Lali, Vincent R.

    1990-01-01

    The results of a failure modes and effects analysis (FMEA) conducted for wind-turbine generators are presented. The FMEA was performed for the functional modes of each system, subsystem, or component. Single-point failures were eliminated for most of the systems; the blade system was the only exception. The qualitative probability of a blade separating was estimated at level D (remote). Many changes were made to the hardware as a result of this analysis, the most significant being the addition of the safety system. Operational experience and the need to improve machine availability have resulted in subsequent changes to the various systems, which are also reflected in this FMEA.

  13. Striatal dopamine in Parkinson disease: A meta-analysis of imaging studies.

    PubMed

    Kaasinen, Valtteri; Vahlberg, Tero

    2017-12-01

    A meta-analysis of 142 positron emission tomography and single photon emission computed tomography studies that have investigated striatal presynaptic dopamine function in Parkinson disease (PD) was performed. Subregional estimates of striatal dopamine metabolism are presented. The aromatic L-amino-acid decarboxylase (AADC) defect appears to be consistently smaller than the dopamine transporter and vesicular monoamine transporter 2 defects, suggesting upregulation of AADC function in PD. The correlation between disease severity and dopamine loss appears linear, but the majority of longitudinal studies point to a negative exponential progression pattern of dopamine loss in PD. Ann Neurol 2017;82:873-882. © 2017 American Neurological Association.

  14. Model 0A wind turbine generator FMEA

    NASA Technical Reports Server (NTRS)

    Klein, William E.; Lalli, Vincent R.

    1989-01-01

    The results of a failure modes and effects analysis (FMEA) conducted for the wind turbine generators are presented. The FMEA was performed for the functional modes of each system, subsystem, or component. Single-point failures were eliminated for most of the systems; the blade system was the only exception. The qualitative probability of a blade separating was estimated at level D (remote). Many changes were made to the hardware as a result of this analysis, the most significant being the addition of the safety system. Operational experience and the need to improve machine availability have resulted in subsequent changes to the various systems, which are also reflected in this FMEA.

  15. A Comparison of Techniques for Determining Mass Outflow Rates in the Type 2 Quasar Markarian 34

    NASA Astrophysics Data System (ADS)

    Revalski, Mitchell; Crenshaw, D. Michael; Fischer, Travis C.; Kraemer, Steven B.; Schmitt, Henrique R.; Dashtamirova, Dzhuliya; Pope, Crystal L.

    2018-06-01

    We present spatially resolved measurements of the mass outflow rates and energetics for the Narrow Line Region (NLR) outflows in the type 2 quasar Markarian 34. Using data from the Hubble Space Telescope and the Apache Point Observatory, together with Cloudy photoionization models, we calculate the radial mass distribution of ionized gas and map its kinematics. We compare the results of this technique to global techniques that characterize NLR outflows with a single outflow rate and energetics measurement. We find that NLR mass estimates based on emission-line luminosities produce more consistent results than techniques employing filling factors.

  16. Intramolecular H-transfer reactions in Si2Hn (for n = 3-5)

    NASA Astrophysics Data System (ADS)

    Ernst, M. C.; Sax, A. F.; Kalcher, J.

    1993-12-01

    Intramolecular rearrangement reactions for doublet Si2H5 and Si2H3, quartet Si2H3, and singlet Si2H4 have been studied. The aim of the study was to characterize a series of intramolecular H-transfer reactions in silicon hydrides with varying degrees of saturation. The transition states belonging to the reactions presented in this work possess a monobridged Si2H moiety. Structural features of the transition states and relative barrier heights have been examined; the geometry optimizations were performed using CAS-SCF wavefunctions, and the barrier-height estimates were obtained from single-point CI calculations.

  17. A comparison between temporal and subband minimum variance adaptive beamforming

    NASA Astrophysics Data System (ADS)

    Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis

    2014-03-01

    This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions with one transducer element transmitting and all 128 elements receiving each time, provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm and those of PSL are -42 dB and -46 dB for temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low resolution image formed by a single emission, 0.44 × 10^9 calculations per second are required for the temporal approach. The same numbers for the subband approach are 0.62 × 10^9 for the point and 1.33 × 10^9 for the cyst phantom. The comparison demonstrates similar resolution but slightly lower side-lobes and higher contrast for the subband approach at the expense of increased computation time.
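
    Both implementations share the same per-pixel core: the minimum variance (Capon) weight solve against the sample covariance of the receive channels. A minimal sketch with diagonal loading (the loading level is an illustrative assumption, not a value from the paper):

    ```python
    import numpy as np

    def mv_weights(R: np.ndarray, a: np.ndarray, loading: float = 1e-2):
        """Minimum variance apodization weights w = R^-1 a / (a^H R^-1 a).

        R is the (possibly subband) sample covariance of the receive
        channels and a the steering vector for the focal point.
        """
        n = R.shape[0]
        Rl = R + loading * np.trace(R).real / n * np.eye(n)  # stabilize inverse
        Ri_a = np.linalg.solve(Rl, a)
        return Ri_a / (a.conj() @ Ri_a)
    ```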

  18. Inadequacy of internal covariance estimation for super-sample covariance

    NASA Astrophysics Data System (ADS)

    Lacasa, Fabien; Kunz, Martin

    2017-08-01

    We give an analytical interpretation of how subsample-based internal covariance estimators lead to biased estimates of the covariance, due to underestimating the super-sample covariance (SSC). This includes the jackknife and bootstrap methods as estimators for the full survey area, and subsampling as an estimator of the covariance of subsamples. The limitations of the jackknife covariance have been previously presented in the literature because it is effectively a rescaling of the covariance of the subsample area. However, we point out that subsampling is also biased, but for a different reason: the subsamples are not independent, and the corresponding lack of power results in SSC underprediction. We develop the formalism in the case of cluster counts that allows the bias of each covariance estimator to be exactly predicted. We find significant effects for a small-scale area or when a low number of subsamples is used, with auto-redshift biases ranging from 0.4% to 15% for subsampling and from 5% to 75% for jackknife covariance estimates. The cross-redshift covariance is even more affected; biases range from 8% to 25% for subsampling and from 50% to 90% for jackknife. Owing to the redshift evolution of the probe, the covariances cannot be debiased by a simple rescaling factor, and an exact debiasing has the same requirements as the full SSC prediction. These results thus disfavour the use of internal covariance estimators on the data itself or a single simulation, leaving analytical prediction and simulation suites as possible SSC predictors.
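
    For context, the delete-one jackknife covariance being critiqued has the form sketched below; the bias analyzed above arises because survey subsamples are not independent, not from the estimator's algebra:

    ```python
    import numpy as np

    def jackknife_cov(samples: np.ndarray, estimator) -> np.ndarray:
        """Delete-one jackknife covariance of a vector-valued estimator.

        samples has shape (n, ...); estimator maps an observation array
        to a parameter vector (e.g. binned cluster counts).
        """
        n = len(samples)
        thetas = np.array([estimator(np.delete(samples, i, axis=0))
                           for i in range(n)])
        dev = thetas - thetas.mean(axis=0)
        return (n - 1) / n * dev.T @ dev
    ```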

  19. Measuring temporal summation in visual detection with a single-photon source.

    PubMed

    Holmes, Rebecca; Victora, Michelle; Wang, Ranxiao Frances; Kwiat, Paul G

    2017-11-01

    Temporal summation is an important feature of the visual system which combines visual signals that arrive at different times. Previous research estimated complete summation to last for 100 ms for stimuli judged "just detectable." We measured the full range of temporal summation for much weaker stimuli using a new paradigm and a novel light source, developed in the field of quantum optics for generating small numbers of photons with precise timing characteristics and reduced variance in photon number. Dark-adapted participants judged whether a light was presented to the left or right of their fixation in each trial. In Experiment 1, stimuli contained a stream of photons delivered at a constant rate while the duration was systematically varied. Accuracy should increase with duration as long as the later photons can be integrated with the preceding ones into a single signal. The temporal integration window was estimated as the point at which performance no longer improved, and was found to be 650 ms on average. In Experiment 2, the duration of the visual stimuli was kept short (100 ms or <30 ms) while the number of photons was varied to explore the efficiency of summation over the integration window compared to Experiment 1. There was some indication that temporal summation remains efficient over the integration window, although there is variation between individuals. The relatively long integration window measured in this study may be relevant to studies of the absolute visual threshold, i.e., tests of single-photon vision, where "single" photons should be separated by more than the integration window to avoid summation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Magnetic MIMO Signal Processing and Optimization for Wireless Power Transfer

    NASA Astrophysics Data System (ADS)

    Yang, Gang; Moghadam, Mohammad R. Vedady; Zhang, Rui

    2017-06-01

    In magnetic resonant coupling (MRC) enabled multiple-input multiple-output (MIMO) wireless power transfer (WPT) systems, multiple transmitters (TXs) each with one single coil are used to enhance the efficiency of simultaneous power transfer to multiple single-coil receivers (RXs) by constructively combining their induced magnetic fields at the RXs, a technique termed "magnetic beamforming". In this paper, we study the optimal magnetic beamforming design in a multi-user MIMO MRC-WPT system. We introduce the multi-user power region that constitutes all the achievable power tuples for all RXs, subject to the given total power constraint over all TXs as well as their individual peak voltage and current constraints. We characterize each boundary point of the power region by maximizing the sum-power deliverable to all RXs subject to their minimum harvested power constraints. For the special case without the TX peak voltage and current constraints, we derive the optimal TX current allocation for the single-RX setup in closed-form as well as that for the multi-RX setup. In general, the problem is a non-convex quadratically constrained quadratic programming (QCQP), which is difficult to solve. For the case of one single RX, we show that the semidefinite relaxation (SDR) of the problem is tight. For the general case with multiple RXs, based on SDR we obtain two approximate solutions by applying time-sharing and randomization, respectively. Moreover, for practical implementation of magnetic beamforming, we propose a novel signal processing method to estimate the magnetic MIMO channel due to the mutual inductances between TXs and RXs. Numerical results show that our proposed magnetic channel estimation and adaptive beamforming schemes are practically effective, and can significantly improve the power transfer efficiency and multi-user performance trade-off in MIMO MRC-WPT systems.

  1. Active fire detection using a peat fire radiance model

    NASA Astrophysics Data System (ADS)

    Kushida, K.; Honma, T.; Kaku, K.; Fukuda, M.

    2011-12-01

    The fire fractional area and radiances at 4 and 11 μm of active fires in Indonesia were estimated using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images. Based on this fire information, a stochastic fire model was used to evaluate two Moderate Resolution Imaging Spectroradiometer (MODIS) fire detection algorithms: single-image stochastic fire detection and multitemporal stochastic fire detection (Kushida, 2010 - IEEE Geosci. Remote Sens. Lett.). The average fire fractional area per 1 km × 1 km pixel was 1.7%; this value corresponds to 32% of that of Siberian and Mongolian boreal forest fires. The average radiances at 4 and 11 μm of active fires were 7.2 W/(m^2·sr·μm) and 11.1 W/(m^2·sr·μm); these values correspond to 47% and 91% of those of Siberian and Mongolian boreal forest fires, respectively. To keep false alarms below 20 points per 10^6 km^2, omission errors (OE) of 50-60% and about 40% were expected for the Siberian and Mongolian boreal forest fires when detecting with single and multitemporal images, respectively. For Indonesian peat fires, an OE of 80-90% was expected for detection using single images. For peat-fire detection using multitemporal images, an OE of about 40% was expected, provided that the background radiances were estimated from past multitemporal images with a standard deviation of less than 1 K. The analyses indicated that it was difficult to obtain sufficient active-fire information on Indonesian peat fires from single MODIS images for firefighting, and that the use of multitemporal images was important.

  2. Order of accuracy of QUICK and related convection-diffusion schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as it is in single-point.
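
    The nodal-value QUICK interpolation described above fits in one line; a sketch for a uniform grid with flow running from cell C toward cell D, U being the next node upstream (swapping the 1/8 factor for 1/6 gives the SPUDS node-value scheme):

    ```python
    def quick_face(phi_U: float, phi_C: float, phi_D: float) -> float:
        """QUICK convected face value: linear interpolation across the face
        minus 1/8 of the upstream-weighted curvature term."""
        return 0.5 * (phi_C + phi_D) - 0.125 * (phi_U - 2.0 * phi_C + phi_D)
    ```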

  3. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.

  4. Estimating Long Term Surface Soil Moisture in the GCIP Area From Satellite Microwave Observations

    NASA Technical Reports Server (NTRS)

    Owe, Manfred; deJeu, Vrije; VandeGriend, Adriaan A.

    2000-01-01

    Soil moisture is an important component of the water and energy balances of the Earth's surface. Furthermore, it has been identified as a parameter of significant potential for improving the accuracy of large-scale land surface-atmosphere interaction models. However, accurate estimates of surface soil moisture are often difficult to make, especially at large spatial scales. Soil moisture is a highly variable land surface parameter, and while point measurements are usually accurate, they are representative only of the immediate site which was sampled. Simple averaging of point values to obtain spatial means often leads to substantial errors. Since remotely sensed observations are already a spatially averaged or areally integrated value, they are ideally suited for measuring land surface parameters, and as such, are a logical input to regional or larger scale land process models. A nine-year database of surface soil moisture is being developed for the Central United States from satellite microwave observations. This region forms much of the GCIP study area, and contains most of the Mississippi, Rio Grande, and Red River drainages. Daytime and nighttime microwave brightness temperatures were observed at a frequency of 6.6 GHz, by the Scanning Multichannel Microwave Radiometer (SMMR), onboard the Nimbus 7 satellite. The life of the SMMR instrument spanned from Nov. 1978 to Aug. 1987. At 6.6 GHz, the instrument provided a spatial resolution of approximately 150 km, and an orbital frequency over any pixel-sized area of about 2 daytime and 2 nighttime passes per week. Ground measurements of surface soil moisture from various locations throughout the study area are used to calibrate the microwave observations. Because ground measurements are usually only single point values, and since the time of satellite coverage does not always coincide with the ground measurements, the soil moisture data were used to calibrate a regional water balance for the top 1, 5, and 10 cm surface layers in order to interpolate daily surface moisture values. Such a climate-based approach is often more appropriate for estimating large-area spatially averaged soil moisture because meteorological data are generally more spatially representative than isolated point measurements of soil moisture. Vegetation radiative transfer characteristics, such as the canopy transmissivity, were estimated from vegetation indices such as the Normalized Difference Vegetation Index (NDVI) and the 37 GHz Microwave Polarization Difference Index (MPDI). Passive microwave remote sensing presents the greatest potential for providing regular spatially representative estimates of surface soil moisture at global scales. Real time estimates should improve weather and climate modelling efforts, while the development of historical data sets will provide necessary information for simulation and validation of long-term climate and global change studies.

  5. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
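
    At its core, the inverse step described above is a nonlinear least-squares fit of six dipole parameters to the electrode potentials. The sketch below uses the infinite-homogeneous-medium kernel for brevity (a simplification of the bounded spherical medium in the study; the conductivity and starting guess are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def dipole_potentials(q, electrodes, sigma=0.2):
        """Potentials of a current dipole at electrode positions (n, 3).

        q = (x, y, z, px, py, pz); infinite homogeneous conductor model
        V = p.(r - r0) / (4 pi sigma |r - r0|^3).
        """
        r0, p = q[:3], q[3:]
        d = electrodes - r0
        rn = np.linalg.norm(d, axis=1)
        return (d @ p) / (4.0 * np.pi * sigma * rn**3)

    def fit_equivalent_dipole(electrodes, measured, q0):
        """Single equivalent dipole minimizing squared residuals,
        analogous to the chi-squared fit described above."""
        return least_squares(
            lambda q: dipole_potentials(q, electrodes) - measured, q0).x
    ```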

  6. Analyzing latent state-trait and multiple-indicator latent growth curve models as multilevel structural equation models

    PubMed Central

    Geiser, Christian; Bishop, Jacob; Lockhart, Ginger; Shiffman, Saul; Grenard, Jerry L.

    2013-01-01

    Latent state-trait (LST) and latent growth curve (LGC) models are frequently used in the analysis of longitudinal data. Although it is well known that standard single-indicator LGC models can be analyzed within either the structural equation modeling (SEM) or multilevel (ML; hierarchical linear modeling) frameworks, few researchers realize that LST and multivariate LGC models, which use multiple indicators at each time point, can also be specified as ML models. In the present paper, we demonstrate that using the ML-SEM rather than the single-level SEM (SL-SEM) framework to estimate the parameters of these models can be practical when the study involves (1) a large number of time points, (2) individually-varying times of observation, (3) unequally spaced time intervals, and/or (4) incomplete data. Despite the practical advantages of the ML-SEM approach under these circumstances, there are also some limitations that researchers should consider. We present an application to an ecological momentary assessment study (N = 158 youths with an average of 23.49 observations of positive mood per person) using the software Mplus (Muthén and Muthén, 1998–2012) and discuss advantages and disadvantages of using the ML-SEM approach to estimate the parameters of LST and multiple-indicator LGC models. PMID:24416023

  7. Assessment of the transportation route of oversize and excessive loads in relation to the load-bearing capacity of existing bridges

    NASA Astrophysics Data System (ADS)

    Doležel, Jiří; Novák, Drahomír; Petrů, Jan

    2017-09-01

    Transportation routes of oversize and excessive loads are currently planned to ensure the transit of a vehicle through critical points on the road. Critical points include level intersections of roads, bridges, etc. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity level of existing bridges on highways and roads using advanced methods of reliability analysis, based on Monte Carlo-type simulation techniques in combination with nonlinear finite element method (FEM) analysis. The safety index, described in current structural design standards (e.g., ISO and the Eurocodes), is considered the main criterion of the reliability level of existing structures. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate limit state and the serviceability limit state. The structure's design load capacity was estimated by fully probabilistic nonlinear FEM analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with levels estimated by deterministic methods for the critical section of the most heavily loaded girders.
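
    The LHS step itself is compact: each input dimension is cut into equal-probability strata, one draw is made per stratum, and the strata are randomly paired across dimensions. A minimal sketch on the unit hypercube (marginal distributions of the material and load parameters would then be applied through their inverse CDFs):

    ```python
    import numpy as np

    def latin_hypercube(n_samples: int, n_dims: int, seed=None) -> np.ndarray:
        """Latin Hypercube Sample of shape (n_samples, n_dims) in [0, 1)."""
        rng = np.random.default_rng(seed)
        u = rng.random((n_samples, n_dims))            # jitter within strata
        strata = np.array([rng.permutation(n_samples)
                           for _ in range(n_dims)]).T  # pair strata randomly
        return (strata + u) / n_samples
    ```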

  8. Probabilistic Analysis of Earthquake-Led Water Contamination: A Case of Sichuan, China

    NASA Astrophysics Data System (ADS)

    Yang, Yan; Li, Lin; Benjamin Zhan, F.; Zhuang, Yanhua

    2016-06-01

    The objective of this paper is to evaluate seismic-led point source and non-point source water pollution under a seismic hazard of 10% probability of exceedance in 50 years, against the minimum value of the water quality standard in Sichuan, China. The Soil Conservation Service (SCS) curve number method for calculating runoff depth in a single rainfall event, combined with a seismic damage index, was applied to estimate the potential degree of non-point source water pollution. To estimate the potential impact of point source water pollution, a comprehensive evaluation framework is constructed combining the Water Quality Index and Seismic Damage Index methods. The four key findings of this paper are: (1) The water catchment with the highest factory concentration does not have the highest risk of non-point source water contamination induced by a potential earthquake. (2) The water catchments with the highest numbers of cumulative water pollutant types are typically located in the south-western parts of Sichuan, where the main river basins in the region flow through. (3) The most common pollutants in the sample factories studied are COD and NH3-N, which are found in all catchments; the least common is pathogens, found in the W1 catchment, which has the best water quality index rating. (4) Using the water quality index as a standardization parameter, parallel comparisons are made among the 16 water catchments: only catchment W1 reaches level II water quality status (moderately polluted) in the event of earthquake-induced water contamination, while all other areas suffer severe water contamination from multiple pollution sources. The results of the data model are significant for urban planning commissions and businesses in strategically choosing factory locations to minimize potential hazardous impact during an earthquake.
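
    The runoff-depth step reduces to the standard SCS curve number relation; a sketch in SI units (CN is the tabulated curve number for the catchment's land use, soil group, and antecedent moisture):

    ```python
    def scs_runoff_mm(rainfall_mm: float, CN: float) -> float:
        """Event runoff depth Q = (P - 0.2S)^2 / (P + 0.8S) for P > 0.2S.

        S = 25400/CN - 254 is the potential maximum retention (mm);
        runoff is zero until rainfall exceeds the initial abstraction.
        """
        S = 25400.0 / CN - 254.0
        Ia = 0.2 * S
        if rainfall_mm <= Ia:
            return 0.0
        return (rainfall_mm - Ia) ** 2 / (rainfall_mm - Ia + S)
    ```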

  9. Motion Estimation System Utilizing Point Cloud Registration

    NASA Technical Reports Server (NTRS)

    Chen, Qi (Inventor)

    2016-01-01

    A system and method of estimating the motion of a machine is disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in the vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.

  10. Pairing call-response surveys and distance sampling for a mammalian carnivore

    USGS Publications Warehouse

    Hansen, Sara J. K.; Frair, Jacqueline L.; Underwood, Harold B.; Gibbs, James P.

    2015-01-01

    Density estimates accounting for differential animal detectability are difficult to acquire for wide-ranging and elusive species such as mammalian carnivores. Pairing distance sampling with call-response surveys may provide an efficient means of tracking changes in populations of coyotes (Canis latrans), a species of particular interest in the eastern United States. Blind field trials in rural New York State indicated 119-m linear error for triangulated coyote calls, and a 1.8-km distance threshold for call detectability, which was sufficient to estimate a detection function with precision using distance sampling. We conducted statewide road-based surveys with sampling locations spaced ≥6 km apart from June to August 2010. Each detected call (be it a single or group) counted as a single object, representing 1 territorial pair, because of uncertainty in the number of vocalizing animals. From 524 survey points and 75 detections, we estimated the probability of detecting a calling coyote to be 0.17 ± 0.02 SE, yielding a detection-corrected index of 0.75 pairs/10 km2 (95% CI: 0.52–1.1, 18.5% CV) for a minimum of 8,133 pairs across rural New York State. Importantly, we consider this an index rather than true estimate of abundance given the unknown probability of coyote availability for detection during our surveys. Even so, pairing distance sampling with call-response surveys provided a novel, efficient, and noninvasive means of monitoring populations of wide-ranging and elusive, albeit reliably vocal, mammalian carnivores. Our approach offers an effective new means of tracking species like coyotes, one that is readily extendable to other species and geographic extents, provided key assumptions of distance sampling are met.
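
    In outline, the detection-corrected index above is detections divided by (detection probability × surveyed area). The back-of-envelope sketch below assumes each point surveys a circle with radius equal to the 1.8-km detectability threshold; the published 0.75 pairs/10 km^2 came from a fitted detection function, so this simplified version only approximates it:

    ```python
    import math

    def pairs_per_10km2(n_detections: int, n_points: int,
                        p_detect: float, w_km: float = 1.8) -> float:
        """Detection-corrected density index for point call-response surveys."""
        surveyed_km2 = n_points * math.pi * w_km**2   # circle per survey point
        return n_detections / (p_detect * surveyed_km2) * 10.0

    # pairs_per_10km2(75, 524, 0.17) -> ~0.83, the same order as the
    # reported 0.75 pairs per 10 km^2.
    ```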

  11. Estimating multivariate response surface model with data outliers, case study in enhancing surface layer properties of an aircraft aluminium alloy

    NASA Astrophysics Data System (ADS)

    Widodo, Edy; Kariyam

    2017-03-01

    Response surface methodology (RSM) is used to determine the input-variable settings that achieve the optimal compromise among the response variables. There are three primary steps in an RSM problem, namely data collection, modelling, and optimization. This study focuses on the establishment of response surface models, under the assumption that the collected data are correct. Usually the response surface model parameters are estimated by OLS. However, this method is highly sensitive to outliers. Outliers can generate substantial residuals and often distort the estimated models; the resulting estimators can be biased and can lead to errors in determining the optimal point, so that the main purpose of RSM is not achieved. Meanwhile, in real life, the collected data often contain several response variables and a set of independent variables. Treating each response separately and applying single-response procedures can result in wrong interpretations, so a model is needed for the multi-response case: a multivariate response surface model that is resistant to outliers. As an alternative, this study discusses M-estimation as a parameter estimator in multivariate response surface models containing outliers. As an illustration, a case study is presented on experimental results for the enhancement of the surface layer of an aircraft aluminium alloy by shot peening.
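
    M-estimation is typically computed by iteratively reweighted least squares, down-weighting large residuals instead of letting them dominate the fit. A minimal single-response sketch with the Huber weight function (k = 1.345 is the conventional tuning constant; the study's multivariate-response setting generalizes this idea):

    ```python
    import numpy as np

    def huber_irls(X: np.ndarray, y: np.ndarray,
                   k: float = 1.345, n_iter: int = 50, tol: float = 1e-8):
        """Huber M-estimate of regression coefficients via IRLS."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS start
        for _ in range(n_iter):
            r = y - X @ beta
            scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust sigma (MAD)
            u = np.abs(r / scale)
            w = np.where(u <= k, 1.0, k / u)               # Huber weights
            Xw = X * w[:, None]
            beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)
            if np.max(np.abs(beta_new - beta)) < tol:
                break
            beta = beta_new
        return beta
    ```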

  12. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.

  13. SEM Based CARMA Time Series Modeling for Arbitrary N.

    PubMed

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.

  14. Probabilistic Mass Growth Uncertainties

    NASA Technical Reports Server (NTRS)

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as an input parameter for Cost Estimating Relationships (CERs) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, adversely affecting a primary input to most CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of the masses of space instruments and spacecraft, for both Earth-orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters of publishing similar results, using a variety of cost-driving metrics, on an annual basis. Quantitative results show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.

  15. Effect of manmade pixels on the inherent dimension of natural material distributions

    NASA Astrophysics Data System (ADS)

    Schlamm, Ariel; Messinger, David; Basener, William

    2009-05-01

    The inherent dimension of hyperspectral data may be a useful metric for discriminating between the presence of manmade and natural materials in a scene without reliance on spectral signatures taken from libraries. Previously, a simple geometric method for approximating the inherent dimension was introduced along with results from its application to single-material clusters. This method uses an estimate of the slope of a graph based on point density estimation in the spectral space. Other information can be gathered from the plot which may aid in the discrimination between manmade and natural materials. In order to use these measures to differentiate between the two material types, the effect of the inclusion of manmade pixels on the phenomenology of the background distribution must be evaluated. Here, a procedure for injecting manmade pixels into a natural region of a scene is discussed. The results of dimension estimation on natural scenes with varying amounts of manmade pixels injected are presented, indicating that these metrics can be sensitive to the presence of manmade phenomenology in an image.

  16. Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation

    NASA Astrophysics Data System (ADS)

    Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao

    2017-09-01

    Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact, their mapping relationship is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image. It is based on the multiview feature embedding (MFE) and the locality-sensitive autoencoders (LSAEs). On the one hand, we first depict the manifold regularized sparse low-rank approximation for MFE and then the input image is characterized by a fused feature descriptor. On the other hand, both the fused feature and its corresponding 3D pose are separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures a good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of our proposed method.

  17. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting.

    PubMed

    Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F

    2010-07-19

    A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic.

  18. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting

    PubMed Central

    2010-01-01

    Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic. PMID:20642827

  19. On the use of Lineal Energy Measurements to Estimate Linear Energy Transfer Spectra

    NASA Technical Reports Server (NTRS)

    Adams, David A.; Howell, Leonard W., Jr.; Adam, James H., Jr.

    2007-01-01

    This paper examines the error resulting from using a lineal energy spectrum to represent a linear energy transfer spectrum for applications in the space radiation environment. Lineal energy and linear energy transfer spectra are compared in three diverse but typical space radiation environments. Different detector geometries are also studied to determine how they affect the error. LET spectra are typically used to compute dose equivalent for radiation hazard estimation and single event effect rates to estimate radiation effects on electronics. The errors in the estimations of dose equivalent and single event rates that result from substituting lineal energy spectra for linear energy spectra are examined. It is found that this substitution has little effect on dose equivalent estimates in the quiet-time interplanetary environment regardless of detector shape. The substitution has more of an effect when the environment is dominated by solar energetic particles or trapped radiation, but even then the errors are minor, especially if a spherical detector is used. For single event estimation, the effect of the substitution can be large if the threshold for the single event effect is near where the linear energy spectrum drops suddenly. It is judged that single event rate estimates made from lineal energy spectra are unreliable, and the use of lineal energy spectra for single event rate estimation should be avoided.

  20. Improved Estimates of Thermodynamic Parameters

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  1. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by the ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nonetheless exercised regularly volunteered for this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. The proposed computing procedure, implemented in MATLAB for the analysis of EMG signals, also appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Growth, crystalline perfection, spectral, thermal and theoretical studies on imidazolium L-tartrate crystals.

    PubMed

    Meena, K; Muthu, K; Meenatchi, V; Rajasekar, M; Bhagavannarayana, G; Meenakshisundaram, S P

    2014-04-24

    Transparent optical quality single crystals of imidazolium L-tartrate (IMLT) were grown by the conventional slow evaporation solution growth technique. The crystal structure of the as-grown IMLT was determined by single crystal X-ray diffraction analysis. Thermal analysis reveals the purity of the crystal, and the sample is stable up to the melting point. Good transmittance in the visible region is observed, and the band gap energy is estimated from diffuse reflectance data by application of the Kubelka-Munk algorithm. The powder X-ray diffraction study reveals the crystallinity of the as-grown crystal, and the pattern is compared with the experimental one. An additional peak in high resolution X-ray diffraction (HRXRD) indicates the presence of an internal structural low angle boundary. Second harmonic generation (SHG) activity of IMLT is significant, as estimated by the Kurtz and Perry powder technique. HOMO-LUMO energies and the first-order molecular hyperpolarizability of IMLT have been evaluated using density functional theory (DFT) employing the B3LYP functional and the 6-31G(d,p) basis set. The optimized geometry closely resembles the ORTEP. The vibrational patterns present in the molecule are confirmed by FT-IR, coinciding with the theoretical patterns. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Bayesian pedigree inference with small numbers of single nucleotide polymorphisms via a factor-graph representation.

    PubMed

    Anderson, Eric C; Ng, Thomas C

    2016-02-01

    We develop a computational framework for addressing pedigree inference problems using small numbers (80-400) of single nucleotide polymorphisms (SNPs). Our approach relaxes the assumptions, which are commonly made, that sampling is complete with respect to the pedigree and that there is no genotyping error. It relies on representing the inferred pedigree as a factor graph and invoking the Sum-Product algorithm to compute and store quantities that allow the joint probability of the data to be rapidly computed under a large class of rearrangements of the pedigree structure. This allows efficient MCMC sampling over the space of pedigrees, and, hence, Bayesian inference of pedigree structure. In this paper we restrict ourselves to inference of pedigrees without loops using SNPs assumed to be unlinked. We present the methodology in general for multigenerational inference, and we illustrate the method by applying it to the inference of full sibling groups in a large sample (n=1157) of Chinook salmon typed at 95 SNPs. The results show that our method provides a better point estimate and estimate of uncertainty than the currently best-available maximum-likelihood sibling reconstruction method. Extensions of this work to more complex scenarios are briefly discussed. Published by Elsevier Inc.

  4. Can we settle with single-band radiometric temperature monitoring during hyperthermia treatment of chestwall recurrence of breast cancer using a dual-mode transceiving applicator?

    PubMed

    Jacobsen, Svein; Stauffer, Paul R

    2007-02-21

    The total thermal dose that can be delivered during hyperthermia treatments is frequently limited by temperature heterogeneities in the heated tissue volume. Reliable temperature information on the heated area is thus vital for the optimization of clinical dosimetry. Microwave radiometry has been proposed as an accurate, quick and painless temperature sensing technique for biological tissue. Advantages include the ability to sense volume-averaged temperatures from subsurface tissue non-invasively, rather than with a limited set of point measurements typical of implanted temperature probes. We present a procedure to estimate the maximum tissue temperature from a single radiometric brightness temperature which is based on a numerical simulation of 3D tissue temperature distributions induced by microwave heating at 915 MHz. The temperature retrieval scheme is evaluated against errors arising from unknown variations in thermal, electromagnetic and design model parameters. Whereas realistic deviations from base values of dielectric and thermal parameters have only marginal impact on performance, pronounced deviations in estimated maximum tissue temperature are observed for unanticipated variations of the temperature or thickness of the bolus compartment. The need to pay particular attention to these latter applicator construction parameters in future clinical implementation of the thermometric method is emphasized.

  5. Can we settle with single-band radiometric temperature monitoring during hyperthermia treatment of chestwall recurrence of breast cancer using a dual-mode transceiving applicator?

    NASA Astrophysics Data System (ADS)

    Jacobsen, Svein; Stauffer, Paul R.

    2007-02-01

    The total thermal dose that can be delivered during hyperthermia treatments is frequently limited by temperature heterogeneities in the heated tissue volume. Reliable temperature information on the heated area is thus vital for the optimization of clinical dosimetry. Microwave radiometry has been proposed as an accurate, quick and painless temperature sensing technique for biological tissue. Advantages include the ability to sense volume-averaged temperatures from subsurface tissue non-invasively, rather than with a limited set of point measurements typical of implanted temperature probes. We present a procedure to estimate the maximum tissue temperature from a single radiometric brightness temperature which is based on a numerical simulation of 3D tissue temperature distributions induced by microwave heating at 915 MHz. The temperature retrieval scheme is evaluated against errors arising from unknown variations in thermal, electromagnetic and design model parameters. Whereas realistic deviations from base values of dielectric and thermal parameters have only marginal impact on performance, pronounced deviations in estimated maximum tissue temperature are observed for unanticipated variations of the temperature or thickness of the bolus compartment. The need to pay particular attention to these latter applicator construction parameters in future clinical implementation of the thermometric method is emphasized.

  6. How Affiliation Disclosure and Control Over User-Generated Comments Affects Consumer Health Knowledge and Behavior: A Randomized Controlled Experiment of Pharmaceutical Direct-to-Consumer Advertising on Social Media

    PubMed Central

    Vendemia, Megan Ashley

    2016-01-01

    Background More people are seeking health information online than ever before and pharmaceutical companies are increasingly marketing their drugs through social media. Objective The aim was to examine two major concerns related to online direct-to-consumer pharmaceutical advertising: (1) how disclosing an affiliation with a pharmaceutical company affects how people respond to drug information produced by both health organizations and online commenters, and (2) how knowledge that health organizations control the display of user-generated comments affects consumer health knowledge and behavior. Methods We conducted a 2×2×2 between-subjects experiment (N=674). All participants viewed an infographic posted to Facebook by a health organization about a prescription allergy drug. Across conditions, the infographic varied in the degree to which the health organization and commenters appeared to be affiliated with a drug manufacturer, and the display of user-generated comments appeared to be controlled. Results Affiliation disclosure statements on a health organization’s Facebook post increased perceptions of an organization-drug manufacturer connection, which reduced trust in the organization (point estimate –0.45, 95% CI –0.69 to –0.24) and other users who posted comments about the drug (point estimate –0.44, 95% CI –0.68 to –0.22). Furthermore, increased perceptions of an organization-manufacturer connection reduced the likelihood that people would recommend the drug to important others (point estimate –0.35, 95% CI –0.59 to –0.15), and share the drug post with others on Facebook (point estimate –0.37, 95% CI –0.64 to –0.16). An affiliation cue next to the commenters' names increased perceptions that the commenters were affiliated with the drug manufacturer, which reduced trust in the comments (point estimate –0.81, 95% CI –1.04 to –0.59), the organization that made the post (point estimate –0.68, 95% CI –0.90 to –0.49), the likelihood of participants recommending the drug (point estimate –0.61, 95% CI –0.82 to –0.43), and sharing the post with others on Facebook (point estimate –0.63, 95% CI –0.87 to –0.43). Cues indicating that a health organization removed user-generated comments from a post increased perceptions that the drug manufacturer influenced the display of the comments, which negatively affected trust in the comments (point estimate –0.35, 95% CI –0.53 to –0.20), the organization (point estimate –0.31, 95% CI –0.47 to –0.17), the likelihood of recommending the drug (point estimate –0.26, 95% CI –0.41 to –0.14), and the likelihood of sharing the post with others on Facebook (point estimate –0.28, 95% CI –0.45 to –0.15). (All estimates are unstandardized indirect effects and 95% bias-corrected bootstrap confidence intervals.) Conclusions Concern over pharmaceutical companies hiding their affiliations and strategically controlling user-generated comments is well founded; these practices can greatly affect not only how viewers evaluate drug information online, but also how likely they are to propagate the information throughout their online and offline social networks. PMID:27435883
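
    The quantities reported above are unstandardized indirect effects with bias-corrected bootstrap confidence intervals, a standard mediation-analysis construction. Below is a minimal sketch of how such an estimate and interval can be computed; the variable names and toy data are hypothetical, and this is the generic textbook procedure rather than the authors' analysis pipeline.

```python
import numpy as np
from scipy.stats import norm

def indirect_effect(x, m, y):
    # a*b from two OLS fits: m ~ x (slope a) and y ~ x + m (slope b for m).
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    est = indirect_effect(x, m, y)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)     # resample cases with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    # Bias correction: shift the percentiles by z0, the normal quantile of
    # the proportion of bootstrap estimates below the point estimate.
    z0 = norm.ppf((boots < est).mean())
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return est, np.quantile(boots, lo), np.quantile(boots, hi)

# Toy data: disclosure cue -> perceived connection -> trust (all invented).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 200).astype(float)    # experimental condition (0/1)
m = 0.8 * x + rng.normal(size=200)           # perceived connection
y = -0.5 * m + rng.normal(size=200)          # trust in the organization
print(bc_bootstrap_ci(x, m, y))
```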

  7. Predictive value of 3-month lumbar discectomy outcomes in the NeuroPoint-SD Registry.

    PubMed

    Whitmore, Robert G; Curran, Jill N; Ali, Zarina S; Mummaneni, Praveen V; Shaffrey, Christopher I; Heary, Robert F; Kaiser, Michael G; Asher, Anthony L; Malhotra, Neil R; Cheng, Joseph S; Hurlbert, John; Smith, Justin S; Magge, Subu N; Steinmetz, Michael P; Resnick, Daniel K; Ghogawala, Zoher

    2015-10-01

    The authors have established a multicenter registry to assess the efficacy and costs of common lumbar spinal procedures using prospectively collected outcomes. Collection of these data requires an extensive commitment of resources from each site. The aim of this study was to determine whether outcomes data from shorter-interval follow-up could be used to accurately estimate long-term outcome following lumbar discectomy. An observational prospective cohort study was completed at 13 academic and community sites. Patients undergoing single-level lumbar discectomy for treatment of disc herniation were included. SF-36 and Oswestry Disability Index (ODI) data were obtained preoperatively and at 1, 3, 6, and 12 months postoperatively. Quality-adjusted life year (QALY) data were calculated using SF-6D utility scores. Correlations among outcomes at each follow-up time point were tested using the Spearman rank correlation test. One hundred forty-eight patients were enrolled over 1 year. The mean age was 46 years, and 49% were female. Eleven patients (7.4%) required a reoperation by 1 year postoperatively. The overall 1-year follow-up rate was 80.4%. Lumbar discectomy was associated with significant improvements in ODI and SF-36 scores (p < 0.0001) and with a gain of 0.246 QALYs over the 1-year study period. The greatest gain occurred between baseline and 3-month follow-up and was significantly greater than the improvements obtained between 3 and 6 months or between 6 months and 1 year (p < 0.001). Correlations between 3-month, 6-month, and 1-year outcomes were similar, suggesting that 3-month data may be used to accurately estimate 1-year outcomes for patients who do not require a reoperation. Patients who underwent reoperation had worse outcome scores and nonsignificant correlations at all time points. This national spine registry demonstrated successful collection of high-quality outcomes data for spinal procedures in actual practice. Three-month outcome data may be used to accurately estimate outcome at future time points and may lower the costs associated with registry data collection. This registry effort provides a practical foundation for the acquisition of outcome data following lumbar discectomy.
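
    The registry analysis above rests on rank correlations between outcome scores at successive follow-ups. A two-line illustration of the Spearman test with hypothetical ODI scores (the data below are invented, not from the registry):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
odi_3mo = rng.normal(20, 10, 120)              # hypothetical 3-month ODI scores
odi_1yr = odi_3mo + rng.normal(0, 5, 120)      # 1-year scores tracking 3-month
rho, p = spearmanr(odi_3mo, odi_1yr)
print(f"Spearman rho = {rho:.2f}, p = {p:.2g}")
```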

  8. Heat fluxes across the Antarctic Circumpolar Current

    NASA Astrophysics Data System (ADS)

    Ferrari, Ramiro; Provost, Christine; Hyang Park, Young; Sennéchael, Nathalie; Garric, Gilles; Bourdallé-Badie, Romain

    2014-05-01

    Determining the processes responsible for the Southern Ocean heat balance is fundamental to our understanding of the weather and climate systems. In recent decades, various studies have therefore analyzed the major mechanisms of the oceanic poleward heat flux in this region. Previous work held that the cross-stream heat flux due to mesoscale transient eddies was responsible for the total meridional heat transport across the Antarctic Circumpolar Current (ACC). Several recent studies based on numerical modelling and current-meter data have challenged this idea, showing that the heat flux due to the mean flow in the southern part of the ACC could be larger than the eddy heat flux contribution by two orders of magnitude. Distributions of the eddy heat flux and of the heat flux due to the mean flow were examined in Drake Passage using in situ measurements collected during the DRAKE 2006-9 project (from January 2006 to March 2009), available observations from the historical DRAKE 79 experiment, and high-resolution model outputs (ORCA 12, MERCATOR). The Drake Passage estimates provided a limited view of heat transport in the Southern Ocean, and the small spatial scales shown by the model-derived heat flux due to the mean flow indicate that circumpolar extrapolations from a single-point observation are perilous. The importance of the heat flux due to the mean flow should be further investigated using other in situ observations and numerical model outputs. A similar situation, with important implications for the heat flux due to the mean flow, has been observed in other topographically constricted regions with strong flow across prominent submarine ridges (choke points). We have estimated the heat flux due to the mean flow by revisiting other ACC mooring sites where in situ time series are available, e.g., south of Australia (Tasmania) (Phillips and Rintoul, 2000) and southeast of New Zealand (Campbell Plateau) (Bryden and Heath, 1985). Heat fluxes due to the mean flow at those choke points were compared to model outputs and provided new circumpolar estimates indicating that the choke points are potentially the dominant contribution to the heat flux needed to balance heat lost to the atmosphere in the Southern Ocean.

  9. Estimating snow depth in real time using unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Mizinski, Bartlomiej; Witek, Matylda; Spallek, Waldemar; Szymanowski, Mariusz

    2016-04-01

    Within the framework of project no. LIDER/012/223/L-5/13/NCBR/2014, financed by the National Centre for Research and Development of Poland, we developed a fully automated approach for estimating snow depth in the field in real time. The procedure uses oblique aerial photographs taken by an unmanned aerial vehicle (UAV). The geotagged images of snow-covered terrain are processed by the Structure-from-Motion (SfM) method, which is used to produce a non-georeferenced dense point cloud. The workflow includes the enhanced RunSFM procedure (keypoint detection using the scale-invariant feature transform known as SIFT, image matching, bundling using the Bundler, and executing the multi-view stereo PMVS and CMVS2 software), preceded by multicore image resizing. The dense point cloud is subsequently georeferenced automatically using the GRASS software, with ground control points borrowed from the positions of image centres acquired from the UAV-mounted GPS receiver. Finally, the digital surface model (DSM) is produced which, to improve the accuracy of georeferencing, is shifted using a vector obtained through precise geodetic GPS observation of a single ground control point (GCP) placed on the Laboratory for Unmanned Observations of Earth (a mobile lab established at the University of Wroclaw, Poland). The DSM includes snow cover, and its difference from the corresponding snow-free DSM or digital terrain model (DTM), following the concept of the digital elevation model of differences (DOD), produces a map of snow depth. Since the final result depends on the snow-free model, two experiments are carried out. First, we show the performance of the entire procedure when the snow-free model has a very high resolution (3 cm/px) and is produced using the UAV-taken photographs and precise GCPs measured by the geodetic GPS receiver. Second, we perform a similar exercise but with the 1-metre resolution light detection and ranging (LIDAR) DSM or DTM serving as the snow-free model. The main objective of the paper is thus to present the performance of the new procedure for estimating snow depth and to compare the two experiments.
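
    The snow-depth map itself comes from the DEM-of-differences step: subtracting the snow-free surface from the snow-covered DSM. A minimal sketch with toy rasters follows; real use requires co-registered grids, and the clipping of small negative values (georeferencing error) is an illustrative choice.

```python
import numpy as np

def snow_depth(dsm_snow, dsm_snow_free):
    # DoD: snow-covered DSM minus snow-free surface; small negative values
    # caused by georeferencing error are clipped to zero.
    depth = dsm_snow - dsm_snow_free
    return np.where(depth > 0.0, depth, 0.0)

# Toy 3x3 grids in metres; real use would load two co-registered rasters.
dsm_snow = np.array([[101.2, 101.4, 101.3],
                     [101.5, 101.8, 101.6],
                     [101.1, 101.3, 101.2]])
dsm_bare = np.full((3, 3), 100.9)
print(snow_depth(dsm_snow, dsm_bare))
```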

  10. The validity of multiphase DNS initialized on the basis of single-point statistics

    NASA Astrophysics Data System (ADS)

    Subramaniam, Shankar

    1999-11-01

    A study of the point-process statistical representation of a spray reveals that single-point statistical information contained in the droplet distribution function (ddf) is related to a sequence of single surrogate-droplet pdf's, which are in general different from the physical single-droplet pdf's. The results of this study have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the average number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also, the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets.

  11. Spatiotemporal variability of carbon dioxide and methane in a eutrophic lake

    NASA Astrophysics Data System (ADS)

    Loken, Luke; Crawford, John; Schramm, Paul; Stadler, Philipp; Stanley, Emily

    2017-04-01

    Lakes are important regulators of global carbon cycling and conduits of greenhouse gases to the atmosphere; however, most efflux estimates for individual lakes are based on extrapolations from a single location. Within-lake variability in carbon dioxide (CO2) and methane (CH4) arises from differences in water sources, physical mixing, and local transformations, all of which can be influenced by anthropogenic disturbances and vary at multiple temporal and spatial scales. During the 2016 open water season (March - December), we mapped surface water concentrations of CO2 and CH4 weekly in a eutrophic lake (Lake Mendota, WI, USA), which has a predominately agricultural and urban watershed. In total, we produced 26 maps of each gas based on 10,000 point measurements distributed across the lake surface. Both gases displayed relatively consistent spatial patterns over the stratified period but exhibited remarkable heterogeneity on each sample date. CO2 was generally undersaturated (global mean: 0.84X atmospheric saturation) throughout the lake's pelagic zone and often differed near river inlets and shorelines. The lake was routinely extremely supersaturated with CH4 (global mean: 105X atmospheric saturation), with greater concentrations in littoral areas that contained organic-rich sediments. During fall mixis, both CO2 and CH4 increased substantially, and concentrations were not uniform across the lake surface. CO2 and CH4 were higher on the upwind side of the lake due to upwelling of enriched hypolimnetic water. While the lake acted as a modest sink for atmospheric CO2 during the stratified period, it released substantial amounts of CO2 during turnover and continually emitted CH4, offsetting any reduction in atmospheric warming potential from summertime CO2 uptake. These data-rich maps illustrate how lake-wide surface concentrations and lake-scale efflux estimates based on single point measurements diverge from spatially weighted calculations. Neither gas is well represented by a sample collected at the lake's central buoy, and thus extrapolations from a single sampling location may not be adequate to assess lake-wide CO2 and CH4 dynamics in human-dominated landscapes.
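
    The central caution above, that single-point extrapolation diverges from spatially weighted estimates, is easy to illustrate numerically. The grid, saturation values, and littoral enrichment below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical surface pCO2 map: undersaturated pelagic zone plus an
# enriched band near a river inlet / shoreline (x atmospheric saturation).
conc = rng.normal(0.84, 0.05, size=(100, 100))
conc[:, :10] += 0.6
area = np.ones_like(conc)                 # equal-area cells in this toy grid

lakewide = np.average(conc, weights=area) # spatially weighted mean
buoy = conc[50, 50]                       # single central-buoy sample
print(f"spatially weighted: {lakewide:.3f}, central point: {buoy:.3f}")
```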

  12. Development of Jet Noise Power Spectral Laws

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2011-01-01

    High-quality jet noise spectral data measured at the Aero-Acoustic Propulsion Laboratory (AAPL) at NASA Glenn is used to develop jet noise scaling laws. A FORTRAN algorithm was written that provides detailed spectral prediction of component jet noise at user-specified conditions. The model generates quick estimates of the jet mixing noise and the broadband shock-associated noise (BBSN) in single-stream, axisymmetric jets within a wide range of nozzle operating conditions. Shock noise is emitted when supersonic jets exit a nozzle at imperfectly expanded conditions. A successful scaling of the BBSN allows this noise component to be predicted in both convergent and convergent-divergent nozzles. Configurations considered in this study consisted of convergent and convergent-divergent nozzles. Velocity exponents for the jet mixing noise were evaluated as a function of observer angle and jet temperature. Similar intensity laws were developed for the broadband shock-associated noise in supersonic jets. A computer program called sJet was developed that provides a quick estimate of component noise in single-stream jets at a wide range of operating conditions. A number of features have been incorporated into the data bank and subsequent scaling in order to improve jet noise predictions. Measurements have been converted to a lossless format. Set points have been carefully selected to minimize the instability-related noise at small aft angles. Regression parameters have been scrutinized for error bounds at each angle. Screech-related amplification noise has been kept to a minimum to ensure that the velocity exponents for the jet mixing noise remain free of amplifications. A shock-noise-intensity scaling has been developed independent of the nozzle design point. The computer program provides detailed narrow-band spectral predictions for component noise (mixing noise and shock-associated noise), as well as the total noise. Although the methodology is confined to single streams, efforts are underway to generate a data bank and algorithm applicable to dual-stream jets. Analyses of shock-associated noise in high-powered jets, such as those of military aircraft, can benefit from these predictions.

  13. Representativeness of the ground observational sites and up-scaling of the point soil moisture measurements

    NASA Astrophysics Data System (ADS)

    Chen, Jinlei; Wen, Jun; Tian, Hui

    2016-02-01

    Soil moisture plays an increasingly important role in the cycle of energy-water exchange, climate change, and hydrologic processes. It is usually measured at a point site, but regional soil moisture is essential for validating remote sensing products and numerical modeling results. In the study reported in this paper, the minimal number of required sites (NRS) for establishing a research observational network and the representative single sites for regional soil moisture estimation are discussed using soil moisture data derived from the "Maqu soil moisture observational network" (101°40‧-102°40‧E, 33°30‧-35°45‧N), which is supported by the Chinese Academy of Sciences. Furthermore, the up-scaling method best suited to this network has been identified by evaluating four commonly used up-scaling methods. The results showed that (1) under a given accuracy requirement of R ⩾ 0.99 and RMSD ⩽ 0.02 m3/m3, the NRS at both 5 and 10 cm depth is 10. (2) Representativeness of the sites has been validated by time stability analysis (TSA), time sliding correlation analysis (TSCA) and optimal combination of sites (OCS). NST01 is the most representative site at 5 cm depth for the first two methods; NST07 and NST02 are the most representative sites at 10 cm depth. The optimum combination of sites at 5 cm depth is NST01, NST02, and NST07; NST05, NST08, and NST13 are the best group at 10 cm depth. (3) Linear fitting, compared with the other three methods, is the best up-scaling method for all types of representative sites obtained above, and linear regression equations between a single site and regional soil moisture are then established. The "single site" obtained by OCS has the greatest up-scaling effect, and TSCA takes second place. (4) The linear fitting equations show good practicability in estimating the variation of regional soil moisture from July 3, 2013 to July 3, 2014, a period during which a large number of soil moisture observations were lost.
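
    The linear up-scaling step described above amounts to regressing the network-mean soil moisture on a single representative site and applying the fitted equation thereafter. A minimal sketch with synthetic data (the coefficients and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
regional = rng.uniform(0.15, 0.40, 365)                    # network-mean soil moisture
site = 0.9 * regional + 0.02 + rng.normal(0, 0.01, 365)    # a representative site

slope, intercept = np.polyfit(site, regional, 1)           # linear up-scaling fit
estimate = slope * site + intercept
rmsd = np.sqrt(np.mean((estimate - regional) ** 2))
print(f"regional ~ {slope:.2f}*site + {intercept:.3f}, RMSD = {rmsd:.4f} m3/m3")
```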

  14. Estimating one's own and one's relatives' multiple intelligence: a study from Argentina.

    PubMed

    Furnham, Adrian; Chamorro-Premuzic, Tomas

    2005-05-01

    Participants from Argentina (N = 217) estimated their own, their partner's, their parents' and their grandparents' overall and multiple intelligences. The Argentinean data showed that men gave higher overall estimates than women (M = 110.4 vs. 105.1) as well as higher estimates on mathematical and spatial intelligence. Participants thought themselves slightly less bright than their fathers (2 IQ points) but brighter than their mothers (6 points), their grandfathers (8 points), but especially their grandmothers (11 points). Regressions showed that participants thought verbal and mathematical IQ to be the best predictors of overall IQ. Results were broadly in agreement with other studies in the area. A comparison was also made with British data using the same questionnaire. British participants tended to give significantly higher self-estimates than for relatives, though the pattern was generally similar. Results are discussed in terms of the studies in the field.

  15. Optimal Background Estimators in Single-Molecule FRET Microscopy.

    PubMed

    Preus, Søren; Hildebrandt, Lasse L; Birkedal, Victoria

    2016-09-20

    Single-molecule total internal reflection fluorescence (TIRF) microscopy constitutes an umbrella of powerful tools that facilitate direct observation of the biophysical properties, population heterogeneities, and interactions of single biomolecules without the need for ensemble synchronization. Due to the low signal-to-noise ratio in single-molecule TIRF microscopy experiments, it is important to determine the local background intensity, especially when the fluorescence intensity of the molecule is used quantitatively. Here we compare and evaluate the performance of different aperture-based background estimators used particularly in single-molecule Förster resonance energy transfer. We introduce the general concept of multiaperture signatures and use this technique to demonstrate how the choice of background can affect the measured fluorescence signal considerably. A new (to our knowledge) and simple background estimator is proposed, called the local statistical percentile (LSP). We show that the LSP background estimator performs as well as current background estimators at low molecular densities and significantly better in regions of high molecular densities. The LSP background estimator is thus suited for single-particle TIRF microscopy of dense biological samples in which the intensity itself is an observable of the technique. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
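
    The published LSP estimator is defined in the paper itself; as a loose illustration of the percentile idea only, the sketch below estimates local background as a low percentile of intensities in an annulus around a molecule, which stays robust when a neighboring spot contaminates the aperture. The annulus radii and the 30th-percentile choice are illustrative assumptions, not the LSP definition.

```python
import numpy as np

def percentile_background(img, row, col, r_in=4, r_out=8, pct=30):
    # Local background = a low percentile of pixel intensities in an annulus
    # around the molecule; bright neighbors inflate the mean but barely move
    # a low percentile.
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    d2 = (yy - row) ** 2 + (xx - col) ** 2
    ring = img[(d2 >= r_in ** 2) & (d2 <= r_out ** 2)]
    return np.percentile(ring, pct)

# Toy frame: Poisson background plus two nearby Gaussian spots (high density).
rng = np.random.default_rng(0)
img = rng.poisson(100, (64, 64)).astype(float)
yy, xx = np.mgrid[:64, :64]
for cy, cx in [(32, 32), (32, 38)]:
    img += 500 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
print(percentile_background(img, 32, 32))
```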

  16. Evaluation of a Single-Beam Sonar System to Map Seagrass at Two Sites in Northern Puget Sound, Washington

    USGS Publications Warehouse

    Stevens, Andrew W.; Lacy, Jessica R.; Finlayson, David P.; Gelfenbaum, Guy

    2008-01-01

    Seagrass at two sites in northern Puget Sound, Possession Point and nearby Browns Bay, was mapped using both a single-beam sonar and underwater video camera. The acoustic and underwater video data were compared to evaluate the accuracy of acoustic estimates of seagrass cover. The accuracy of the acoustic method was calculated for three classifications of seagrass observed in underwater video: bare (no seagrass), patchy seagrass, and continuous seagrass. Acoustic and underwater video methods agreed in 92 percent and 74 percent of observations made in bare and continuous areas, respectively. However, in patchy seagrass, the agreement between acoustic and underwater video was poor (43 percent). The poor agreement between the two methods in areas with patchy seagrass is likely because the two instruments were not precisely colocated. The distribution of seagrass at the two sites differed both in overall percent vegetated and in the distribution of percent cover versus depth. On the basis of acoustic data, seagrass inhabited 0.29 km2 (19 percent of total area) at Possession Point and 0.043 km2 (5 percent of total area) at the Browns Bay study site. The depth distribution at the two sites was markedly different. Whereas the majority of seagrass at Possession Point occurred between -0.5 and -1.5 m MLLW, most seagrass at Browns Bay occurred at a greater depth, between -2.25 and -3.5 m MLLW. Further investigation of the anthropogenic and natural factors causing these differences in distribution is needed.

  17. Assessment Study of Using Online (CSRS) GPS-PPP Service for Mapping Applications in Egypt

    NASA Astrophysics Data System (ADS)

    Abd-Elazeem, Mohamed; Farah, Ashraf; Farrag, Farrag

    2011-09-01

    Many applications in navigation, land surveying, land title definitions and mapping have been made simpler and more precise due to the accessibility of Global Positioning System (GPS) data, and thus the demand for using advanced GPS techniques in surveying applications has become essential. The differential technique was the only source of accurate positioning for many years, and it remained in use despite its cost. The precise point positioning (PPP) technique is a viable alternative to the differential positioning method, in which a user with a single receiver can attain positioning accuracy at the centimeter or decimeter scale. In recent years, many organizations have introduced online GPS-PPP processing services capable of determining accurate geocentric positions using GPS observations. These services provide the user with receiver coordinates in free and unlimited-access formats via the internet. This paper investigates the accuracy of the Canadian Spatial Reference System (CSRS) Precise Point Positioning (CSRS-PPP) service supervised by the Geodetic Survey Division (GSD), Canada. Single-frequency static GPS observations have been collected at three points covering time spans of 60, 90 and 120 minutes. These three observed sites form baselines of 1.6, 7, and 10 km, respectively. In order to assess the CSRS-PPP accuracy, the discrepancies between the CSRS-PPP estimates and the regular differential GPS solutions were computed. The obtained results illustrate that the PPP produces a horizontal error at the scale of a few decimeters; this is accurate enough to serve many mapping applications in developing countries with savings in both cost and experienced labor.

  18. Compatible Basal Area and Number of Trees Estimators from Remeasured Horizontal Point Samples

    Treesearch

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1989-01-01

    Compatible groups of estimators for total value at time 1 (V1), survivor growth (S), and ingrowth (I) for use with permanent horizontal point samples are evaluated for the special cases of estimating the change in both the number of trees and basal area. Caveats which should be observed before any one compatible grouping of estimators is chosen...

  19. Nonpoint and Point Sources of Nitrogen in Major Watersheds of the United States

    USGS Publications Warehouse

    Puckett, Larry J.

    1994-01-01

    Estimates of nonpoint and point sources of nitrogen were made for 107 watersheds located in the U.S. Geological Survey's National Water-Quality Assessment Program study units throughout the conterminous United States. The proportions of nitrogen originating from fertilizer, manure, atmospheric deposition, sewage, and industrial sources were found to vary with climate, hydrologic conditions, land use, population, and physiography. Fertilizer sources of nitrogen are proportionally greater in agricultural areas of the West and the Midwest than in other parts of the Nation. Animal manure contributes large proportions of nitrogen in the South and parts of the Northeast. Atmospheric deposition of nitrogen is generally greatest in areas of greatest precipitation, such as the Northeast. Point sources (sewage and industrial) generally are predominant in watersheds near cities, where they may account for large proportions of the nitrogen in streams. The transport of nitrogen in streams increases as amounts of precipitation and runoff increase and is greatest in the Northeastern United States. Because no single nonpoint nitrogen source is dominant everywhere, approaches to control nitrogen must vary throughout the Nation. Watershed-based approaches to understanding nonpoint and point sources of contamination, as used by the National Water-Quality Assessment Program, will aid water-quality and environmental managers to devise methods to reduce nitrogen pollution.

  20. Application of a Threshold Method to Airborne-Spaceborne Attenuating-Wavelength Radars for the Estimation of Space-Time Rain-Rate Statistics.

    NASA Astrophysics Data System (ADS)

    Meneghini, Robert

    1998-09-01

    A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, leads to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used in estimating the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
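
    A minimal sketch of the truncation-and-refit idea above (not the paper's exact implementation): draw lognormal rain rates, discard values outside a reliable window at both ends, histogram the survivors normalized by the total number of observations (which the radar knows, since it knows how many returns were rejected), and recover the lognormal parameters by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognorm_pdf(r, mu, sigma):
    return np.exp(-(np.log(r) - mu) ** 2 / (2 * sigma ** 2)) \
        / (r * sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
rain = rng.lognormal(mean=1.0, sigma=0.8, size=20000)   # "true" rain rates
kept = rain[(rain > 1.0) & (rain < 20.0)]               # double-sided threshold

# Histogram of survivors, normalized by the TOTAL count so that it matches
# the full (untruncated) pdf on the kept range.
counts, edges = np.histogram(kept, bins=40, range=(1.0, 20.0))
dens = counts / (np.diff(edges) * rain.size)
centers = 0.5 * (edges[:-1] + edges[1:])

(mu, sigma), _ = curve_fit(lognorm_pdf, centers, dens, p0=(0.5, 1.0))
print(f"fitted mu={mu:.2f}, sigma={sigma:.2f} (true: 1.00, 0.80)")
print(f"implied mean rain rate: {np.exp(mu + sigma**2 / 2):.2f}")
```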

  1. SEE Transient Response of Crane Interpoint Single Output Point of Load DC-DC Converters

    NASA Technical Reports Server (NTRS)

    Sanders, Anthony B.; Chen, Dakai; Kim, Hak S.; Phan, Anthony M.

    2011-01-01

    This study was undertaken to determine the single event effect and transient susceptibility of the Crane Interpoint Maximum Flexible Power (MFP) Single Output Point of Load DC/DC Converters, testing for transient interruptions in the output signal and for destructive and nondestructive events induced by exposure to a heavy ion beam.

  2. A universal approximation to grain size from images of non-cohesive sediment

    USGS Publications Warehouse

    Buscombe, D.; Rubin, D.M.; Warrick, J.A.

    2010-01-01

    The two-dimensional spectral decomposition of an image of sediment provides a direct statistical estimate, grid-by-number style, of the mean of all intermediate axes of all single particles within the image. We develop and test this new method which, unlike existing techniques, requires neither image processing algorithms for detection and measurement of individual grains, nor calibration. The only information required of the operator is the spatial resolution of the image. The method is tested with images of bed sediment from nine different sedimentary environments (five beaches, three rivers, and one continental shelf), across the range 0.1 mm to 150 mm, taken in air and underwater. Each population was photographed using a different camera and lighting conditions. We term it a “universal approximation” because it has produced accurate estimates for all populations we have tested it with, without calibration. We use three approaches (theory, computational experiments, and physical experiments) to both understand and explore the sensitivities and limits of this new method. Based on 443 samples, the root-mean-squared (RMS) error between size estimates from the new method and known mean grain size (obtained from point counts on the image) was found to be ±≈16%, with a 95% probability of estimates within ±31% of the true mean grain size (measured in a linear scale). The RMS error reduces to ≈11%, with a 95% probability of estimates within ±20% of the true mean grain size if point counts from a few images are used to correct bias for a specific population of sediment images. It thus appears it is transferable between sedimentary populations with different grain size, but factors such as particle shape and packing may introduce bias which may need to be calibrated for. For the first time, an attempt has been made to mathematically relate the spatial distribution of pixel intensity within the image of sediment to the grain size.

  3. A universal approximation of grain size from images of noncohesive sediment

    NASA Astrophysics Data System (ADS)

    Buscombe, D.; Rubin, D. M.; Warrick, J. A.

    2010-06-01

    The two-dimensional spectral decomposition of an image of sediment provides a direct statistical estimate, grid-by-number style, of the mean of all intermediate axes of all single particles within the image. We develop and test this new method which, unlike existing techniques, requires neither image processing algorithms for detection and measurement of individual grains, nor calibration. The only information required of the operator is the spatial resolution of the image. The method is tested with images of bed sediment from nine different sedimentary environments (five beaches, three rivers, and one continental shelf), across the range 0.1 mm to 150 mm, taken in air and underwater. Each population was photographed using a different camera and lighting conditions. We term it a "universal approximation" because it has produced accurate estimates for all populations we have tested it with, without calibration. We use three approaches (theory, computational experiments, and physical experiments) to both understand and explore the sensitivities and limits of this new method. Based on 443 samples, the root-mean-squared (RMS) error between size estimates from the new method and known mean grain size (obtained from point counts on the image) was found to be ±≈16%, with a 95% probability of estimates within ±31% of the true mean grain size (measured in a linear scale). The RMS error reduces to ≈11%, with a 95% probability of estimates within ±20% of the true mean grain size if point counts from a few images are used to correct bias for a specific population of sediment images. It thus appears it is transferable between sedimentary populations with different grain size, but factors such as particle shape and packing may introduce bias which may need to be calibrated for. For the first time, an attempt has been made to mathematically relate the spatial distribution of pixel intensity within the image of sediment to the grain size.
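
    As a loose illustration of the spectral idea in the two records above (this is emphatically not the authors' exact estimator): by the Wiener-Khinchin theorem, the image autocorrelation follows from the 2D power spectrum, and its decay length scales with the characteristic grain size. The toy image, the 0.5 threshold, and the crude radial profile below are all illustrative assumptions.

```python
import numpy as np

def correlation_length(img, resolution_mm_per_px):
    # Autocorrelation via the power spectrum (Wiener-Khinchin), normalized to
    # 1 at zero lag; return the first lag where it falls below 0.5, converted
    # to millimetres. This is a grain-size proxy, not a calibrated estimate.
    img = img - img.mean()
    power = np.abs(np.fft.fft2(img)) ** 2
    acorr = np.fft.ifft2(power).real
    acorr /= acorr[0, 0]
    # Cheap radial profile: average the first row and first column.
    prof = 0.5 * (acorr[0, :img.shape[1] // 2] + acorr[:img.shape[0] // 2, 0])
    lag = np.argmax(prof < 0.5)
    return lag * resolution_mm_per_px

# Toy "sediment" image: random field low-pass filtered to a set blob scale.
rng = np.random.default_rng(0)
noise = rng.normal(size=(256, 256))
kx = np.fft.fftfreq(256)
k2 = kx[:, None] ** 2 + kx[None, :] ** 2
img = np.fft.ifft2(np.fft.fft2(noise) * np.exp(-k2 / (2 * 0.02 ** 2))).real
print(correlation_length(img, resolution_mm_per_px=0.1), "mm (proxy)")
```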

  4. Aggregate exposure approaches for parabens in personal care products: a case assessment for children between 0 and 3 years old

    PubMed Central

    Gosens, Ilse; Delmaar, Christiaan J E; ter Burg, Wouter; de Heer, Cees; Schuur, A Gerlienke

    2014-01-01

    In the risk assessment of chemical substances, aggregation of exposure to a substance from different sources via different pathways is not common practice. Focusing the exposure assessment on a substance from a single source can lead to a significant underestimation of the risk. To gain more insight on how to perform an aggregate exposure assessment, we applied a deterministic (tier 1) and a person-oriented probabilistic approach (tier 2) for exposure to the four most common parabens through personal care products in children between 0 and 3 years old. Following a deterministic approach, a worst-case exposure estimate is calculated for methyl-, ethyl-, propyl- and butylparaben. As an illustration for risk assessment, Margins of Exposure (MoE) are calculated. These are 991 and 4966 for methyl- and ethylparaben, and 8 and 10 for propyl- and butylparaben, respectively. In tier 2, more detailed information on product use has been obtained from a small survey on product use of consumers. A probabilistic exposure assessment is performed to estimate the variability and uncertainty of exposure in a population. Results show that the internal exposure for each paraben is below the level determined in tier 1. However, for propyl- and butylparaben, the percentile of the population with an exposure probability above the assumed “safe” MoE of 100, is 13% and 7%, respectively. In conclusion, a tier 1 approach can be performed using simple equations and default point estimates, and serves as a starting point for exposure and risk assessment. If refinement is warranted, the more data demanding person-oriented probabilistic approach should be used. This probabilistic approach results in a more realistic exposure estimate, including the uncertainty, and allows determining the main drivers of exposure. Furthermore, it allows to estimate the percentage of the population for which the exposure is likely to be above a specific value. PMID:23801276

  5. Application of multiple modelling to hyperthermia estimation: reducing the effects of model mismatch.

    PubMed

    Potocki, J K; Tharp, H S

    1993-01-01

    Multiple model estimation is a viable technique for dealing with the spatial perfusion model mismatch associated with hyperthermia dosimetry. Using multiple models, spatial discrimination can be obtained without increasing the number of unknown perfusion zones. Two multiple model estimators based on the extended Kalman filter (EKF) are designed and compared with two EKFs based on single models having greater perfusion zone segmentation. Results given here indicate that multiple modelling is advantageous when the number of thermal sensors is insufficient for convergence of single model estimators having greater perfusion zone segmentation. In situations where sufficient measured outputs exist for greater unknown perfusion parameter estimation, the multiple model estimators and the single model estimators yield equivalent results.

  6. Efficient high-rate satellite clock estimation for PPP ambiguity resolution using carrier-ranges.

    PubMed

    Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2014-11-25

    In order to capture the short-term clock variations of GNSS satellites, clock corrections must be estimated and updated at a high rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation alone, as a great number of ambiguities need to be estimated simultaneously. However, on the one hand, better estimates are expected when more stations are included, and on the other hand, satellites from different GNSS systems must be processed integratively for a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations are often employed, since differencing eliminates the ambiguities. Because the epoch-differenced method can derive only temporal clock changes, which then have to be aligned to the absolute clocks in a rather complicated way, in this paper an efficient method for high-rate clock estimation is proposed using the concept of "carrier-range", realized by means of PPP with integer ambiguity resolution. Processing procedures for both post-processing and real-time processing are developed. The experimental validation shows that the computation time can be reduced to about one sixth of that of existing methods in post-processing, and to less than 1 s for processing a single epoch of a network of about 200 stations in real-time mode after all ambiguities are fixed. This confirms that the proposed processing strategy will enable high-rate clock estimation for future multi-GNSS networks in post-processing, and possibly also in real-time mode.

  7. Speed of sound estimation for dual-stage virtual source ultrasound beamforming using point scatterers

    NASA Astrophysics Data System (ADS)

    Ma, Manyou; Rohling, Robert; Lampe, Lutz

    2017-03-01

    Synthetic transmit aperture beamforming is an increasingly used method to improve resolution in biomedical ultrasound imaging. Synthetic aperture sequential beamforming (SASB) is an implementation of this concept that features relatively low computational complexity. Moreover, it can be implemented in a dual-stage architecture, where the first stage applies only simple single receive-focused delay-and-sum (srDAS) operations, while the second, more complex stage is performed either locally or remotely using more powerful processing. However, like traditional DAS-based beamforming methods, SASB is susceptible to inaccurate speed-of-sound (SOS) information. In this paper, we show how SOS estimation can be implemented using the srDAS beamformed image and integrated into the dual-stage implementation of SASB, in an effort to obtain high-resolution images with relatively low-cost hardware. Our approach builds on an existing per-channel radio-frequency data-based direct estimation method and applies an iterative refinement of the estimate. We use this estimate for SOS compensation without the need to repeat the first-stage beamforming. The proposed and previous methods are tested in both simulation and experimental studies. The accuracy of our SOS estimation method is on average 0.38% in simulation studies and 0.55% in phantom experiments when the underlying SOS in the media is within the range 1450-1620 m/s. Using the estimated SOS, the lateral beamforming resolution of SASB is improved by 52.6% on average in simulation studies and by 50.0% in phantom experiments.

  8. Soil organic carbon stocks in Alaska estimated with spatial and pedon data

    USGS Publications Warehouse

    Bliss, Norman B.; Maursetter, J.

    2010-01-01

    Temperatures in high-latitude ecosystems are increasing faster than the average rate of global warming, which may lead to a positive feedback for climate change by increasing the respiration rates of soil organic C. If a positive feedback is confirmed, soil C will represent a source of greenhouse gases that is not currently considered in international protocols to regulate C emissions. We present new estimates of the stocks of soil organic C in Alaska, calculated by linking spatial and field data developed by the USDA NRCS. The spatial data are from the State Soil Geographic database (STATSGO), and the field and laboratory data are from the National Soil Characterization Database, also known as the pedon database. The new estimates range from 32 to 53 Pg of soil organic C for Alaska, formed by linking the spatial and field data using the attributes of Soil Taxonomy. For modelers, we recommend an estimation method based on taxonomic subgroups with interpolation for missing areas, which yields an estimate of 48 Pg. This is a substantial increase over the 13 Pg estimated from only the STATSGO data as originally distributed in 1994, but the increase reflects different estimation methods and is not a measure of the change in C on the landscape. Pedon samples were collected between 1952 and 2002, so the results do not represent a single point in time. The linked databases provide an improved basis for modeling the impacts of climate change on net ecosystem exchange.

  9. Efficacy, safety and outcome of frameless image-guided robotic radiosurgery for brain metastases after whole brain radiotherapy.

    PubMed

    Lohkamp, Laura-Nanna; Vajkoczy, Peter; Budach, Volker; Kufeld, Markus

    2018-05-01

    The aim of this study was to estimate the efficacy, safety and outcome of frameless image-guided robotic radiosurgery for the treatment of recurrent brain metastases after whole brain radiotherapy (WBRT). We performed a retrospective single-center analysis including patients with recurrent brain metastases after WBRT who were treated with single-session radiosurgery using the CyberKnife® Radiosurgery System (CKRS) (Accuray Inc., CA) between 2011 and 2016. The primary end point was local tumor control; secondary end points were distant tumor control, treatment-related toxicity and overall survival. 36 patients with 140 recurrent brain metastases underwent 46 single-session CKRS treatments. Twenty-one patients had multiple brain metastases (58%). The mean interval between WBRT and CKRS was 2 years (range 0.2-7 years). The median number of treated metastases per treatment session was five (range 1-12), with a mean tumor volume of 1.26 ccm and a median tumor dose of 18 Gy prescribed to the 70% isodose line. Two patients experienced local tumor recurrence within the first year after treatment, and 13 patients (36%) developed new brain metastases. Nine of these patients underwent an additional one to three CKRS treatments. Eight patients (22.2%) showed treatment-related radiation reactions on MRI, three with clinical symptoms. Median overall survival was 19 months after CKRS. The actuarial 1-year local control rate was 94.2%. CKRS has proven to be locally effective and safe, given the high local tumor control rates and low toxicity. Thus CKRS offers a reliable salvage treatment option for recurrent brain metastases after WBRT.

  10. Electrically generated eddies at an eightfold stagnation point within a nanopore

    PubMed Central

    Sherwood, J. D.; Mao, M.; Ghosal, S.

    2014-01-01

    Electrically generated flows around a thin dielectric plate pierced by a cylindrical hole are computed numerically. The geometry represents that of a single nanopore in a membrane. When the membrane is uncharged, flow is due solely to induced charge electroosmosis, and eddies are generated by the high fields at the corners of the nanopore. These eddies meet at stagnation points. If the geometry is chosen correctly, the stagnation points merge to form a single stagnation point at which four streamlines cross at a point and eight eddies meet. PMID:25489206

  11. Investigation of speed estimation using single loop detectors.

    DOT National Transportation Integrated Search

    2008-05-15

    The ability to collect or estimate accurate speed information is of great importance to a large number of Intelligent Transportation Systems (ITS) applications. Estimating speeds from the widely used single inductive loop sensor has been a diffic...

  12. Optimal estimates of the diffusion coefficient of a single Brownian trajectory.

    PubMed

    Boyer, Denis; Dean, David S; Mejía-Monasterio, Carlos; Oshanin, Gleb

    2012-03-01

    Modern developments in microscopy and image processing are revolutionizing areas of physics, chemistry, and biology as nanoscale objects can be tracked with unprecedented accuracy. The goal of single-particle tracking is to determine the interaction between the particle and its environment. The price paid for having a direct visualization of a single particle is a consequent lack of statistics. Here we address the optimal way to extract diffusion constants from single trajectories for pure Brownian motion. It is shown that the maximum likelihood estimator is much more efficient than the commonly used least-squares estimate. Furthermore, we investigate the effect of disorder on the distribution of estimated diffusion constants and show that it increases the probability of observing estimates much smaller than the true (average) value.
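
    The contrast drawn above between the two estimators for pure Brownian motion is easy to reproduce. For a noise-free trajectory, the maximum-likelihood estimate is simply the mean squared single-step displacement divided by 2·d·Δt (d = dimension), while the least-squares approach fits the empirical MSD curve; the simulation parameters below are arbitrary.

```python
import numpy as np

def simulate_trajectory(D, dt, n_steps, seed=0):
    # 2D Brownian motion: independent Gaussian steps of variance 2*D*dt.
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_steps, 2))
    return np.cumsum(steps, axis=0)

def mle_D(traj, dt):
    # MLE for pure Brownian motion without localization noise.
    steps = np.diff(traj, axis=0)
    d = traj.shape[1]
    return (steps ** 2).sum() / (2 * d * dt * len(steps))

def ls_D(traj, dt, max_lag=10):
    # Least-squares fit of the empirical MSD curve against lag time;
    # MSD(t) = 2*d*D*t, so D = slope / (2*d).
    lags = np.arange(1, max_lag + 1)
    msd = [np.mean(((traj[k:] - traj[:-k]) ** 2).sum(axis=1)) for k in lags]
    d = traj.shape[1]
    return np.polyfit(lags * dt, msd, 1)[0] / (2 * d)

traj = simulate_trajectory(D=1.0, dt=0.01, n_steps=1000)
print(f"MLE: {mle_D(traj, 0.01):.3f}, LS: {ls_D(traj, 0.01):.3f}")
```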

  13. Coalescent Inference Using Serially Sampled, High-Throughput Sequencing Data from Intrahost HIV Infection

    PubMed Central

    Dialdestoro, Kevin; Sibbesen, Jonas Andreas; Maretty, Lasse; Raghwani, Jayna; Gall, Astrid; Kellam, Paul; Pybus, Oliver G.; Hein, Jotun; Jenkins, Paul A.

    2016-01-01

    Human immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes chronic infections, so genetic diversity within a single infection can be very high. High-throughput “deep” sequencing can now measure this diversity in unprecedented detail, particularly since it can be performed at different time points during an infection, and this offers a potentially powerful way to infer the evolutionary dynamics of the intrahost viral population. However, population genomic inference from HIV sequence data is challenging because of high rates of mutation and recombination, rapid demographic changes, and ongoing selective pressures. In this article we develop a new method for inference using HIV deep sequencing data, using an approach based on importance sampling of ancestral recombination graphs under a multilocus coalescent model. The approach further extends recent progress in the approximation of so-called conditional sampling distributions, a quantity of key interest when approximating coalescent likelihoods. The chief novelties of our method are that it is able to infer rates of recombination and mutation, as well as the effective population size, while handling sampling over different time points and missing data without extra computational difficulty. We apply our method to a data set of HIV-1, in which several hundred sequences were obtained from an infected individual at seven time points over 2 years. We find mutation rate and effective population size estimates to be comparable to those produced by the software BEAST. Additionally, our method is able to produce local recombination rate estimates. The software underlying our method, Coalescenator, is freely available. PMID:26857628

  14. Estimating the Effects of Detection Heterogeneity and Overdispersion on Trends Estimated from Avian Point Counts

    EPA Science Inventory

    Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approach...

  15. Zero-Point Calibration for AGN Black-Hole Mass Estimates

    NASA Technical Reports Server (NTRS)

    Peterson, B. M.; Onken, C. A.

    2004-01-01

    We discuss the measurement and associated uncertainties of AGN reverberation-based black-hole masses, since these provide the zero-point calibration for scaling relationships that allow black-hole mass estimates for quasars. We find that reverberation-based mass estimates appear to be accurate to within a factor of about 3.

  16. Control Variate Estimators of Survivor Growth from Point Samples

    Treesearch

    Francis A. Roesch; Paul C. van Deusen

    1993-01-01

    Two estimators of the control variate type for survivor growth from remeasured point samples are proposed and compared with more familiar estimators. The large reductions in variance, observed in many cases for estimators constructed with control variates, are also realized in this application. A simulation study yielded consistent reductions in variance which were often...

  17. Quantitative estimation of bioclimatic parameters from presence/absence vegetation data in North America by the modern analog technique

    USGS Publications Warehouse

    Thompson, R.S.; Anderson, K.H.; Bartlein, P.J.

    2008-01-01

    The method of modern analogs is widely used to obtain estimates of past climatic conditions from paleobiological assemblages, yet despite its frequent use, the method involves assumptions that have so far been untested. We applied four analog approaches to a continental-scale set of bioclimatic and plant-distribution presence/absence data for North America to assess how well this method works under near-optimal modern conditions. For each point on the grid, we calculated the similarity between its vegetation assemblage and those of all other points on the grid (excluding nearby points). The climate of the points with the most similar vegetation was used to estimate the climate at the target grid point. Estimates based on the Jaccard similarity coefficient had smaller errors than those based on a new similarity coefficient, although the latter may be more robust because it does not assume that the "fossil" assemblage is complete. The results of these analyses indicate that presence/absence vegetation assemblages provide a valid basis for estimating bioclimates on the continental scale. However, the accuracy of the estimates is strongly tied to the number of species in the target assemblage, and the analog method is necessarily constrained to produce estimates that fall within the range of observed values. We applied the four modern analog approaches and the mutual overlap (or "mutual climatic range") method to estimate the bioclimatic conditions represented by a plant macrofossil assemblage from a packrat midden of Last Glacial Maximum age from southern Nevada. In general, the estimation approaches produced similar results with regard to moisture conditions, but there was a greater range of estimates for growing-degree days. Despite its limitations, the modern analog technique can provide paleoclimatic reconstructions that serve as a starting point for the interpretation of past climatic conditions.
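
    A minimal sketch of the analog step using the Jaccard coefficient follows; the toy library and climate variables are invented, and the real analysis excludes nearby grid points and works with continental-scale grids.

```python
import numpy as np

def jaccard(a, b):
    # Jaccard similarity for presence/absence vectors: |A & B| / |A | B|.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def analog_estimate(target_veg, library_veg, library_climate, k=5):
    # Average the climate of the k grid points whose vegetation assemblages
    # are most similar to the target (self-matches excluded in real use).
    sims = np.array([jaccard(target_veg, v) for v in library_veg])
    best = np.argsort(sims)[-k:]
    return library_climate[best].mean(axis=0)

# Toy library: 500 grid points, 30 "species", 2 bioclimatic variables.
rng = np.random.default_rng(0)
library_veg = rng.random((500, 30)) < 0.3
library_climate = rng.normal(size=(500, 2))
target = library_veg[123]          # stand-in for a fossil assemblage
print(analog_estimate(target, library_veg, library_climate))
```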

  18. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction

    PubMed Central

    Berveglieri, Adilson; Liang, Xinlian; Honkavaara, Eija

    2017-01-01

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras. PMID:29207468

  19. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.

    PubMed

    Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija

    2017-12-02

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.
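
    The diameter check in the two records above rests on fitting cylinders (circles, per horizontal slice) to trunk points. A minimal sketch of an algebraic least-squares circle fit applied to a synthetic trunk cross-section; the Kasa fit shown is a generic method, not necessarily the authors' choice, and the noise level is invented.

```python
import numpy as np

def fit_circle(xy):
    # Algebraic (Kasa) least-squares circle fit. From
    # (x-a)^2 + (y-b)^2 = r^2 we get x^2 + y^2 = 2ax + 2by + c,
    # with c = r^2 - a^2 - b^2, which is linear in (a, b, c).
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)

# Synthetic trunk slice: noisy points on a 15 cm radius circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
pts = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta)])
pts += rng.normal(0.0, 0.005, pts.shape)
centre, radius = fit_circle(pts)
print(f"estimated diameter: {2 * radius:.3f} m")
```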

  20. User's guide for RAM. Volume II. Data preparation and listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.B.; Novak, J.H.

    1978-11-01

    The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate, and effective area source height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated, using a narrow-plume hypothesis and using the area source squares as given rather than breaking down all sources into an array of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
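
    RAM's point-source calculation is the Gaussian steady-state plume named above. A minimal sketch of that textbook formula with ground reflection follows; the dispersion parameters sigma_y and sigma_z are passed in directly here, whereas RAM derives them from stability class and downwind distance, and all the numbers are invented.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    # Steady-state Gaussian plume concentration (g/m^3) from a point source
    # with ground reflection. Q in g/s, u in m/s, distances in m; sigma_y and
    # sigma_z are the lateral and vertical dispersion parameters at the
    # receptor's downwind distance.
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2)) +
                np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# 100 g/s source, 50 m effective stack height, ground-level receptor on the
# plume centerline, with dispersion parameters typical of ~1 km downwind.
print(gaussian_plume(Q=100, u=5, y=0, z=0, H=50, sigma_y=80, sigma_z=40))
```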
