Sample records for obtaining initial estimates

  1. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines has been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  2. Initial dynamic load estimates during configuration design

    NASA Technical Reports Server (NTRS)

    Schiff, Daniel

    1987-01-01

    This analysis includes the structural response to shock and vibration and evaluates the maximum deflections, material stresses, and the potential for the occurrence of elastic instability, fatigue, and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time-consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.

  3. Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com

    Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR derived μ-maps was also evaluated using computed tomography μ-maps as the gold-standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the
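
    A minimal numerical sketch of the IAM idea described above, assuming numpy and toy values (the 511 keV water attenuation coefficient is approximate, and the object support and average μ-value are invented): the μ-map is initialized to a uniform average value over the object support, so its line integrals start near the reference and the early-iteration scale offset shrinks.

      import numpy as np

      MU_WATER = 0.0096   # 1/mm, approximate attenuation of water at 511 keV
      MU_AVG = 0.0105     # assumed object-average mu (soft tissue plus some bone)

      def init_mu_map(support_mask, value):
          """Fill the object support with a uniform attenuation value."""
          mu = np.zeros(support_mask.shape)
          mu[support_mask] = value
          return mu

      support = np.zeros((128, 128), dtype=bool)
      support[32:96, 32:96] = True                       # toy object support
      mu_conventional = init_mu_map(support, MU_WATER)   # water-uniform start
      mu_iam = init_mu_map(support, MU_AVG)              # IAM start: average mu

      # Compare line integrals (a crude stand-in for the forward projection):
      print(mu_conventional.sum(axis=0).max(), mu_iam.sum(axis=0).max())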

  4. Estimation of the Arrival Time and Duration of a Radio Signal with Unknown Amplitude and Initial Phase

    NASA Astrophysics Data System (ADS)

    Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.

    2018-05-01

    We synthesize the quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The operating-efficiency characteristics of the synthesized algorithms are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration due to the a priori ignorance of the amplitude and initial phase are determined.
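
    As a hedged illustration of the estimation problem (not the authors' synthesis), the sketch below estimates the arrival time of a pulse with unknown amplitude and initial phase by maximizing the envelope of correlations with quadrature templates; duration estimation, which the paper also treats, would additionally scan over template lengths. All signal parameters are synthetic.

      import numpy as np

      fs, f0, tau = 1e4, 500.0, 0.05        # sample rate, carrier, known duration
      t = np.arange(0.0, 0.5, 1.0 / fs)
      t0_true = 0.21                        # true arrival time, s
      gate = (t >= t0_true) & (t < t0_true + tau)
      x = np.zeros_like(t)
      x[gate] = 1.3 * np.cos(2 * np.pi * f0 * t[gate] + 0.7)   # unknown A and phase
      x += 0.3 * np.random.default_rng(0).standard_normal(t.size)

      tpl_t = np.arange(0.0, tau, 1.0 / fs)
      ci = np.cos(2 * np.pi * f0 * tpl_t)   # in-phase template
      cq = np.sin(2 * np.pi * f0 * tpl_t)   # quadrature template
      env2 = (np.correlate(x, ci, "valid") ** 2 +
              np.correlate(x, cq, "valid") ** 2)   # squared envelope vs. lag
      t0_hat = t[np.argmax(env2)]
      print(f"estimated arrival time: {t0_hat:.4f} s (true {t0_true} s)")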

  5. Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System

    PubMed Central

    Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei

    2018-01-01

    The fusion of monocular visual and inertial cues has become popular in the robotics, unmanned vehicle, and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to distinguish under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
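
    The sketch below is a simplified, assumption-laden rendering of the gravity-refinement idea (fixed known magnitude, 2D update on the tangent space of the current direction); it is not the paper's estimator, and the "measured" gravity is an idealized stand-in for what the visual-inertial alignment would supply.

      import numpy as np

      G = 9.81   # known gravity magnitude, m/s^2

      def tangent_basis(g):
          """Two unit vectors spanning the plane orthogonal to g."""
          n = g / np.linalg.norm(g)
          t = np.cross(n, [0.0, 0.0, 1.0])
          if np.linalg.norm(t) < 1e-6:            # n nearly parallel to z
              t = np.cross(n, [1.0, 0.0, 0.0])
          b1 = t / np.linalg.norm(t)
          return b1, np.cross(n, b1)

      def refine(g_est, g_meas, steps=20, lr=0.5):
          for _ in range(steps):
              b1, b2 = tangent_basis(g_est)
              r = g_meas - g_est                  # residual to the "measurement"
              g_est = g_est + lr * (r @ b1) * b1 + lr * (r @ b2) * b2
              g_est = G * g_est / np.linalg.norm(g_est)   # re-impose |g| = G
          return g_est

      g0 = G * np.array([0.1, 0.1, -0.99]) / np.linalg.norm([0.1, 0.1, -0.99])
      print(refine(g0, np.array([0.0, 0.0, -G])))   # converges toward (0, 0, -G)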

  6. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space-resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.

  7. Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.

    2017-12-01

    It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.
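
    Because static dislocation theory makes seafloor deformation linear in slip, posterior slip samples map directly to an ensemble of tsunami initial conditions. The sketch below illustrates that bookkeeping only; the Green's function matrix is a random stand-in (a real one would come from, e.g., Okada's half-space solutions), and positivity is mimicked by clipping.

      import numpy as np

      rng = np.random.default_rng(1)
      n_subfaults, n_grid, n_samples = 50, 200, 1000
      G = rng.normal(0.0, 0.01, (n_grid, n_subfaults))   # stand-in Green's functions
      slip_mean = rng.uniform(0.0, 5.0, n_subfaults)     # toy posterior mean, m
      slip_cov = 0.25 * np.eye(n_subfaults)              # toy posterior covariance

      slips = rng.multivariate_normal(slip_mean, slip_cov, n_samples).clip(min=0.0)
      uz = slips @ G.T                                   # ensemble of seafloor uplift
      print(uz.mean(axis=0)[:3])                         # mean uplift, first grid nodes
      print(uz.std(axis=0)[:3])                          # per-node uncertainty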

  8. 42 CFR 433.114 - Procedures for obtaining initial approval; notice of decision.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Mechanized Claims Processing and Information Retrieval Systems § 433.114 Procedures for obtaining initial... the system, the notice will include all of the following information: (1) The findings of fact upon...

  9. 42 CFR 433.114 - Procedures for obtaining initial approval; notice of decision.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Mechanized Claims Processing and Information Retrieval Systems § 433.114 Procedures for obtaining initial... the system, the notice will include all of the following information: (1) The findings of fact upon...

  10. 42 CFR 433.114 - Procedures for obtaining initial approval; notice of decision.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Mechanized Claims Processing and Information Retrieval Systems § 433.114 Procedures for obtaining initial... the system, the notice will include all of the following information: (1) The findings of fact upon...

  11. 42 CFR 433.114 - Procedures for obtaining initial approval; notice of decision.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Mechanized Claims Processing and Information Retrieval Systems § 433.114 Procedures for obtaining initial... the system, the notice will include all of the following information: (1) The findings of fact upon...

  12. Curvature estimation for multilayer hinged structures with initial strains

    NASA Astrophysics Data System (ADS)

    Nikishkov, G. P.

    2003-10-01

    A closed-form estimate of curvature for hinged multilayer structures with initial strains is developed. The finite element method is used for modeling of self-positioning microstructures. The geometrically nonlinear problem with large rotations and large displacements is solved using a step procedure with node coordinate updates. Finite element results for the curvature of a hinged micromirror with variable width are compared to the closed-form estimates.
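
    For a flavor of such closed-form estimates, the sketch below evaluates Timoshenko's classic two-layer curvature formula driven by an initial mismatch strain; it stands in for, and is not identical to, the paper's multilayer result, and all thicknesses, moduli, and strains are invented.

      # kappa = 6*eps*(1+m)^2 / (h*(3*(1+m)^2 + (1+m*n)*(m^2 + 1/(m*n))))
      def curvature(eps, t1, t2, E1, E2):
          m = t1 / t2               # thickness ratio
          n = E1 / E2               # modulus ratio
          h = t1 + t2               # total thickness
          num = 6.0 * eps * (1.0 + m) ** 2
          den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
          return num / den          # curvature, 1/m

      kappa = curvature(eps=1e-3, t1=0.5e-6, t2=1.0e-6, E1=160e9, E2=70e9)
      print(f"kappa = {kappa:.3e} 1/m, bending radius = {1.0 / kappa:.3e} m")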

  13. Correcting for initial Th in speleothems to obtain the age of calcite nucleation after a growth hiatus

    NASA Astrophysics Data System (ADS)

    Richards, D. A.; Nita, D. C.; Moseley, G. E.; Hoffmann, D. L.; Standish, C. D.; Smart, P. L.; Edwards, R.

    2013-12-01

    In addition to the many U-Th dated speleothem records (δ18O, δ13C, trace elements) of past environmental change based on continuous phases of calcite growth, discontinuous records also provide important constraints for a wide range of past states of the Earth system, including sea levels, permafrost extent, regional aridity and local cave flooding. Chronological information about human activity or faunal evolution can also be obtained where calcite can be seen to overlie cave art or mammalian bones, for example. Among the important considerations when determining the U-Th age of calcite that nucleates on an exposed surface are (1) initial 230Th/232Th, which can be elevated and variable in some settings, and (2) growth rate and sub-sample density, where extrapolation is required. By way of example, we present sea level data based on U-Th ages of vadose speleothems (i.e. formed above the water table and distinct from 'phreatic' examples) from caves of the circum-Caribbean, where calcite growth was interrupted by rising sea levels and then reinitiated after regression. These estimates demand large corrections, and derived sea level constraints are compared with alternative data from coral reef terraces, phreatic overgrowths on speleothems or indirect, proxy evidence from oxygen isotopes to constrain rates of ice volume growth. Flowstones from the Bahamas provide useful sea level constraints because they present the longest and most continuous records in such settings (a function of preservation potential in addition to hydrological routing) and also earliest growth post-emergence after sea level fall. We revisit estimates for sea level regression at the end of MIS 5 at ~ 80 ka (Richards et al., 1994; Lundberg and Ford, 1994) and make corrections for non-Bulk Earth initial Th contamination (230Th/232Th activity ratio > 10), based on isochron analysis of alternative stalagmites from the same settings and recent high resolution analysis. We also present new U-Th ages for
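
    A heavily simplified sketch of the detrital-Th correction being discussed, assuming secular equilibrium for 234U/238U (real speleothem ages carry an initial 234U excess term as well); the activity ratios and the assumed initial 230Th/232Th below are toy values. The correction is iterated because the detrital 230Th itself decays over the age being solved for.

      import numpy as np

      LAM230 = np.log(2.0) / 75584.0   # 230Th decay constant, 1/yr

      def corrected_age(th230_u238, th232_u238, r0=10.0, n_iter=50):
          """Age with detrital 230Th removed; r0 = assumed initial 230Th/232Th."""
          t = -np.log(1.0 - th230_u238) / LAM230            # uncorrected start
          for _ in range(n_iter):
              auth = th230_u238 - th232_u238 * r0 * np.exp(-LAM230 * t)
              t = -np.log(1.0 - auth) / LAM230              # re-solve with correction
          return t

      print(corrected_age(0.55, 0.02, r0=10.0))   # corrected age in years, toy input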

  14. Obtaining Reliable Estimates of Ambulatory Physical Activity in People with Parkinson's Disease.

    PubMed

    Paul, Serene S; Ellis, Terry D; Dibble, Leland E; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Cavanaugh, James T

    2016-05-05

    We determined the number of days required, and whether to include weekdays and/or weekends, to obtain reliable measures of ambulatory physical activity in people with Parkinson's disease (PD). Ninety-two persons with PD wore a step activity monitor for seven days. The number of days required to obtain a reliable estimate of daily activity was determined from the mean intraclass correlation (ICC2,1) for all possible combinations of 1-6 consecutive days of monitoring. Two days of monitoring were sufficient to obtain reliable daily activity estimates (ICC2,1 > 0.9). Amount (p = 0.03) but not intensity (p = 0.13) of ambulatory activity was greater on weekdays than weekends. Activity prescription based on amount rather than intensity may be more appropriate for people with PD.
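
    A small sketch of the reliability computation behind these results, assuming random toy step counts: ICC(2,1) after Shrout and Fleiss, averaged over all consecutive k-day windows of a 7-day record (k = 1 is skipped because a single column leaves no within-subject spread to assess).

      import numpy as np

      def icc_2_1(Y):
          """Two-way random effects, single measure; Y is subjects x days."""
          n, k = Y.shape
          mr, mc, mt = Y.mean(1), Y.mean(0), Y.mean()
          ss_r = k * ((mr - mt) ** 2).sum()
          ss_c = n * ((mc - mt) ** 2).sum()
          ss_e = ((Y - mr[:, None] - mc[None, :] + mt) ** 2).sum()
          msr, msc, mse = ss_r / (n - 1), ss_c / (k - 1), ss_e / ((n - 1) * (k - 1))
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      rng = np.random.default_rng(2)
      person = rng.normal(8000, 2500, (92, 1))          # stable person effect
      steps = person + rng.normal(0, 800, (92, 7))      # 92 subjects x 7 days

      for k in range(2, 7):
          windows = [steps[:, s:s + k] for s in range(7 - k + 1)]
          print(k, "days: mean ICC =",
                round(float(np.mean([icc_2_1(w) for w in windows])), 3))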

  15. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    PubMed

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
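
    A bare-bones sketch of the plain TMLE targeting step for E[Y(1)] with a binary outcome, to make the "additional targeting" idea concrete; it is not the paper's C-TMLE, uses simple parametric fits in place of super-learning, and all data are simulated.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 2000
      W = rng.normal(size=(n, 2))                       # baseline covariates
      A = rng.binomial(1, 1 / (1 + np.exp(-W[:, 0])))   # treatment
      Y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * A + W[:, 1]))))  # outcome

      Q = LogisticRegression().fit(np.c_[A, W], Y)      # initial outcome fit
      g = LogisticRegression().fit(W, A)                # propensity fit
      gW = g.predict_proba(W)[:, 1]
      QA = Q.predict_proba(np.c_[A, W])[:, 1]           # Qbar(A, W)
      Q1 = Q.predict_proba(np.c_[np.ones(n), W])[:, 1]  # Qbar(1, W)
      H = A / gW                                        # clever covariate

      eps = 0.0                                         # fluctuation parameter
      for _ in range(100):                              # Newton steps, offset logit(QA)
          p = 1 / (1 + np.exp(-(np.log(QA / (1 - QA)) + eps * H)))
          eps += (H * (Y - p)).sum() / (H ** 2 * p * (1 - p)).sum()

      Q1_star = 1 / (1 + np.exp(-(np.log(Q1 / (1 - Q1)) + eps / gW)))
      print("targeted estimate of E[Y(1)]:", Q1_star.mean())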

  16. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    NASA Astrophysics Data System (ADS)

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-08-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.

  17. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    USGS Publications Warehouse

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-01-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
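
    Both records above rest on the same linearized Bayesian geostatistical update; the toy sketch below shows its skeleton (a smooth prior covariance regularizing a highly parameterized log-K field), with invented sensitivities standing in for the flow-and-transport model.

      import numpy as np

      rng = np.random.default_rng(4)
      nx = 60
      x = np.arange(nx)
      Qc = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)   # exponential prior cov
      s_prior = -4.0 * np.ones(nx)                           # prior mean log10 K

      H = rng.normal(0.0, 0.2, (12, nx))                     # stand-in sensitivities
      R = 0.01 * np.eye(12)                                  # observation error cov
      s_true = s_prior + rng.multivariate_normal(np.zeros(nx), Qc)
      y = H @ s_true + rng.multivariate_normal(np.zeros(12), R)

      # Posterior mean: s = s_prior + Qc H' (H Qc H' + R)^-1 (y - H s_prior)
      K = Qc @ H.T @ np.linalg.inv(H @ Qc @ H.T + R)
      s_post = s_prior + K @ (y - H @ s_prior)
      print(s_post[:5])   # smooth where the data are uninformative, as in the study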

  18. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.
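
    The cut-off idea can be stated in a few lines; the sketch below (toy lognormal-like T2 distribution, arbitrary 35% cut-off) contrasts the conventional log-mean T2 with the relaxation time at a chosen cumulative-saturation fraction, the NMR analogue of Winland-Pittman percentile picks on MICP curves.

      import numpy as np

      t2 = np.logspace(-1, 4, 100)                              # T2 bin centers, ms
      phi = np.exp(-0.5 * ((np.log10(t2) - 2.0) / 0.5) ** 2)    # toy T2 distribution
      phi /= phi.sum()                                          # normalize amplitudes

      t2_logmean = 10 ** np.sum(phi * np.log10(t2))             # conventional log-mean
      t2_cut35 = np.interp(0.35, np.cumsum(phi), t2)            # T2 at 35% cumulative
      print(round(float(t2_logmean), 1), "ms vs", round(float(t2_cut35), 1), "ms")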

  19. Stability of individual loudness functions obtained by magnitude estimation and production

    NASA Technical Reports Server (NTRS)

    Hellman, R. P.

    1981-01-01

    A correlational analysis of individual magnitude estimation and production exponents at the same frequency is performed, as is an analysis of individual exponents produced in different sessions by the same procedure across frequency (250, 1000, and 3000 Hz). Taken as a whole, the results show that individual exponent differences do not decrease by counterbalancing magnitude estimation with magnitude production and that individual exponent differences remain stable over time despite changes in stimulus frequency. Further results show that although individual magnitude estimation and production exponents do not necessarily obey the .6 power law, it is possible to predict the slope of an equal-sensation function averaged for a group of listeners from individual magnitude estimation and production data. On the assumption that individual listeners with sensorineural hearing also produce stable and reliable magnitude functions, it is also shown that the slope of the loudness-recruitment function measured by magnitude estimation and production can be predicted for individuals with bilateral losses of long duration. Results obtained in normal and pathological ears thus suggest that individual listeners can produce loudness judgements that reveal, although indirectly, the input-output characteristic of the auditory system.
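
    For concreteness, an individual loudness exponent from magnitude estimation is simply the slope of a log-log fit of judgments against sound pressure; the sketch below recovers an assumed 0.6 exponent from synthetic judgments (all values invented).

      import numpy as np

      rng = np.random.default_rng(5)
      level_db = np.arange(40, 91, 5)                  # stimulus levels, dB SPL
      pressure = 10 ** (level_db / 20.0)               # relative sound pressure
      judgments = 3.0 * pressure ** 0.6 * rng.lognormal(0.0, 0.15, level_db.size)

      slope, _ = np.polyfit(np.log10(pressure), np.log10(judgments), 1)
      print(f"estimated loudness exponent: {slope:.2f} (generating value 0.6)")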

  20. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
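
    A hedged sketch of the model form being described: a logistic equation in which the target regeneration density enters as a predictor, so one fitted curve yields a probability for any specified density. The coefficients below are invented; in the paper they are obtained by maximum likelihood with the 0/1 regeneration indicator as the dependent variable.

      import numpy as np

      def p_regen(density_per_ha, site_index, b=(-2.0, -0.0005, 0.08)):
          """P(obtaining at least the specified density) under assumed coefficients."""
          z = b[0] + b[1] * density_per_ha + b[2] * site_index
          return 1.0 / (1.0 + np.exp(-z))

      for d in (500, 1000, 2000):
          print(d, "trees/ha:", round(float(p_regen(d, site_index=20.0)), 3))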

  1. Early adolescent adversity inflates threat estimation in females and promotes alcohol use initiation in both sexes.

    PubMed

    Walker, Rachel A; Andreansky, Christopher; Ray, Madelyn H; McDannald, Michael A

    2018-06-01

    Childhood adversity is associated with exaggerated threat processing and earlier alcohol use initiation. Conclusive links remain elusive, as childhood adversity typically co-occurs with detrimental socioeconomic factors, and its impact is likely moderated by biological sex. To unravel the complex relationships among childhood adversity, sex, threat estimation, and alcohol use initiation, we exposed female and male Long-Evans rats to early adolescent adversity (EAA). In adulthood, >50 days following the last adverse experience, threat estimation was assessed using a novel fear discrimination procedure in which cues predict a unique probability of footshock: danger (p = 1.00), uncertainty (p = .25), and safety (p = .00). Alcohol use initiation was assessed using voluntary access to 20% ethanol, >90 days following the last adverse experience. During development, EAA slowed body weight gain in both females and males. In adulthood, EAA selectively inflated female threat estimation, exaggerating fear to uncertainty and safety, but promoted alcohol use initiation across sexes. Meaningful relationships between threat estimation and alcohol use initiation were not observed, underscoring the independent effects of EAA. Results isolate the contribution of EAA to adult threat estimation and alcohol use initiation, and reveal moderation by biological sex.

  2. Obtaining Cue Rate Estimates for Some Mysticete Species using Existing Data

    DTIC Science & Technology

    2014-09-30

    primary focus is to obtain cue rates for humpback whales (Megaptera novaeangliae) off the California coast and on the PMRF range. To our knowledge, no... humpback whale cue rates have been calculated for these populations. Once a cue rate is estimated for the populations of humpback whales off the...rates for humpback whales on breeding grounds, in addition to average cue rates for other species of mysticete whales. Cue rates of several other

  3. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-squares mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of

  4. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
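
    The statistical point generalizes to any differencing operator D: differenced errors have covariance D C D', so the proper least-squares weight is its inverse. The toy sketch below (invented design matrix, single differences standing in for double differences) shows the generalized least-squares mechanics.

      import numpy as np

      rng = np.random.default_rng(6)
      m, p = 8, 2
      A = rng.normal(size=(m, p))                   # undifferenced design (toy)
      x_true = np.array([1.0, -0.5])
      y = A @ x_true + rng.normal(0.0, 0.1, m)      # white undifferenced errors

      D = np.eye(m - 1, m) - np.eye(m - 1, m, 1)    # differencing operator
      Ad, yd = D @ A, D @ y
      C = D @ (0.1 ** 2 * np.eye(m)) @ D.T          # colored differenced error cov
      W = np.linalg.inv(C)                          # non-diagonal weight matrix
      x_hat = np.linalg.solve(Ad.T @ W @ Ad, Ad.T @ W @ yd)
      print(x_hat)                                  # recovers x_true within noise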

  5. Objective estimates based on experimental data and initial and final knowledge

    NASA Technical Reports Server (NTRS)

    Rosenbaum, B. M.

    1972-01-01

    An extension of the method of Jaynes, whereby least biased probability estimates are obtained, permits such estimates to be made which account for experimental data on hand as well as prior and posterior knowledge. These estimates can be made for both discrete and continuous sample spaces. The method allows a simple interpretation of Laplace's two rules: the principle of insufficient reason and the rule of succession. Several examples are analyzed by way of illustration.

  6. Estimating initial contaminant mass based on fitting mass-depletion functions to contaminant mass discharge data: Testing method efficacy with SVE operations data

    NASA Astrophysics Data System (ADS)

    Mainhagu, J.; Brusseau, M. L.

    2016-09-01

    The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
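
    A compact sketch of the fitting step, assuming scipy and a synthetic record: an exponential mass-depletion function is fit to cumulative CMD data, and the initial mass is read off the fitted asymptote, once using the full record and once using only the early third, mirroring the paper's comparison.

      import numpy as np
      from scipy.optimize import curve_fit

      def cmd_exp(t, m0, k):
          """Cumulative mass removed under exponential depletion."""
          return m0 * (1.0 - np.exp(-k * t))

      rng = np.random.default_rng(7)
      t = np.linspace(1.0, 60.0, 30)                           # months of SVE
      data = cmd_exp(t, 1200.0, 0.05) * rng.normal(1.0, 0.03, t.size)

      (m0_full, _), _ = curve_fit(cmd_exp, t, data, p0=(data[-1], 0.1))
      (m0_early, _), _ = curve_fit(cmd_exp, t[:10], data[:10], p0=(data[9], 0.1))
      print(f"full record: M0 = {m0_full:.0f}; early third: M0 = {m0_early:.0f}")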

  7. Use of NMR logging to obtain estimates of hydraulic conductivity in the High Plains aquifer, Nebraska, USA

    USGS Publications Warehouse

    Dlubac, Katherine; Knight, Rosemary; Song, Yi-Qiao; Bachman, Nate; Grau, Ben; Cannia, Jim; Williams, John

    2013-01-01

    Hydraulic conductivity (K) is one of the most important parameters of interest in groundwater applications because it quantifies the ease with which water can flow through an aquifer material. Hydraulic conductivity is typically measured by conducting aquifer tests or wellbore flow (WBF) logging. Of interest in our research is the use of proton nuclear magnetic resonance (NMR) logging to obtain information about water-filled porosity and pore space geometry, the combination of which can be used to estimate K. In this study, we acquired a suite of advanced geophysical logs, aquifer tests, WBF logs, and sidewall cores at the field site in Lexington, Nebraska, which is underlain by the High Plains aquifer. We first used two empirical equations developed for petroleum applications to predict K from NMR logging data: the Schlumberger Doll Research equation (KSDR) and the Timur-Coates equation (KT-C), with the standard empirical constants determined for consolidated materials. We upscaled our NMR-derived K estimates to the scale of the WBF-logging K (KWBF-logging) estimates for comparison. All the upscaled KT-C estimates were within an order of magnitude of KWBF-logging and all of the upscaled KSDR estimates were within 2 orders of magnitude of KWBF-logging. We optimized the fit between the upscaled NMR-derived K and KWBF-logging estimates to determine a set of site-specific empirical constants for the unconsolidated materials at our field site. We conclude that reliable estimates of K can be obtained from NMR logging data, thus providing an alternate method for obtaining estimates of K at high levels of vertical resolution.
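
    For reference, the two empirical forms the study starts from are a few lines of code; the constants shown are the conventional consolidated-rock defaults that the authors refit for the unconsolidated High Plains sediments (units follow the usual mD/ms/fraction conventions, an assumption worth checking against any specific source).

      def k_sdr(phi, t2lm_ms, c=4.0):
          """Schlumberger Doll Research form: k ~ C * phi^4 * T2lm^2."""
          return c * phi ** 4 * t2lm_ms ** 2

      def k_timur_coates(phi, ffi, bvi, c=10.0):
          """Timur-Coates form: k ~ (100*phi/C)^4 * (FFI/BVI)^2."""
          return (100.0 * phi / c) ** 4 * (ffi / bvi) ** 2

      print(k_sdr(0.30, 300.0))                        # toy porosity and log-mean T2
      print(k_timur_coates(0.30, ffi=0.25, bvi=0.05))  # toy free/bound fluid split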

  8. Incidence of breast cancer and estimates of overdiagnosis after the initiation of a population-based mammography screening program.

    PubMed

    Coldman, Andrew; Phillips, Norm

    2013-07-09

    There has been growing interest in the overdiagnosis of breast cancer as a result of mammography screening. We report incidence rates in British Columbia before and after the initiation of population screening and provide estimates of overdiagnosis. We obtained the numbers of breast cancer diagnoses from the BC Cancer Registry and screening histories from the Screening Mammography Program of BC for women aged 30-89 years between 1970 and 2009. We calculated age-specific rates of invasive breast cancer and ductal carcinoma in situ. We compared these rates by age, calendar period and screening participation. We obtained 2 estimates of overdiagnosis from cumulative cancer rates among women between the ages of 40 and 89 years: the first estimate compared participants with nonparticipants; the second estimate compared observed and predicted population rates. We calculated participation-based estimates of overdiagnosis to be 5.4% for invasive disease alone and 17.3% when ductal carcinoma in situ was included. The corresponding population-based estimates were -0.7% and 6.7%. Participants had higher rates of invasive cancer and ductal carcinoma in situ than nonparticipants but lower rates after screening stopped. Population incidence rates for invasive cancer increased after 1980; by 2009, they had returned to levels similar to those of the 1970s among women under 60 years of age but remained elevated among women 60-79 years old. Rates of ductal carcinoma in situ increased in all age groups. The extent of overdiagnosis of invasive cancer in our study population was modest and primarily occurred among women over the age of 60 years. However, overdiagnosis of ductal carcinoma in situ was elevated for all age groups. The estimation of overdiagnosis from observational data is complex and subject to many influences. The use of mammography screening in older women has an increased risk of overdiagnosis, which should be considered in screening decisions.

  9. Influence of Initial Inclined Surface Crack on Estimated Residual Fatigue Lifetime of Railway Axle

    NASA Astrophysics Data System (ADS)

    Náhlík, Luboš; Pokorný, Pavel; Ševčík, Martin; Hutař, Pavel

    2016-11-01

    Railway axles are subjected to cyclic loading which can lead to fatigue failure. For safe operation of railway axles, a damage tolerance approach taking into account a possible defect on the railway axle surface is often required. The contribution deals with an estimation of the residual fatigue lifetime of a railway axle with an initial inclined surface crack. A 3D numerical model of an inclined semi-elliptical surface crack in a railway axle was developed, and its curved propagation through the axle was simulated by the finite element method. The presence of a press-fitted wheel in the vicinity of the initial crack was taken into account. A typical loading spectrum of a railway axle was considered, and the residual fatigue lifetime was estimated by the NASGRO approach. Material properties of the typical axle steel EA4T were considered in the numerical calculations and lifetime estimation.
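
    As a schematic of the residual-life integration (a plain Paris law standing in for the full NASGRO relation, with placeholder constants rather than EA4T data and a constant-amplitude block instead of the real load spectrum):

      import numpy as np

      C, m_exp = 2.0e-13, 3.0      # assumed Paris constants (mm/cycle, MPa*sqrt(mm))
      Y, sigma = 0.7, 80.0         # assumed geometry factor and stress range, MPa

      a = np.linspace(1.0, 40.0, 4000)          # crack depth, initial to critical, mm
      dK = Y * sigma * np.sqrt(np.pi * a)       # stress intensity factor range
      dN_da = 1.0 / (C * dK ** m_exp)           # cycles needed per mm of growth
      da = a[1] - a[0]
      life = ((dN_da[:-1] + dN_da[1:]) / 2.0).sum() * da   # trapezoidal integral
      print(f"residual life ~ {life:.3g} cycles")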

  10. Parameter estimation in plasmonic QED

    NASA Astrophysics Data System (ADS)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond modelled as a qubit. Our goal is to estimate the β factor measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, and therefore delays its vanishing. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when initially the NVCs are maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe at any arbitrary time considerably enhances the precision of estimation in comparison with one-qubit estimation.

  11. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models express the expected value as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  12. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation of an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique
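
    The breeding cycle itself is short enough to sketch; below it runs on Lorenz-63 as a stand-in for the mesoscale model (all parameters arbitrary): integrate a control and a perturbed run, rescale their difference to a fixed amplitude, and repeat until the perturbation aligns with the fast-growing directions.

      import numpy as np

      def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
          dx = np.array([s * (x[1] - x[0]),
                         x[0] * (r - x[2]) - x[1],
                         x[0] * x[1] - b * x[2]])
          return x + dt * dx

      rng = np.random.default_rng(8)
      x = np.array([1.0, 1.0, 20.0])
      pert = 1e-3 * rng.normal(size=3)              # random initial perturbation
      amp = np.linalg.norm(pert)                    # fixed breeding amplitude

      for _ in range(50):                           # breeding cycles
          xc, xp = x.copy(), x + pert
          for _ in range(100):                      # breeding interval
              xc, xp = lorenz_step(xc), lorenz_step(xp)
          diff = xp - xc
          pert = amp * diff / np.linalg.norm(diff)  # rescale, keep the structure
          x = xc
      print("bred vector:", pert / np.linalg.norm(pert))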

  13. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation

    PubMed Central

    Delorenzi, Mauro

    2014-01-01

    Background With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
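
    The headline effect is easy to reproduce on toy data; in the sketch below (Gaussian features, batch fully confounded with class, assumptions throughout) pooled cross-validation rewards the batch signal while an independent batch-free test set does not.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(9)
      n, p = 200, 50
      y = np.repeat([0, 1], n // 2)
      X = rng.normal(size=(n, p))
      X[y == 1, :5] += 0.3                 # weak true class signal
      X += 1.0 * y[:, None]                # batch shift, fully confounded with class

      cv_acc = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5).mean()

      X_new = rng.normal(size=(n, p))      # independent data without batch effects
      y_new = np.repeat([0, 1], n // 2)
      X_new[y_new == 1, :5] += 0.3
      ext_acc = LogisticRegression(max_iter=2000).fit(X, y).score(X_new, y_new)
      print(f"cross-validated: {cv_acc:.2f}, independent data: {ext_acc:.2f}")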

  14. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
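
    The underlying change-of-variables fact is worth a five-line check: a density uniform in one parameterization is generally non-uniform after a nonlinear reparameterization, via p_y(y) = p_x(x(y)) |dx/dy|. Here a density uniform in range rho becomes non-uniform in y = rho^2 (toy numbers).

      import numpy as np

      rho = np.linspace(1.0, 2.0, 5)
      p_rho = np.ones_like(rho)                # uniform "admissible region" in rho
      y = rho ** 2
      p_y = p_rho / (2.0 * rho)                # |d rho / d y| = 1 / (2 rho)
      print(np.c_[y, p_y])                     # density now varies across y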

  15. Cost effectiveness of the Oregon quitline "free patch initiative".

    PubMed

    Fellows, Jeffrey L; Bush, Terry; McAfee, Tim; Dickerson, John

    2007-12-01

    We estimated the cost effectiveness of the Oregon tobacco quitline's "free patch initiative" compared to the pre-initiative programme. Using quitline utilisation and cost data from the state, intervention providers and patients, we estimated annual programme use and costs for media promotions and intervention services. We also estimated annual quitline registration calls and the number of quitters and life years saved for the pre-initiative and free patch initiative programmes. Service utilisation and 30-day abstinence at six months were obtained from 959 quitline callers. We compared the cost effectiveness of the free patch initiative (media and intervention costs) to the pre-initiative service offered to insured and uninsured callers. We conducted sensitivity analyses on key programme costs and outcomes by estimating a best case and worst case scenario for each intervention strategy. Compared to the pre-intervention programme, the free patch initiative doubled registered calls, increased quitting fourfold and reduced total costs per quit by $2688. We estimated annual paid media costs were $215 per registered tobacco user for the pre-initiative programme and less than $4 per caller during the free patch initiative. Compared to the pre-initiative programme, incremental quitline promotion and intervention costs for the free patch initiative were $86 (range $22-$353) per life year saved. Compared to the pre-initiative programme, the free patch initiative was a highly cost effective strategy for increasing quitting in the population.

  16. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.

  17. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070
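
    The final bookkeeping step is simple enough to show, which is also why modest boundary errors perturb the result so little: mean organ dose is the average of the Monte Carlo dose map over the voxels the segmentation assigns to the organ. The arrays below are toy stand-ins for the dose map and label volume.

      import numpy as np

      rng = np.random.default_rng(10)
      dose_map = rng.gamma(2.0, 1.5, (64, 64, 64))   # stand-in dose map, mGy
      labels = np.zeros((64, 64, 64), dtype=int)
      labels[20:40, 20:40, 20:40] = 3                # "organ 3" from autosegmentation

      mean_dose = dose_map[labels == 3].mean()       # mean organ dose
      print(f"mean dose, organ 3: {mean_dose:.2f} mGy")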

  18. Obtaining the Grobner Initialization for the Ground Flash Fraction Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Solakiewicz, R.; Attele, R.; Koshak, W.

    2011-01-01

    At optical wavelengths and from the vantage point of space, the multiple scattering cloud medium obscures one's view and prevents one from easily determining which flashes strike the ground. However, recent investigations have made some progress examining the (easier, but still difficult) problem of estimating the ground flash fraction in a set of N flashes observed from space. In the study by Koshak, a Bayesian inversion method was introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function of three variables (one of which is the ground flash fraction) was minimized by a numerical method. This method has formed the basis of a Ground Flash Fraction Retrieval Algorithm (GoFFRA) that is being tested as part of GOES-R GLM risk reduction.
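
    To make the mixed-exponential idea concrete, the sketch below fits a two-component exponential mixture to a synthetic optical feature and reads the mixing weight off as the ground flash fraction; this is a stand-in for, not a reproduction of, the paper's three-variable minimization, and all rates are invented.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(11)
      x = np.concatenate([rng.exponential(1 / 2.0, 300),    # "ground" component
                          rng.exponential(1 / 0.5, 700)])   # "cloud" component

      def nll(theta):
          a, lg, lc = theta
          pdf = a * lg * np.exp(-lg * x) + (1 - a) * lc * np.exp(-lc * x)
          return -np.log(pdf).sum()

      res = minimize(nll, x0=(0.5, 1.0, 0.3),
                     bounds=[(1e-3, 1 - 1e-3), (1e-3, None), (1e-3, None)])
      print("estimated ground flash fraction:", round(float(res.x[0]), 3))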

  19. Training to estimate blood glucose and to form associations with initial hunger

    PubMed Central

    Ciampolini, Mario; Bianchi, Riccardo

    2006-01-01

    Background The will to eat is a decision associated with conditioned responses and with unconditioned body sensations that reflect changes in metabolic biomarkers. Here, we investigate whether this decision can be delayed until blood glucose is allowed to fall to low levels, when presumably feeding behavior is mostly unconditioned. Following such an eating pattern might avoid some of the metabolic risk factors that are associated with high glycemia. Results In this 7-week study, patients were trained to estimate their blood glucose at meal times by associating feelings of hunger with glycemic levels determined by standard blood glucose monitors and to eat only when glycemia was < 85 mg/dL. At the end of the 7-week training period, estimated and measured glycemic values were found to be linearly correlated in the trained group (r = 0.82; p = 0.0001) but not in the control (untrained) group (r = 0.10; p = 0.40). Fewer subjects in the trained group were hungry than those in the control group (p = 0.001). The 18 hungry subjects of the trained group had significantly lower glucose levels (80.1 ± 6.3 mg/dL) than the 42 hungry control subjects (89.2 ± 10.2 mg/dL; p = 0.01). Moreover, the trained hungry subjects estimated their glycemia (78.1 ± 6.7 mg/dL; estimation error: 3.2 ± 2.4% of the measured glycemia) more accurately than the control hungry subjects (75.9 ± 9.8 mg/dL; estimation error: 16.7 ± 11.0%; p = 0.0001). Also the estimation error of the entire trained group (4.7 ± 3.6%) was significantly lower than that of the control group (17.1 ± 11.5%; p = 0.0001). A value of glycemia at initial feelings of hunger was provisionally identified as 87 mg/dL. Below this level, estimation showed lower error in both trained (p = 0.04) and control subjects (p = 0.001). Conclusion Subjects could be trained to accurately estimate their blood glucose and to recognize their sensations of initial hunger at low glucose concentrations. These results suggest that it is possible

  20. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  1. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various missions such as astronomical observation and remote sensing. During these missions, the attitude of a small satellite must be stabilized to high accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites must estimate their attitude rate under strict constraints of mass, space, and cost. This research presents a new method by which small satellites can precisely estimate their angular rate from blurred star images acquired by a mission telescope, thereby achieving precise attitude stabilization. In this method, the angular velocity is estimated by assessing how blurred the stars appear in an image. Because the proposed method utilizes existing mission devices, the satellite does not require additional precise rate sensors, which makes precise stabilization easier to achieve under the strict constraints faced by small satellites. The research studied the relationship between estimation accuracy and the parameters of the attitude rate estimation, achieving a precision finer than 1 × 10⁻⁶ rad/s. The method can be applied to any attitude sensor that uses an optical system, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems expected to arise with real small satellites by performing numerical simulations.
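
    As a rough illustration of the blur-based idea, with assumptions and numbers of our own rather than the paper's: if a star streak in an image has length L pixels, the detector plate scale is s rad/pixel, and the exposure lasts T seconds, the cross-boresight angular rate is approximately ω ≈ L·s/T. A minimal sketch:

    ```python
    def angular_rate_from_streak(streak_len_px: float,
                                 plate_scale_rad_per_px: float,
                                 exposure_s: float) -> float:
        """Rough angular-rate estimate from the length of a star streak.

        Assumes the blur is dominated by a constant rotation about an
        axis perpendicular to the boresight during the exposure.
        """
        return streak_len_px * plate_scale_rad_per_px / exposure_s

    # Illustrative numbers (not from the paper): a 2-pixel streak on a
    # detector with a 5 microrad/pixel plate scale over a 10 s exposure.
    omega = angular_rate_from_streak(2.0, 5e-6, 10.0)
    print(f"estimated rate: {omega:.1e} rad/s")   # 1.0e-06 rad/s
    ```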

  2. Urinary cadmium and estimated dietary cadmium in the Women's Health Initiative.

    PubMed

    Quraishi, Sabah M; Adams, Scott V; Shafer, Martin; Meliker, Jaymie R; Li, Wenjun; Luo, Juhua; Neuhouser, Marian L; Newcomb, Polly A

    2016-01-01

    Cadmium, a heavy metal dispersed in the environment as a result of industrial and agricultural applications, has been implicated in several human diseases including renal disease, cancers, and compromised bone health. In the general population, the predominant sources of cadmium exposure are tobacco and diet. Urinary cadmium (uCd) reflects long-term exposure and has been frequently used to assess cadmium exposure in epidemiological studies; estimated dietary intake of cadmium (dCd) has also been used in several studies. The validity of dCd in comparison with uCd is unclear. This study aimed to compare dCd, estimated from food frequency questionnaires, to uCd measured in spot urine samples from 1,002 participants of the Women's Health Initiative. Using linear regression, we found that dCd was not statistically significantly associated with uCd (β=0.006, P-value=0.14). When stratified by smoking status, dCd was not significantly associated with uCd both in never smokers (β=0.006, P-value=0.09) and in ever smokers (β=0.003, P-value=0.67). Our results suggest that because of the lack of association between estimated dCd and measured uCd, dietary estimation of cadmium exposure should be used with caution in epidemiologic studies.

  3. Effect of windowing on lithosphere elastic thickness estimates obtained via the coherence method: Results from northern South America

    NASA Astrophysics Data System (ADS)

    Ojeda, GermáN. Y.; Whitman, Dean

    2002-11-01

    The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning, and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning-smoothed or multitapered grids. Results obtained from mirrored grids are likely to be overestimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ~5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.
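
    The effect of taper choice on coherence estimates is easy to demonstrate in one dimension with scipy.signal.coherence. The sketch below is an analogy of our own (the paper windows 2D gravity and topography grids), but the mechanism is the same: different windows produce different spectral leakage and hence different coherence estimates.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)

    # Two synthetic profiles sharing a common component plus noise,
    # standing in for gravity and topography along a track.
    n = 4096
    common = rng.standard_normal(n)
    x = common + 0.5 * rng.standard_normal(n)
    y = common + 0.5 * rng.standard_normal(n)

    # The window argument plays the role of the mirroring/Hanning/
    # multitaper choice discussed in the paper.
    f_hann, c_hann = signal.coherence(x, y, nperseg=256, window='hann')
    f_box, c_box = signal.coherence(x, y, nperseg=256, window='boxcar')
    print(c_hann.mean(), c_box.mean())
    ```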

  4. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Kunwar Pal, E-mail: k-psingh@yahoo.com; Department of Physics, Shri Venkateshwara University, Gajraula, Amroha, Uttar Pradesh 244236; Arya, Rashmi

    2015-09-14

    We have investigated the effect of initial phase on the error in electron energy obtained using the paraxial approximation, in order to study electron acceleration by a focused laser pulse in vacuum using a three-dimensional test-particle simulation code. The error is obtained by comparing the electron energy for the paraxial approximation and for the seventh-order-corrected description of the fields of a Gaussian laser. The paraxial approximation predicts the wrong laser divergence and the wrong electron escape time from the pulse, which leads to the prediction of higher energy. The error shows strong phase dependence for electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of initial phase even at moderate values of laser spot size. The error shows no initial-phase dependence for a circularly polarized laser pulse.

  5. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multi-atlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all datasets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates, and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support the potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.

  6. The Robustness of Designs for Trials with Nested Data against Incorrect Initial Intracluster Correlation Coefficient Estimates

    ERIC Educational Resources Information Center

    Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.

    2010-01-01

    In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…

  7. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  8. 42 CFR 433.113 - Reduction of FFP for failure to operate a system and obtain initial approval.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 42, Public Health, Vol. 4, revised as of 2010-10-01. Mechanized Claims Processing and Information Retrieval Systems, § 433.113 Reduction of FFP for failure to operate a system and obtain initial approval. (a) Except as waived under § 433.130 or 433.131...

  9. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for linear models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
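
    For readers unfamiliar with the KB filter, a minimal linear Kalman filter predict/update cycle is sketched below. This is the textbook discrete-time form with standard state-space symbols, not the modified filter developed in the thesis:

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, Q, H, R):
        """One predict/update cycle of a linear Kalman filter.

        x, P : prior state estimate and covariance
        z    : new observation
        F, Q : state transition matrix and model-error covariance
        H, R : observation operator and observation-error covariance
        """
        # Predict (forecast step)
        x_f = F @ x
        P_f = F @ P @ F.T + Q
        # Update (analysis step)
        S = H @ P_f @ H.T + R
        K = P_f @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_a = x_f + K @ (z - H @ x_f)
        P_a = (np.eye(len(x)) - K @ H) @ P_f
        return x_a, P_a
    ```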

  10. Challenges in Obtaining Estimates of the Risk of Tuberculosis Infection During Overseas Deployment.

    PubMed

    Mancuso, James D; Geurts, Mia

    2015-12-01

    Estimates of the risk of tuberculosis (TB) infection resulting from overseas deployment among U.S. military service members have varied widely and have been plagued by methodological problems. The purpose of this study was to estimate the incidence of TB infection in the U.S. military resulting from deployment. Three populations were examined: 1) a unit of 2,228 soldiers redeploying from Iraq in 2008, 2) a cohort of 1,978 soldiers followed up over 5 years after basic training at Fort Jackson in 2009, and 3) 6,062 participants in the 2011-2012 National Health and Nutrition Examination Survey (NHANES). The risk of TB infection in the deployed population was low (0.6%; 95% confidence interval [CI]: 0.1-2.3%) and was similar to that of the non-deployed population. The prevalence of latent TB infection (LTBI) in the U.S. population was not significantly different among deployed and non-deployed veterans and those with no military service. The limitations of these retrospective studies highlight the challenge of obtaining valid estimates of risk using retrospective data and the need for a more definitive study. As with civilian long-term travelers, risks for TB infection during deployment are focal in nature, and testing should be targeted to only those at increased risk. © The American Society of Tropical Medicine and Hygiene.

  11. Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.

    PubMed

    King, Leandra; Wakeley, John

    2016-09-01

    We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and it has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.
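
    The shrinkage idea can be caricatured in a few lines. The sketch below is our own schematic, not the authors' estimator: initial TMRCA estimates at the other unlinked loci supply an empirical prior, and the focal-locus estimate is pulled toward its mean with precision weights.

    ```python
    import numpy as np

    def eb_tmrca(initial_estimates: np.ndarray, focal_idx: int,
                 within_locus_var: float) -> float:
        """Schematic empirical Bayes shrinkage of a per-locus TMRCA estimate.

        The spread of estimates at the other unlinked loci serves as an
        empirical prior; the focal-locus estimate is shrunk toward its
        mean. Illustrative only, not the paper's estimator.
        """
        others = np.delete(initial_estimates, focal_idx)
        prior_mean, prior_var = others.mean(), others.var(ddof=1)
        w = prior_var / (prior_var + within_locus_var)   # weight on the data
        return w * initial_estimates[focal_idx] + (1 - w) * prior_mean

    rng = np.random.default_rng(0)
    estimates = rng.gamma(shape=2.0, scale=1.0, size=50)  # hypothetical loci
    print(eb_tmrca(estimates, focal_idx=0, within_locus_var=0.5))
    ```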

  12. Demonstration of precise estimation of polar motion parameters with the global positioning system: Initial results

    NASA Technical Reports Server (NTRS)

    Lichten, S. M.

    1991-01-01

    Data from the Global Positioning System (GPS) were used to determine precise polar motion estimates. Conservatively calculated formal errors of the GPS least squares solution are approx. 10 cm. The GPS estimates agree with independently determined polar motion values from very long baseline interferometry (VLBI) at the 5 cm level. The data were obtained from a partial constellation of GPS satellites and from a sparse worldwide distribution of ground stations. The accuracy of the GPS estimates should continue to improve as more satellites and ground receivers become operational, and eventually a near real time GPS capability should be available. Because the GPS data are obtained and processed independently from the large radio antennas at the Deep Space Network (DSN), GPS estimation could provide very precise measurements of Earth orientation for calibration of deep space tracking data and could significantly relieve the ever growing burden on the DSN radio telescopes to provide Earth platform calibrations.

  13. Parent-Child Communication and Marijuana Initiation: Evidence Using Discrete-Time Survival Analysis

    PubMed Central

    Nonnemaker, James M.; Silber-Ashley, Olivia; Farrelly, Matthew C.; Dench, Daniel

    2012-01-01

    This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or—in the case of youth reports of communication—potentially harmful (leading to increased likelihood of marijuana initiation). PMID:22958867
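
    A minimal sketch of the discrete-time survival setup, on simulated data of our own (statsmodels assumed available): expand subjects into person-period records, fit a logit model for the hazard of initiation, and read the exponentiated coefficient as an approximate relative risk when the hazard is small.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # Simulate person-period data: each person is at risk in periods 1..5;
    # baseline hazard ~0.10, multiplied by exp(0.4) if 'talk' = 1.
    beta_true, periods, n = 0.4, 5, 500
    rows = []
    for _ in range(n):
        talk = rng.integers(0, 2)
        for t in range(1, periods + 1):
            hazard = 1 / (1 + np.exp(-(-2.2 + beta_true * talk)))
            y = rng.random() < hazard
            rows.append({"t": t, "talk": talk, "y": int(y)})
            if y:            # initiation observed: no longer at risk
                break
    pp = pd.DataFrame(rows)

    # Discrete-time hazard via a logit model; exp(coef) approximates the
    # relative risk of initiation.
    X = sm.add_constant(pp[["t", "talk"]].astype(float))
    fit = sm.Logit(pp["y"], X).fit(disp=0)
    print(np.exp(fit.params["talk"]))   # close to exp(0.4)
    ```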

  14. The first step toward genetic selection for host tolerance to infectious pathogens: obtaining the tolerance phenotype through group estimates

    PubMed Central

    Doeschl-Wilson, Andrea B.; Villanueva, Beatriz; Kyriazakis, Ilias

    2012-01-01

    Reliable phenotypes are paramount for meaningful quantification of genetic variation and for estimating individual breeding values on which genetic selection is based. In this paper, we assert that genetic improvement of host tolerance to disease, although desirable, may be first of all handicapped by the ability to obtain unbiased tolerance estimates at a phenotypic level. In contrast to resistance, which can be inferred by appropriate measures of within-host pathogen burden, tolerance is more difficult to quantify, as it refers to change in performance with respect to changes in pathogen burden. For this reason, tolerance phenotypes have only been specified at the level of a group of individuals, where such phenotypes can be estimated using regression analysis. However, few studies have raised the potential bias in these estimates resulting from confounding effects between resistance and tolerance. Using a simulation approach, we demonstrate (i) how these group tolerance estimates depend on within-group variation and co-variation in resistance, tolerance, and vigor (performance in a pathogen-free environment); and (ii) how tolerance estimates are affected by changes in pathogen virulence over the time course of infection and by the timing of measurements. We found that in order to obtain reliable group tolerance estimates, it is important to account for individual variation in vigor, if present, and to ensure that all individuals are at the same stage of infection when measurements are taken. The latter requirement makes estimation of tolerance based on cross-sectional field data challenging, as individuals become infected at different time points and the individual onset of infection is unknown. Repeated individual measurements of within-host pathogen burden and performance would not only be valuable for inferring the infection status of individuals in field conditions, but would also provide tolerance estimates that capture the entire time course of infection. PMID

  15. Parent-child communication and marijuana initiation: evidence using discrete-time survival analysis.

    PubMed

    Nonnemaker, James M; Silber-Ashley, Olivia; Farrelly, Matthew C; Dench, Daniel

    2012-12-01

    This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or - in the case of youth reports of communication - potentially harmful (leading to increased likelihood of marijuana initiation). Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    NASA Astrophysics Data System (ADS)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean with a large number of samples is used in place of the ensemble mean. However, in many situations samples are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted summation of the error variances at all sample altitudes. The weights are based on sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer made from a small boat using radiosondes on a tethered balloon, with which temperature and relative humidity profiles in the lowest 50 m were obtained repeatedly in about 30 minutes. We will present the resulting fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
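
    A simplified version of the weighted-cost idea, fitted to the neutral-stability log wind profile u(z) = (u*/κ) ln(z/z₀) rather than the full MOST profiles; all measurement values and weights below are illustrative, not from the study:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    kappa = 0.4  # von Karman constant

    def cost(params, z, u_obs, weights):
        """Weighted squared-error cost for a neutral log wind profile."""
        u_star, ln_z0 = params
        u_model = (u_star / kappa) * (np.log(z) - ln_z0)
        return np.sum(weights * (u_obs - u_model) ** 2)

    # Illustrative multi-level measurements; weights = 1 / error variance.
    z = np.array([2.0, 5.0, 10.0, 20.0, 50.0])        # heights, m
    u_obs = np.array([3.1, 3.9, 4.5, 5.0, 5.9])       # wind speed, m/s
    weights = 1.0 / np.array([0.2, 0.2, 0.1, 0.1, 0.3]) ** 2

    res = minimize(cost, x0=[0.3, np.log(0.01)], args=(z, u_obs, weights))
    u_star, z0 = res.x[0], np.exp(res.x[1])
    print(u_star, z0)   # friction velocity and roughness length
    ```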

  17. Estimates of the solar internal angular velocity obtained with the Mt. Wilson 60-foot solar tower

    NASA Technical Reports Server (NTRS)

    Rhodes, Edward J., Jr.; Cacciani, Alessandro; Woodard, Martin; Tomczyk, Steven; Korzennik, Sylvain

    1987-01-01

    Estimates are obtained of the solar internal angular velocity from measurements of the frequency splittings of p-mode oscillations. A 16-day time series of full-disk Dopplergrams obtained during July and August 1984 at the 60-foot tower telescope of the Mt. Wilson Observatory is analyzed. Power spectra were computed for all of the zonal, tesseral, and sectoral p-modes from l = 0 to 89 and for all of the sectoral p-modes from l = 90 to 200. A mean power spectrum was calculated for each degree up to 89. The frequency differences of all of the different nonzonal modes were calculated for these mean power spectra.

  18. Urinary Cadmium and Estimated Dietary Cadmium in the Women’s Health Initiative

    PubMed Central

    Quraishi, Sabah M.; Adams, Scott V.; Shafer, Martin; Meliker, Jaymie R.; Li, Wenjun; Luo, Juhua; Neuhouser, Marian L.; Newcomb, Polly A.

    2016-01-01

    Cadmium, a heavy metal dispersed in the environment as a result of industrial and agricultural applications, has been implicated in several human diseases including renal disease, cancers, and compromised bone health. In the general population, the predominant sources of cadmium exposure are tobacco and diet. Urinary cadmium (uCd) reflects long-term exposure and has been frequently used to assess cadmium exposure in epidemiological studies; estimated dietary intake of cadmium (dCd) has also been used in several studies. The validity of dCd in comparison to uCd is unclear. This study aimed to compare dCd, estimated from food frequency questionnaires (FFQs), to uCd measured in spot urine samples from 1,002 participants of the Women’s Health Initiative. Using linear regression, we found that dCd was not statistically significantly associated with uCd (β=0.006, p-value=0.14). When stratified by smoking status, dCd was not significantly associated with uCd both in never smokers (β=0.006, p-value=0.09) and in ever smokers (β=0.003, p-value=0.67). Our results suggest that because of the lack of association between estimated dietary cadmium and measured urinary cadmium, dietary estimation of cadmium exposure should be used with caution in epidemiologic studies. PMID:26015077

  19. Novel angle estimation for bistatic MIMO radar using an improved MUSIC

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Zhang, Xiaofei; Chen, Han

    2014-09-01

    In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
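
    For readers unfamiliar with MUSIC, the sketch below shows the subspace-plus-peak-search machinery in the simplest setting: 1D DOA estimation with a uniform linear array. The paper's bistatic MIMO version applies the same idea with separate one-dimensional searches for DOD and DOA; the simulation values here are our own.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def music_1d(X, n_sources, n_grid=1801):
        """1D MUSIC pseudospectrum for a uniform linear array (ULA) with
        half-wavelength spacing. X is the (n_antennas, n_snapshots)
        complex snapshot matrix."""
        n_ant = X.shape[0]
        R = X @ X.conj().T / X.shape[1]              # sample covariance
        eigvals, eigvecs = np.linalg.eigh(R)         # ascending order
        En = eigvecs[:, : n_ant - n_sources]         # noise subspace
        grid = np.linspace(-90, 90, n_grid)
        m = np.arange(n_ant)
        p = np.empty(n_grid)
        for i, theta in enumerate(grid):
            a = np.exp(1j * np.pi * m * np.sin(np.deg2rad(theta)))
            p[i] = 1.0 / np.real((a.conj() @ En) @ (En.conj().T @ a))
        return grid, p

    # Two simulated sources at -20 and 30 degrees (illustrative values).
    rng = np.random.default_rng(0)
    n_ant, snap = 8, 200
    A = np.exp(1j * np.pi * np.outer(np.arange(n_ant),
                                     np.sin(np.deg2rad([-20, 30]))))
    S = rng.standard_normal((2, snap)) + 1j * rng.standard_normal((2, snap))
    N = rng.standard_normal((n_ant, snap)) + 1j * rng.standard_normal((n_ant, snap))
    X = A @ S + 0.3 * N

    grid, p = music_1d(X, n_sources=2)
    peaks, _ = find_peaks(p)
    print(np.sort(grid[peaks[np.argsort(p[peaks])[-2:]]]))   # ~[-20, 30]
    ```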

  20. Energy and maximum norm estimates for nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Olsson, Pelle; Oliger, Joseph

    1994-01-01

    We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools used to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing-viscosity solutions. As a byproduct, we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.
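
    A common form of such a flux splitting (not necessarily the exact one used by the authors) uses the flux Jacobian A(u) = ∂f/∂u:

    ```latex
    % Canonical (skew-symmetric) splitting of the flux derivative; for
    % smooth solutions f(u)_x = A(u) u_x, so the two sides agree:
    \[
      f(u)_x \;=\; \tfrac{1}{2}\, f(u)_x \;+\; \tfrac{1}{2}\, A(u)\, u_x .
    \]
    % Multiplying by u^T and integrating by parts then controls the
    % quadratic terms through boundary contributions, which is how energy
    % estimates can be obtained without bounds on the gradient u_x.
    ```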

  1. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume
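
    The voxel S value technique itself amounts to a convolution of the cumulated-activity map with a radionuclide-specific dose kernel. A minimal sketch with an invented kernel (a real kernel is tabulated per radionuclide and voxel size from Monte Carlo transport):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Cumulated activity map (Bq·s per voxel); illustrative random volume.
    rng = np.random.default_rng(0)
    activity = rng.random((64, 64, 64))

    # Hypothetical voxel S value kernel (mGy per Bq·s): dose deposited in
    # a voxel at a given offset from a unit source voxel.
    k = np.zeros((5, 5, 5))
    center = 2
    for idx in np.ndindex(k.shape):
        r = np.linalg.norm(np.subtract(idx, center))
        k[idx] = 1.0 if r == 0 else 0.05 / r**2   # crude 1/r^2 falloff

    # Voxel S value dose estimate = convolution of activity with kernel.
    dose = fftconvolve(activity, k, mode="same")
    print(dose.shape, dose.max())
    ```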

  2. Estimate of Shock-Hugoniot Adiabat of Liquids from Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bouton, E.; Vidal, P.

    2007-12-01

    Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.

  3. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
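
    A concrete instance of the procedure for a two-component 1D mixture is sketched below; alpha = 1 gives the standard successive-approximations (EM-type) update, and other values in (0, 2) give the relaxed updates the paper analyzes. Data and starting values are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    def em_mixture_1d(x, n_iter=200, alpha=1.0):
        """EM-type estimation of a two-component 1D normal mixture.

        alpha is the step size of the generalized procedure:
        theta <- theta + alpha * (EM update - theta); alpha = 1 recovers
        the standard successive-approximations procedure."""
        w = np.array([0.5, 0.5])
        mu = np.percentile(x, [25, 75])
        sd = np.array([x.std(), x.std()])
        theta = np.concatenate([w, mu, sd])
        for _ in range(n_iter):
            w, mu, sd = theta[:2], theta[2:4], theta[4:6]
            p = w * norm.pdf(x[:, None], mu, sd)      # E-step:
            r = p / p.sum(axis=1, keepdims=True)      # responsibilities
            w_new = r.mean(axis=0)                    # M-step: EM update
            mu_new = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
            sd_new = np.sqrt((r * (x[:, None] - mu_new) ** 2).sum(axis=0)
                             / r.sum(axis=0))
            em = np.concatenate([w_new, mu_new, sd_new])
            theta = theta + alpha * (em - theta)      # relaxed update
        return theta

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1, 600)])
    print(em_mixture_1d(x, alpha=1.0))   # [weights, means, std devs]
    ```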

  4. Age of smoking initiation among adolescents in Africa.

    PubMed

    Veeranki, Sreenivas P; John, Rijo M; Ibrahim, Abdallah; Pillendla, Divya; Thrasher, James F; Owusu, Daniel; Ouma, Ahmed E O; Mamudu, Hadii M

    2017-01-01

    To estimate prevalence and identify correlates of age of smoking initiation among adolescents in Africa. Data (n = 16,519) were obtained from nationally representative Global Youth Tobacco Surveys in nine West African countries. The study outcome was adolescents' age of smoking initiation categorized into six groups: ≤7, 8 or 9, 10 or 11, 12 or 13, 14 or 15, and never-smoker. Explanatory variables included sex, parental or peer smoking behavior, exposure to tobacco industry promotions, and knowledge about smoking harm. Weighted multinomial logit models were used to determine correlates associated with adolescents' age of smoking initiation. Age of smoking initiation was as early as ≤7 years; prevalence estimates ranged from 0.7% in Ghana at 10 or 11 years of age to 9.6% in Cote d'Ivoire at 12 or 13 years of age. Male sex, exposure to parental or peer smoking, and exposure to industry promotions were identified as significant correlates. West African policymakers should adopt a preventive approach consistent with the World Health Organization Framework Convention on Tobacco Control to prevent adolescents from initiating smoking and developing into future regular smokers.

  5. A computer procedure to analyze seismic data to estimate outcome probabilities in oil exploration, with an initial application in the tabasco region of southeastern Mexico

    NASA Astrophysics Data System (ADS)

    Berlanga, Juan M.; Harbaugh, John W.

    The Tabasco region contains a number of major oilfields, including some of the emerging "giant" oil fields which have received extensive publicity. Fields in the Tabasco region are associated with large geologic structures which are detected readily by seismic surveys. The structures seem to be associated with deep-seated movement of salt, and they are complexly faulted. Some structures have as much as 1000 milliseconds of relief on seismic lines. That part of the Tabasco region that has been studied was surveyed with a close-spaced rectilinear network of seismic lines. A study interpreting the structure of the area initially used only a fraction of the total seismic data available. The purpose was to compare "predictions" of reflection time based on widely spaced seismic lines with "results" obtained along more closely spaced lines. This process of comparison simulates the sequence of events in which a reconnaissance network of seismic lines is used to guide a succession of progressively more closely spaced lines. A square gridwork was established with lines spaced at 10 km intervals and, using machine contour maps, the results were compared with those obtained with seismic grids employing spacings of 5 and 2.5 km, respectively. The comparisons of predictions based on widely spaced lines with observations along closely spaced lines provide information by which an error function can be established. The error at any point can be defined as the difference between the predicted value for that point and the subsequently observed value at that point. Residuals obtained by fitting third-degree polynomial trend surfaces were used for comparison. The root mean square of the error measurement (expressed in seconds or milliseconds of reflection time) was found to increase more or less linearly with distance from the nearest seismic point. Oil-occurrence probabilities were established on
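
    The trend-surface residual computation is straightforward to reproduce in outline. The sketch below fits a third-degree polynomial surface by least squares to hypothetical reflection-time picks and returns the residuals whose RMS would be examined against distance from the nearest seismic point:

    ```python
    import numpy as np

    def trend_surface_residuals(x, y, z, degree=3):
        """Fit a polynomial trend surface z = f(x, y) by least squares
        and return the residuals (observed minus trend)."""
        terms = [x**i * y**j
                 for i in range(degree + 1)
                 for j in range(degree + 1 - i)]
        A = np.column_stack(terms)
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return z - A @ coeffs

    # Hypothetical reflection-time picks (seconds) at scattered points (km).
    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 50, 200), rng.uniform(0, 50, 200)
    z = (1.5 + 0.01 * x - 0.005 * y + 0.02 * np.sin(x / 8)
         + rng.normal(0, 0.01, 200))
    res = trend_surface_residuals(x, y, z)
    print(np.sqrt(np.mean(res**2)))   # RMS residual in seconds
    ```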

  6. LC-MS/MS-based approach for obtaining exposure estimates of metabolites in early clinical trials using radioactive metabolites as reference standards.

    PubMed

    Zhang, Donglu; Raghavan, Nirmala; Chando, Theodore; Gambardella, Janice; Fu, Yunlin; Zhang, Duxi; Unger, Steve E; Humphreys, W Griffith

    2007-12-01

    An LC-MS/MS-based approach that employs authentic radioactive metabolites as reference standards was developed to estimate metabolite exposures in early drug development studies. This method is useful for estimating metabolite levels in studies done with non-radiolabeled compounds where metabolite standards are not available to allow standard LC-MS/MS assay development. A metabolite mixture obtained from an in vivo source treated with a radiolabeled compound was partially purified, quantified, and spiked into human plasma to provide metabolite standard curves. Metabolites were analyzed by LC-MS/MS using the specific mass transitions and an internal standard. The metabolite concentrations determined by this approach were found to be comparable to those determined by validated LC-MS/MS assays. This approach does not require synthesis of authentic metabolites or knowledge of the exact structures of metabolites, and therefore should provide a useful method for obtaining early estimates of circulating metabolites in early clinical or toxicological studies.

  7. Estimate of shock-Hugoniot adiabat of liquids from hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouton, E.; Vidal, P.

    2007-12-12

    Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.

  8. Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.

    PubMed

    Shireman, Emilie; Steinley, Douglas; Brusco, Michael J

    2017-02-01

    Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
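
    The practical consequence, namely that different starting values can leave EM at different local optima, is easy to demonstrate with scikit-learn, which exposes initialization strategies of the kind the paper compares:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (300, 2)), rng.normal(3, 1, (700, 2))])

    # Each fit runs EM from a different set of starting values, so the
    # final log-likelihood can differ when EM settles in a local optimum.
    for init in ("kmeans", "random"):
        gm = GaussianMixture(n_components=2, init_params=init,
                             n_init=1, random_state=0).fit(X)
        print(init, gm.lower_bound_)   # per-sample log-likelihood bound
    ```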

  9. Can lagrangian models reproduce the migration time of European eel obtained from otolith analysis?

    NASA Astrophysics Data System (ADS)

    Rodríguez-Díaz, L.; Gómez-Gesteira, M.

    2017-12-01

    European eel can be found in the Bay of Biscay after a long migration across the Atlantic. The duration of this migration, which takes place at the larval stage, is of primary importance for understanding eel ecology and, hence, its survival. This duration is still a controversial matter, since estimates range from 7 months to > 4 years depending on the method used. The minimum migration duration estimated from our lagrangian model is similar to the duration obtained from the microstructure of eel otoliths, which is typically on the order of 7-9 months. The lagrangian model proved to be sensitive to conditions such as spatial and temporal resolution, release depth, release area, and initial distribution. In general, migration was faster at shallower release depths and higher model resolution. On average, the fastest migration was obtained when only advective horizontal movement was considered. However, in some cases even faster migration was obtained when locally oriented random migration was taken into account.

  10. Methods of Estimating Initial Crater Depths on Icy Satellites using Stereo Topography

    NASA Astrophysics Data System (ADS)

    Persaud, D. M.; Phillips, C. B.

    2014-12-01

    Stereo topography, combined with models of viscous relaxation of impact craters, allows for the study of the rheology and thermal history of icy satellites. An important step in calculating relaxation of craters is determining the initial depths of craters before viscous relaxation. Two methods for estimating initial crater depths on the icy satellites of Saturn have been previously discussed. White and Schenk (2013) treat the craters of Iapetus as relatively unrelaxed when modeling the relaxation of craters on Rhea. Phillips et al. (2013) assume that Herschel crater on Saturn's satellite Mimas is unrelaxed in relaxation calculations and models for Rhea and Dione. In the second method, the depth of Herschel crater is scaled based on the different crater diameters and the difference in surface gravity on the large moons to predict initial crater depths for Rhea and Dione. In the first method, since Iapetus is of similar size to Dione and Rhea, no gravity scaling is necessary; craters of similar size on Iapetus were chosen and their depths measured to determine the appropriate initial crater depths for Rhea. We test these methods by first extracting topographic profiles of impact craters on Iapetus from digital elevation models (DEMs) constructed from stereo images from the Cassini ISS instrument. We determined depths from these profiles and used them to calculate initial crater depths and relaxation percentages for Rhea and Dione craters using the methods described above. We first assumed that craters on Iapetus were relaxed, and compared the results to previously calculated relaxation percentages for Rhea and Dione relative to Herschel crater (with appropriate scaling for gravity and crater diameter). We then tested the assumption that craters on Iapetus were unrelaxed and used our new measurements of crater depth to determine relaxation percentages for Dione and Rhea. We will present results and conclusions from both methods and discuss their efficacy for

  11. Montana rest area usage : data acquisition and usage estimation.

    DOT National Transportation Integrated Search

    2011-02-01

    The Montana Department of Transportation (MDT) has initiated research to refine the figures employed in the : estimation of Montana rest area use. This work seeks to obtain Montana-specific data related to rest area usage, : including water flow, eff...

  12. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows imaging the response parameter content as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure, which are used to estimate the response function based on the HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated from the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter minimises the estimation bias caused by the non-stationary characteristics of the MT data.
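
    The instantaneous-attribute machinery underlying the HHT is compact. The sketch below applies the Hilbert transform to a single intrinsic mode function (IMF); the empirical mode decomposition that produces IMFs from raw MT data is assumed to have been done elsewhere (e.g., by a third-party EMD package), and a chirp stands in for an IMF:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Instantaneous amplitude and frequency of one IMF via the Hilbert
    # transform -- the second stage of the HHT.
    fs = 1000.0
    t = np.arange(0, 2.0, 1 / fs)
    imf = np.cos(2 * np.pi * (5 * t + 3 * t**2))   # frequency rises 5->17 Hz

    analytic = hilbert(imf)
    inst_amplitude = np.abs(analytic)
    inst_phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(inst_phase) / (2 * np.pi) * fs   # Hz

    print(inst_freq[100], inst_freq[-100])   # ~5.6 Hz early, ~16.4 Hz late
    ```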

  13. Obtaining Parts

    Science.gov Websites

    The Cosmic Connection: parts for the Berkeley Detector. Lists suppliers (e.g., Eljen Technology for the scintillator) from which to obtain the components needed to build the Berkeley Detector; these companies have helped previous builders. The estimated cost to build a detector varies from $1500 to $2700 depending

  14. Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi

    2017-11-01

    In this paper we design the following two-step scheme to estimate the model parameter ω₀ of a quantum system: first we utilize the Fisher information with respect to an intermediate variable v = cos(ω₀t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second we explore how to estimate ω₀ from v by choosing t when a priori knowledge of ω₀ is available. Our optimal initial state can achieve the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information of the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008
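
    How information about v translates into information about ω₀ follows from the standard reparameterization rule for Fisher information; the identity below is a textbook chain-rule step shown for concreteness, not a derivation taken from the paper:

    ```latex
    % Reparameterization of Fisher information from v to \omega_0, with
    % v = \cos(\omega_0 t):
    \[
      F(\omega_0) \;=\; \left(\frac{\partial v}{\partial \omega_0}\right)^{2} F(v)
                 \;=\; t^{2}\sin^{2}(\omega_0 t)\, F(v),
    \]
    % so for fixed F(v) the information about \omega_0 is largest where
    % t \sin(\omega_0 t) is large, which is why the choice of t matters.
    ```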

  15. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  16. [Radiance Simulation of BUV Hyperspectral Sensor on Multi Angle Observation, and Improvement to Initial Total Ozone Estimating Model of TOMS V8 Total Ozone Algorithm].

    PubMed

    Lü, Chun-guang; Wang, Wei-he; Yang, Wen-bo; Tian, Qing-iju; Lu, Shan; Chen, Yun

    2015-11-01

    A new hyperspectral sensor to measure total ozone is being considered for a geostationary orbit platform, because local tropospheric ozone pollution and the diurnal variation of ozone are receiving more and more attention. Sensors carried on geostationary satellites frequently obtain images at large observation angles, which places higher demands on total ozone retrieval for these observation geometries. The TOMS V8 algorithm is mature and widely used for low-orbit ozone-detecting sensors, but it still lacks accuracy at large observation geometries; therefore, improving the accuracy of total ozone retrieval remains an urgent problem. Using the moderate-resolution atmospheric transmission code MODTRAN, synthetic UV backscatter radiance in the spectral region from 305 to 360 nm was simulated for clear sky, multiple angles (12 solar zenith angles and view zenith angles), and 26 standard profiles, and the correlation and trends between atmospheric total ozone and backscattered UV radiance were analyzed based on the resulting data. From these data, a new modified initial total ozone estimation model for the TOMS V8 algorithm is constructed in order to improve the initial total ozone estimation accuracy at large observation geometries. The analysis of total ozone and simulated UV backscatter radiance shows that the radiance at 317.5 nm (R317.5) decreases as total ozone rises. At small solar zenith angle (SZA) and constant total ozone, R317.5 decreases with increasing view zenith angle (VZA) but increases at large SZA. Comparison of two fitting models shows that, except when both SZA and VZA are large (> 80°), the exponential and logarithmic fitting models both show high fitting precision (R² > 0.90), and the precision of both decreases as SZA and VZA rise. In most cases, the precision of the logarithm fitting

  17. A convenient method of obtaining percentile norms and accompanying interval estimates for self-report mood scales (DASS, DASS-21, HADS, PANAS, and sAD).

    PubMed

    Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka

    2009-06-01

    A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. The aim was to bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
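
    The core computation, a point percentile rank plus an interval that reflects the finite normative sample, can be sketched as follows. This is a generic illustration (mid-probability rank, Clopper-Pearson interval), not the program's exact classical or Bayesian procedures:

    ```python
    import numpy as np
    from scipy import stats

    def percentile_rank(norm_scores, raw):
        """Point and 95% interval estimate of the percentile rank of a
        raw score against a normative sample."""
        norm_scores = np.asarray(norm_scores)
        n = norm_scores.size
        below = np.sum(norm_scores < raw)
        equal = np.sum(norm_scores == raw)
        point = 100.0 * (below + 0.5 * equal) / n
        # Clopper-Pearson interval for the proportion scoring below raw.
        lo = 100.0 * stats.beta.ppf(0.025, below, n - below + 1) if below else 0.0
        hi = 100.0 * stats.beta.ppf(0.975, below + 1, n - below) if below < n else 100.0
        return point, (lo, hi)

    rng = np.random.default_rng(0)
    norms = rng.normal(10, 4, 1500).round()   # hypothetical normative sample
    print(percentile_rank(norms, raw=16))
    ```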

  18. Obtaining Crack-free WC-Co Alloys by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Khmyrov, R. S.; Safronov, V. A.; Gusarov, A. V.

    Standard hardmetals of the WC-Co system are brittle and often crack during selective laser melting (SLM). The objective of this study is to estimate the range of WC/Co ratios over which cracking can be avoided. Micron-sized Co powder was mixed with WC nanopowder in a ball mill to obtain a uniform distribution of WC over the surface of the Co particles. Continuous layers of remelted material on the surface of a hardmetal plate were obtained from this composite powder by SLM at 1.07 μm wavelength. The layers have satisfactory porosity and are well bound to the substrate. The chemical composition of the layers matches the composition of the initial powder mixtures. The powder mixture with 25 wt.% WC can be used for SLM to obtain materials without cracks. The powder mixture with 50 wt.% WC cracks because of the formation of the brittle W3Co3C phase. Cracking can considerably reduce the mechanical strength, so the use of this composition is not advised.

  19. Estimating the gravitational-wave content of initial-data sets for numerical relativity using the Beetle--Burko scalar

    NASA Astrophysics Data System (ADS)

    Burko, Lior M.

    2006-04-01

    The Beetle-Burko radiation scalar is a gauge-independent, tetrad-independent, and background-independent quantity that depends only on the radiative degrees of freedom where the notion of radiation is incontrovertible, and it can be computed from spatial data as is typical in numerical relativity simulations, even for strongly dynamical spacetimes. We show that the Beetle-Burko radiation scalar can be used for estimating the gravitational-wave content of initial-data sets in numerical relativity, and can thus be useful for the construction of physically meaningful ones and for the identification of "junk" data on the initial value surface. We apply this method to the case of a momentarily stationary black hole binary, and demonstrate how the Beetle-Burko scalar distinguishes between Misner and Brill-Lindquist initial data. The method, however, is robust, and is applicable to generic initial data sets. In addition to initial data sets, the Beetle-Burko radiation scalar is equally applicable to evolution data.

  20. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (also a frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
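
    Of the techniques examined, the nonparametric bootstrap is the easiest to illustrate. The sketch below resamples hypothetical per-subject thresholds to interval-estimate the group mean; the NASA analyses operate on the underlying response data rather than on per-subject thresholds:

    ```python
    import numpy as np

    def bootstrap_threshold_ci(thresholds, n_boot=10000, alpha=0.05):
        """Nonparametric bootstrap CI for a group-average threshold.

        thresholds: per-subject threshold estimates from the group test.
        """
        rng = np.random.default_rng(0)
        t = np.asarray(thresholds)
        boots = rng.choice(t, size=(n_boot, t.size), replace=True).mean(axis=1)
        lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
        return t.mean(), (lo, hi)

    # Hypothetical per-subject thresholds in dB.
    print(bootstrap_threshold_ci([42.0, 45.5, 39.8, 44.1, 41.3, 43.7]))
    ```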

  1. Greenhouse gases inventory and carbon balance of two dairy systems obtained from two methane-estimation methods.

    PubMed

    Cunha, C S; Lopes, N L; Veloso, C M; Jacovine, L A G; Tomich, T R; Pereira, L G R; Marcondes, M I

    2016-11-15

    The adoption of carbon inventories for dairy farms in tropical countries based on models developed from animals and diets of temperate climates is questionable. Thus, the objectives of this study were to estimate enteric methane (CH4) emissions through the SF6 tracer gas technique and through equations proposed by the Intergovernmental Panel on Climate Change (IPCC) Tier 2, and to calculate the inventory of greenhouse gas (GHG) emissions from two dairy systems. In addition, the carbon balance of these properties was estimated using enteric CH4 emissions obtained with both methodologies. In trial 1, CH4 emissions were estimated for seven Holstein dairy cattle categories based on the SF6 tracer gas technique and on the IPCC equations. The categories used in the study were prepubertal heifers (n=6); pubertal heifers (n=4); pregnant heifers (n=5); high-producing (n=6); medium-producing (n=5); low-producing (n=4); and dry cows (n=5). Enteric methane emission was higher for the prepubertal heifer category when estimated by the equations proposed by the IPCC Tier 2. However, higher CH4 emissions were estimated by the SF6 technique in the categories including medium- and high-producing cows and dry cows. Pubertal heifers, pregnant heifers, and low-producing cows had equal CH4 emissions as estimated by both methods. In trial 2, two dairy farms were monitored for one year to identify all activities that contributed in any way to GHG emissions. The total emission from Farm 1 was 3.21 t CO2e/animal/yr, of which 1.63 t corresponded to enteric CH4. Farm 2 emitted 3.18 t CO2e/animal/yr, with 1.70 t of enteric CH4. IPCC estimates can underestimate CH4 emissions from some categories while overestimating others. However, considering the whole property, these discrepancies are offset, and we would submit that the equations suggested by the IPCC properly estimate the total CH4 emission and carbon balance of the properties. Thus, the IPCC equations should be utilized with

  2. Hybrid method to estimate two-layered superficial tissue optical properties from simulated data of diffuse reflectance spectroscopy.

    PubMed

    Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin

    2018-04-20

    An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contains a novel initial estimation step to obtain an initial guess, which is used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table is used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
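
    The two-step structure, a lookup-table nearest neighbor for the initial guess followed by iterative fitting started from it, can be sketched generically. The forward model below is a synthetic stand-in of our own, not the paper's two-layer DRS model:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Forward model stand-in: maps parameters to a reflectance spectrum.
    # In the paper this role is played by precomputed two-layer spectra.
    wl = np.linspace(450, 650, 60)                     # wavelengths, nm
    def forward(params):
        mua, musp, thickness = params
        return np.exp(-mua * wl / 500) * musp / (musp + 0.1 * thickness)

    # Step 1: initial estimation via nearest neighbor in a coarse LUT.
    grid = [(a, s, t) for a in np.linspace(0.5, 2, 8)
                      for s in np.linspace(0.5, 2, 8)
                      for t in np.linspace(0.1, 1, 8)]
    lut = np.array([forward(p) for p in grid])

    rng = np.random.default_rng(0)
    measured = forward((1.2, 0.9, 0.4)) + rng.normal(0, 0.002, wl.size)
    x0 = grid[int(np.argmin(((lut - measured) ** 2).sum(axis=1)))]

    # Step 2: iterative fitting, started from the LUT-based initial guess.
    fit = least_squares(lambda p: forward(p) - measured, x0)
    print(x0, fit.x)
    ```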

  3. Estimating the Impact of Earlier ART Initiation and Increased Testing Coverage on HIV Transmission among Men Who Have Sex with Men in Mexico using a Mathematical Model.

    PubMed

    Caro-Vega, Yanink; del Rio, Carlos; Lima, Viviane Dias; Lopez-Cervantes, Malaquias; Crabtree-Ramirez, Brenda; Bautista-Arredondo, Sergio; Colchero, M Arantxa; Sierra-Madero, Juan

    2015-01-01

    To estimate the impact of late ART initiation on HIV transmission among men who have sex with men (MSM) in Mexico, an HIV transmission model was built to estimate the number of infections transmitted by HIV-infected MSM (MSM-HIV+) in the short and long term. Sexual risk behavior data were estimated from a nationwide study of MSM. CD4+ counts at ART initiation from a representative national cohort were used to estimate time since infection. The numbers of MSM-HIV+ on treatment and suppressed were estimated from surveillance and government reports. A status quo scenario (SQ) and scenarios of early ART initiation and increased HIV testing were modeled. We estimated 14239 new HIV infections per year from MSM-HIV+ in Mexico. In SQ, MSM take an average of 7.4 years since infection to initiate treatment, with a median CD4+ count of 148 cells/mm3 (25th-75th percentiles 52-266). In SQ, 68% of MSM-HIV+ are not aware of their HIV status and transmit 78% of new infections. Increasing the CD4+ count at ART initiation to 350 cells/mm3 shortened the time since infection to 2.8 years. Increasing HIV testing to cover 80% of undiagnosed MSM resulted in a reduction of 70% in new infections over 20 years. With ART initiated at 500 cells/mm3 and increased HIV testing, the reduction would be 75% over 20 years. A substantial number of new HIV infections in Mexico are transmitted by undiagnosed and untreated MSM-HIV+. An aggressive increase in HIV testing coverage and initiating ART at a CD4 count of 500 cells/mm3 in this population would significantly benefit individuals and decrease the number of new HIV infections in Mexico.

  4. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  5. Iterative initial condition reconstruction

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias

    2017-07-01

    Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc^-1. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc^-1, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
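
    The core move-back step is compact enough to sketch. The fragment below, a simplified single iteration under our own naming (not the authors' code), estimates a Zel'dovich-like displacement from the smoothed density on a periodic grid; the full algorithm repeats this with a shrinking smoothing scale and finally takes the divergence of the cumulative displacement.

```python
# Simplified sketch of one reconstruction iteration on a periodic grid:
# estimate the displacement field from the smoothed overdensity `delta` and
# move objects back along it. All names are ours, not the authors' code.
import numpy as np

def displacement_from_density(delta, boxsize, smoothing):
    n = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smoothing**2)  # Gaussian smoothing
    # psi = -grad(phi) with laplacian(phi) = delta  =>  psi_k = i k delta_k / k^2
    return [np.real(np.fft.ifftn(1j * ki * dk / k2)) for ki in (kx, ky, kz)]

# Each iteration: interpolate -psi at the object positions, shift the objects,
# re-deposit them on the grid, and repeat with a smaller `smoothing` until the
# catalog is nearly uniform.
```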

  6. Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor

    NASA Technical Reports Server (NTRS)

    Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)

    1980-01-01

    The problem of determining stratum variances needed for achieving an optimum sample allocation in crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using the existing and easily available information of historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. The proposed technique is shown to be viable and to perform satisfactorily, with the use of a conservative value for the field size and crop statistics from the small political subdivision level, when the estimated stratum variances are compared to those obtained using the LANDSAT data.

  7. Estimated intensity of the EMP from lightning discharges necessary for elves initiation based on balloon experiment

    NASA Astrophysics Data System (ADS)

    Kondo, S.; Yoshida, A.; Takahashi, Y.; Chikada, S.; Adachi, T.; Sakanoi, T.

    2007-12-01

    Transient optical phenomena in the mesosphere and lower ionosphere called transient luminous events (TLEs) have been investigated extensively since their first discovery in 1989. In the lower ionosphere, elves are generated by the electromagnetic pulses (EMPs) radiated from intense lightning currents. In ground-based observation, cameras cannot always identify the occurrence of elves because elves emission is sometimes reduced significantly by the atmosphere and blocked by clouds. Therefore, it has been difficult to determine the threshold intensity of EMPs necessary for the initiation of elves. We simultaneously carried out optical and sferics measurements of TLEs and lightning discharges using a high-altitude balloon launched at Sanriku Balloon Center on the night of August 25/26, 2006. We fixed four CCD cameras on the gondola, each of which had a horizontal FOV of ~100 degrees; together they covered 360 degrees in the horizontal direction and imaged the TLEs without atmospheric extinction or blocking by clouds. The frame rate was 30 fps. We installed three dipole antennas on the gondola, which received the vertical and horizontal electric fields radiated from lightning discharges. The frequency range of the VLF receiver was 1-25 kHz. We also made use of VLF sferics data obtained by ground-based antennas located at Tohoku University in Sendai. We picked out six elves from the image data set obtained by the CCD cameras, and examined the maximum amplitudes of the vertical electric field for 22 lightning discharge events, including the six elves events, observed both at the balloon and at Sendai. It is found that the maximum amplitudes of the vertical electric field in five of the elves events are much larger than those in the other lightning events. From these data we estimate the intensity of the radiated electric field necessary for elves. For one elves event, however, no intense vertical electric field is seen in the balloon data.

  8. Estimating discharge in rivers using remotely sensed hydraulic information

    USGS Publications Warehouse

    Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.

    2005-01-01

    A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor. The calibration functions are related to channel type. Surface velocity and width information, obtained from a single C-band image acquired by the Jet Propulsion Laboratory's (JPL's) AirSAR, was also used to estimate discharge for a reach of the Missouri River. Without using a calibration function, the estimate accuracy was +72% of the observed discharge, which is within the expected range of uncertainty for the method. However, using the observed velocity to calibrate the initial estimate improved the estimate accuracy to within +10% of the observed. Remotely sensed discharge estimates with accuracies reported in this paper could be useful for regional or continental scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.

  9. Estimation of the Standardized Risk Difference and Ratio in a Competing Risks Framework: Application to Injection Drug Use and Progression to AIDS After Initiation of Antiretroviral Therapy

    PubMed Central

    Cole, Stephen R.; Lau, Bryan; Eron, Joseph J.; Brookhart, M. Alan; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.

    2015-01-01

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. PMID:24966220

  10. Comparison of Species Richness Estimates Obtained Using Nearly Complete Fragments and Simulated Pyrosequencing-Generated Fragments in 16S rRNA Gene-Based Environmental Surveys

    PubMed Central

    Youssef, Noha; Sheik, Cody S.; Krumholz, Lee R.; Najar, Fares Z.; Roe, Bruce A.; Elshahed, Mostafa S.

    2009-01-01

    Pyrosequencing-based 16S rRNA gene surveys are increasingly utilized to study highly diverse bacterial communities, with special emphasis on utilizing the large number of sequences obtained (tens to hundreds of thousands) for species richness estimation. However, it is not yet clear how the number of operational taxonomic units (OTUs) and, hence, species richness estimates determined using shorter fragments at different taxonomic cutoffs correlates with the number of OTUs assigned using longer, nearly complete 16S rRNA gene fragments. We constructed a 16S rRNA clone library from an undisturbed tallgrass prairie soil (1,132 clones) and used it to compare species richness estimates obtained using eight pyrosequencing candidate fragments (99 to 361 bp in length) and the nearly full-length fragment. Fragments encompassing the V1 and V2 (V1+V2) region and the V6 region (generated using primer pairs 8F-338R and 967F-1046R) overestimated species richness; fragments encompassing the V3, V7, and V7+V8 hypervariable regions (generated using primer pairs 338F-530R, 1046F-1220R, and 1046F-1392R) underestimated species richness; and fragments encompassing the V4, V5+V6, and V6+V7 regions (generated using primer pairs 530F-805R, 805F-1046R, and 967F-1220R) provided estimates comparable to those obtained with the nearly full-length fragment. These patterns were observed regardless of the alignment method utilized or the parameter used to gauge comparative levels of species richness (number of OTUs observed, slope of scatter plots of pairwise distance values for short and nearly complete fragments, and nonparametric and parametric species richness estimates). Similar results were obtained when analyzing three other datasets derived from soil, adult Zebrafish gut, and basaltic formations in the East Pacific Rise. Regression analysis indicated that these observed discrepancies in species richness estimates within various regions could readily be explained by the proportions of

  11. Estimation of brittleness indices for pay zone determination in a shale-gas reservoir by using elastic properties obtained from micromechanics

    NASA Astrophysics Data System (ADS)

    Lizcano-Hernández, Edgar G.; Nicolás-López, Rubén; Valdiviezo-Mijangos, Oscar C.; Meléndez-Martínez, Jaime

    2018-04-01

    The brittleness indices (BI) of gas-shales are computed by using their effective mechanical properties obtained from micromechanical self-consistent modeling, with the purpose of assisting in the identification of the more-brittle regions in shale-gas reservoirs, i.e., the so-called ‘pay zone’. The obtained BI are plotted in λρ-μρ (lambda-rho versus mu-rho) and E-ν (Young’s modulus versus Poisson’s ratio) ternary diagrams along with the estimated elastic properties from log data of three productive shale-gas wells where the pay zone is already known. A quantitative comparison between the obtained BI and the well log data allows for the delimitation of regions where BI values could indicate the best reservoir target in regions with the highest shale-gas exploitation potential. Therefore, a range of values for elastic properties and brittleness indices that can be used as a data source to support the well placement procedure is obtained.
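
    Although the paper derives its BI from micromechanical moduli, a common elastic-constant formulation (after Rickman et al., 2008) illustrates the computation; the normalization bounds below are illustrative values, not the paper's.

```python
# One common brittleness-index formulation from elastic constants (after
# Rickman et al., 2008), shown as a hedged sketch; the paper's weighting
# and normalization may differ.
def brittleness_index(E, nu, E_min, E_max, nu_min, nu_max):
    """Average of min-max normalized Young's modulus and (inverted)
    Poisson's ratio; higher values indicate more brittle rock."""
    bi_E = (E - E_min) / (E_max - E_min)
    bi_nu = (nu_max - nu) / (nu_max - nu_min)
    return 50.0 * (bi_E + bi_nu)  # expressed in percent

# Illustrative bounds for a log suite (GPa for E, dimensionless for nu)
print(brittleness_index(E=45.0, nu=0.20, E_min=10, E_max=80,
                        nu_min=0.15, nu_max=0.40))  # 65.0
```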

  12. Harmonization of initial estimates of shale gas life cycle greenhouse gas emissions for electric power generation.

    PubMed

    Heath, Garvin A; O'Donoughue, Patrick; Arent, Douglas J; Bazilian, Morgan

    2014-08-05

    Recent technological advances in the recovery of unconventional natural gas, particularly shale gas, have served to dramatically increase domestic production and reserve estimates for the United States and internationally. This trend has led to lowered prices and increased scrutiny on production practices. Questions have been raised as to how greenhouse gas (GHG) emissions from the life cycle of shale gas production and use compare with those of conventionally produced natural gas or other fuel sources such as coal. Recent literature has come to different conclusions on this point, largely due to differing assumptions, comparison baselines, and system boundaries. Through a meta-analytical procedure we call harmonization, we develop robust, analytically consistent, and updated comparisons of estimates of life cycle GHG emissions for electricity produced from shale gas, conventionally produced natural gas, and coal. On a per-unit electrical output basis, harmonization reveals that median estimates of GHG emissions from shale gas-generated electricity are similar to those for conventional natural gas, with both approximately half that of the central tendency of coal. Sensitivity analysis on the harmonized estimates indicates that assumptions regarding liquids unloading and estimated ultimate recovery (EUR) of wells have the greatest influence on life cycle GHG emissions, whereby shale gas life cycle GHG emissions could approach the range of best-performing coal-fired generation under certain scenarios. Despite clarification of published estimates through harmonization, these initial assessments should be confirmed through methane emissions measurements at components and in the atmosphere and through better characterization of EUR and practices.

  13. Harmonization of initial estimates of shale gas life cycle greenhouse gas emissions for electric power generation

    PubMed Central

    Heath, Garvin A.; O’Donoughue, Patrick; Arent, Douglas J.; Bazilian, Morgan

    2014-01-01

    Recent technological advances in the recovery of unconventional natural gas, particularly shale gas, have served to dramatically increase domestic production and reserve estimates for the United States and internationally. This trend has led to lowered prices and increased scrutiny on production practices. Questions have been raised as to how greenhouse gas (GHG) emissions from the life cycle of shale gas production and use compare with those of conventionally produced natural gas or other fuel sources such as coal. Recent literature has come to different conclusions on this point, largely due to differing assumptions, comparison baselines, and system boundaries. Through a meta-analytical procedure we call harmonization, we develop robust, analytically consistent, and updated comparisons of estimates of life cycle GHG emissions for electricity produced from shale gas, conventionally produced natural gas, and coal. On a per-unit electrical output basis, harmonization reveals that median estimates of GHG emissions from shale gas-generated electricity are similar to those for conventional natural gas, with both approximately half that of the central tendency of coal. Sensitivity analysis on the harmonized estimates indicates that assumptions regarding liquids unloading and estimated ultimate recovery (EUR) of wells have the greatest influence on life cycle GHG emissions, whereby shale gas life cycle GHG emissions could approach the range of best-performing coal-fired generation under certain scenarios. Despite clarification of published estimates through harmonization, these initial assessments should be confirmed through methane emissions measurements at components and in the atmosphere and through better characterization of EUR and practices. PMID:25049378

  14. Automatic estimation of voice onset time for word-initial stops by applying random forest to onset detection.

    PubMed

    Lin, Chi-Yueh; Wang, Hsiao-Chuan

    2011-07-01

    The voice onset time (VOT) of a stop consonant is the interval between its burst onset and voicing onset. Among a variety of research topics on VOT, one that has been studied for years is how VOTs are efficiently measured. Manual annotation is a feasible way, but it becomes a time-consuming task when the corpus size is large. This paper proposes an automatic VOT estimation method based on an onset detection algorithm. At first, a forced alignment is applied to identify the locations of stop consonants. Then a random forest based onset detector searches each stop segment for its burst and voicing onsets to estimate a VOT. The proposed onset detection can detect the onsets in an efficient and accurate manner with only a small amount of training data. The evaluation data extracted from the TIMIT corpus were 2344 words with a word-initial stop. The experimental results showed that 83.4% of the estimations deviate less than 10 ms from their manually labeled values, and 96.5% of the estimations deviate by less than 20 ms. Some factors that influence the proposed estimation method, such as place of articulation, voicing of a stop consonant, and quality of succeeding vowel, were also investigated. © 2011 Acoustical Society of America
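
    As a rough illustration of the detection stage, the sketch below frames onset finding as per-frame classification with a random forest; the feature extraction and label conventions are our assumptions, not the paper's exact design.

```python
# Hedged sketch of the onset-detection idea: a random forest classifies each
# frame of a stop segment (located beforehand by forced alignment) as burst
# onset, voicing onset, or neither; the VOT is the time between the two.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def estimate_vot(frame_features, frame_times, model):
    """frame_features: (n_frames, n_feats); frame_times: (n_frames,) seconds.
    Assumes both onsets are detected within the segment."""
    labels = model.predict(frame_features)      # 0=none, 1=burst, 2=voicing
    burst_idx = np.argmax(labels == 1)          # first frame flagged as burst
    voicing_idx = np.argmax(labels == 2)        # first frame flagged as voicing
    return frame_times[voicing_idx] - frame_times[burst_idx]

model = RandomForestClassifier(n_estimators=200, random_state=0)
# model.fit(train_features, train_labels)  # a small labeled training set suffices
```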

  15. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

    MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high-resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption implied by using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain bias correction of the MODIS AOD using multiple factors including surface reflectivity at 2130 nm, sun-view geometrical factors, and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes the urban pixels constitute the largest deviations from the operational model.
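
    A minimal version of such a correction can be framed as supervised regression; the sketch below uses a generic multilayer perceptron with illustrative predictor columns, standing in for whatever network architecture the authors actually used.

```python
# Hedged sketch of a neural-network AOD bias correction: tabular predictors
# (2130 nm surface reflectivity, sun-view geometry, land class) regress onto
# collocated AERONET/DragonNET AOD. Column choices are illustrative.
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

corrector = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
# X columns: [modis_aod, refl_2130, solar_zenith, view_zenith, rel_azimuth, land_class]
# y: collocated ground-truth AOD from the DragonNET sites
# corrector.fit(X_train, y_train); aod_corrected = corrector.predict(X_new)
```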

  16. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours, relative to the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.

  17. Forensic individual age estimation with DNA: From initial approaches to methylation tests.

    PubMed

    Freire-Aradas, A; Phillips, C; Lareu, M V

    2017-07-01

    Individual age estimation is a key factor in forensic science analysis that can provide very useful information applicable to criminal, legal, and anthropological investigations. Forensic age inference was initially based on morphological inspection or radiography and only later began to adopt molecular approaches. However, a lack of accuracy or technical problems hampered the introduction of these DNA-based methodologies in casework analysis. A turning point occurred when the epigenetic signature of DNA methylation was observed to gradually change during an individual's lifespan. In the last four years, the number of publications reporting DNA methylation age-correlated changes has gradually risen and the forensic community now has a range of age methylation tests applicable to forensic casework. Most forensic age predictor models have been developed based on blood DNA samples, but additional tissues are now also being explored. This review assesses the most widely adopted genes harboring methylation sites, detection technologies, statistical age-predictive analyses, and potential causes of variation in age estimates. Despite the need for further work to improve predictive accuracy and to establish a broader range of tissues for which tests can analyze the most appropriate methylation sites, several forensic age predictors have now been reported that provide consistency in their prediction accuracies (predictive error of ±4 years); this makes them compelling tools with the potential to contribute key information to help guide criminal investigations. Copyright © 2017 Central Police University.

  18. Application of biological simulation models in estimating feed efficiency of finishing steers.

    PubMed

    Williams, C B

    2010-07-01

    Data on individual daily feed intake, BW at 28-d intervals, and carcass composition were obtained on 1,212 crossbred steers. Within-animal regressions of cumulative feed intake and BW on linear and quadratic days on feed were used to quantify initial and ending BW, average daily observed feed intake (OFI), and ADG over a 120-d finishing period. Feed intake was predicted (PFI) with 3 biological simulation models (BSM): a) Decision Evaluator for the Cattle Industry, b) Cornell Value Discovery System, and c) NRC update 2000, using observed growth and carcass data as input. Residual feed intake (RFI) was estimated using OFI (RFI(EL)) in a linear statistical model (LSM), and feed conversion ratio (FCR) was estimated as OFI/ADG (FCR(E)). Output from the BSM was used to estimate RFI by using PFI in place of OFI with the same LSM, and FCR was estimated as PFI/ADG. These estimates were evaluated against RFI(EL) and FCR(E). In a second analysis, estimates of RFI were obtained for the 3 BSM as the difference between OFI and PFI, and these estimates were evaluated against RFI(EL). The residual variation was extremely small when PFI was used in the LSM to estimate RFI, and this was mainly due to the fact that the same input variables (initial BW, days on feed, and ADG) were used in the BSM and LSM. Hence, the use of PFI obtained with BSM as a replacement for OFI in a LSM to characterize individual animals for RFI was not feasible. This conclusion was also supported by weak correlations (<0.4) between RFI(EL) and RFI obtained with PFI in the LSM, and very weak correlations (<0.13) between RFI(EL) and FCR obtained with PFI. In the second analysis, correlations (>0.89) for RFI(EL) with the other RFI estimates suggest little difference between RFI(EL) and any of these RFI estimates. In addition, results suggest that the RFI estimates calculated with PFI would be better able to identify animals with low OFI and small ADG as inefficient compared with RFI(EL). These results may be due
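
    The contrast between the two efficiency measures is easiest to see in code. A minimal sketch follows, with our own variable names; the study's linear statistical model used initial BW, days on feed, and ADG as inputs.

```python
# Minimal sketch of the two efficiency measures: residual feed intake (RFI)
# as the residual of observed feed intake (OFI) regressed on performance
# traits, and feed conversion ratio (FCR) as OFI/ADG. The predictor choice
# here is illustrative.
import numpy as np

def rfi_and_fcr(ofi, initial_bw, adg):
    X = np.column_stack([np.ones_like(ofi), initial_bw, adg])
    beta, *_ = np.linalg.lstsq(X, ofi, rcond=None)
    rfi = ofi - X @ beta          # positive RFI = less efficient than predicted
    fcr = ofi / adg               # feed conversion ratio
    return rfi, fcr
```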

  19. The initiation of boiling during pressure transients. [water boiling on metal surfaces

    NASA Technical Reports Server (NTRS)

    Weisman, J.; Bussell, G.; Jashnani, I. L.; Hsieh, T.

    1973-01-01

    The initiation of boiling of water on metal surfaces during pressure transients has been investigated. The data were obtained by a new technique in which light beam fluctuations and a pressure signal were simultaneously recorded on a dual beam oscilloscope. The results obtained agreed with those obtained using high speed photography. It was found that, for water temperatures between 90-150 C, the wall superheat required to initiate boiling during a rapid pressure transient was significantly higher than required when the pressure was slowly reduced. This result is explained by assuming that a finite time is necessary for vapor to fill the cavity at which the bubble originates. Experimental measurements of this time are in reasonably good agreement with calculations based on the proposed theory. The theory includes a new procedure for estimating the coefficient of vaporization.

  20. Study of solid rocket motors for a space shuttle booster. Volume 2, book 3: Cost estimating data

    NASA Technical Reports Server (NTRS)

    Vanderesch, A. H.

    1972-01-01

    Cost estimating data for the 156 inch diameter, parallel burn solid rocket propellant engine selected for the space shuttle booster are presented. The costing aspects of the baseline motor are considered first. From the baseline, sufficient data are obtained to provide cost estimates of alternate approaches.

  1. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.

  2. Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images

    NASA Astrophysics Data System (ADS)

    Kamble, V. M.; Bhurchandi, K.

    2018-03-01

    Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of +/-4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
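
    The initial estimate described here is closely related to the classic wavelet MAD estimator of Donoho and Johnstone; a minimal sketch, assuming the PyWavelets package and a single-level Haar transform, follows (the paper's curve-fitting refinement is not shown).

```python
# Robust initial noise estimate: median absolute deviation of the diagonal
# wavelet detail coefficients, scaled by 0.6745 (Donoho & Johnstone). The
# paper then refines this initial value by curve fitting.
import numpy as np
import pywt

def initial_sigma_estimate(image: np.ndarray) -> float:
    _, (_, _, cD) = pywt.dwt2(image.astype(float), "db1")  # diagonal details
    return float(np.median(np.abs(cD)) / 0.6745)
```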

  3. Estimation of the standardized risk difference and ratio in a competing risks framework: application to injection drug use and progression to AIDS after initiation of antiretroviral therapy.

    PubMed

    Cole, Stephen R; Lau, Bryan; Eron, Joseph J; Brookhart, M Alan; Kitahata, Mari M; Martin, Jeffrey N; Mathews, William C; Mugavero, Michael J

    2015-02-15

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Prioritizing Scientific Initiatives.

    ERIC Educational Resources Information Center

    Bahcall, John N.

    1991-01-01

    Discussed is the way in which a limited number of astronomy research initiatives were chosen and prioritized based on a consensus of members from the Astronomy and Astrophysics Survey Committee. A list of recommended equipment initiatives and estimated costs is provided. (KR)

  5. Estimating Evaporative Fraction From Readily Obtainable Variables in Mangrove Forests of the Everglades, U.S.A.

    NASA Technical Reports Server (NTRS)

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John; Barr, Jordan

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF), the ratio of latent heat (LE; the energy equivalent of evapotranspiration, ET) to total available energy, from easily obtainable remotely-sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micro-meteorological and flux tower observations, or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature [T(sub s)], normalized difference vegetation index (NDVI), and daily maximum air temperature [T(sub a)]. The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using T(sub s) and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the T(sub s) from Landsat relative to the T(sub s) from the MODIS flown on the Terra and Aqua platforms, due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult to constrain global ET remote-sensing models.
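
    The trapezoid interpolation itself is a one-line computation once the edges are fixed; the sketch below is a hedged rendering in which the dry- and wet-edge functions are illustrative placeholders rather than the paper's calibrated edges.

```python
# Hedged sketch of the trapezoid interpolation: EF at a pixel is set by where
# its surface temperature Ts falls between an NDVI-dependent dry edge (EF=0)
# and wet edge (EF=1). The edge functions are placeholders.
import numpy as np

def evaporative_fraction(ts, ndvi, dry_edge, wet_edge):
    """ts, ndvi: per-pixel arrays; dry_edge/wet_edge: callables returning the
    edge temperature as a function of NDVI."""
    t_dry = dry_edge(ndvi)
    t_wet = wet_edge(ndvi)
    ef = (t_dry - ts) / (t_dry - t_wet)
    return np.clip(ef, 0.0, 1.0)

# Example edges (illustrative linear fits, not the paper's values):
# ef = evaporative_fraction(ts, ndvi, lambda n: 320.0 - 15.0 * n,
#                                     lambda n: 295.0 - 2.0 * n)
```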

  6. Estimating evaporative fraction from readily obtainable variables in mangrove forests of the Everglades, U.S.A.

    USGS Publications Warehouse

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John W.; Barr, Jordan G.

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF) – the ratio of latent heat (LE; energy equivalent of evapotranspiration –ET–) to total available energy – from easily obtainable remotely-sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micrometeorological and flux tower observations or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts) normalized difference vegetation index (NDVI) and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult to constrain global ET remote-sensing models.

  7. Estimating phonation threshold pressure.

    PubMed

    Fisher, K V; Swank, P R

    1997-10-01

    Phonation threshold pressure (PTP) is the minimum subglottal pressure required to initiate vocal fold oscillation. Although potentially useful clinically, PTP is difficult to estimate noninvasively because of limitations to vocal motor control near the threshold of soft phonation. Previous investigators observed, for example, that trained subjects were unable to produce flat, consistent oral pressure peaks during /pae/ syllable strings when they attempted to phonate as softly as possible (Verdolini-Marston, Titze, & Druker, 1990). The present study aimed to determine if nasal airflow or vowel context affected phonation threshold pressure as estimated from oral pressure (Smitheran & Hixon, 1981) in 5 untrained female speakers with normal velopharyngeal and voice function. Nasal airflow during /p/ occlusion was observed for 3 of 5 participants when they attempted to phonate near threshold pressure. When the nose was occluded, nasal airflow was reduced or eliminated during /p/; however, individuals then evidenced compensatory changes in glottal adduction and/or respiratory effort that may be expected to alter PTP estimates. Results demonstrate the importance of monitoring nasal flow (or the flow zero point in undivided masks) when obtaining PTP measurements noninvasively. Results also highlight the need to pursue improved methods for noninvasive estimation of PTP.

  8. Saturn’s Ring Rain: Initial Estimates of Ring Mass Loss Rates

    NASA Astrophysics Data System (ADS)

    Moore, Luke; O'Donoghue, J.; Mueller-Wodarg, I.; Mendillo, M.

    2013-10-01

    We estimate rates of mass loss from Saturn’s rings based on ionospheric model reproductions of derived H3+ column densities. On 17 April 2011, over two hours of near-infrared spectral data were obtained of Saturn using the Near InfraRed Spectrograph (NIRSPEC) instrument on the 10-m Keck II telescope. The intensity of two bright H3+ rotational-vibrational emission lines was visible from nearly pole to pole, allowing low-latitude ionospheric emissions to be studied for the first time, and revealing significant latitudinal structure, with local extrema in one hemisphere being mirrored at magnetically conjugate latitudes in the opposite hemisphere. Even more striking, those minima and maxima mapped to latitudes of increased or decreased density in Saturn’s rings, implying a direct ring-atmosphere connection in which charged water group particles from the rings are guided by magnetic field lines as they “rain” down upon the atmosphere. Water products act to quench the local ionosphere, and therefore modify the observed H3+ densities. Using the Saturn Thermosphere Ionosphere Model (STIM), a 3-D model of Saturn’s upper atmosphere, we derive the rates of water influx required from the rings in order to reproduce the observed H3+ column densities. As a unique pair of conjugate latitudes maps to a specific radial distance in the ring plane, the derived water influxes can equivalently be described as rates of ring mass erosion as a function of radial distance in the ring plane, and therefore also allow for an improved estimate of the lifetime of Saturn’s rings.

  9. Small sample estimation of the reliability function for technical products

    NASA Astrophysics Data System (ADS)

    Lyamets, L. L.; Yakimenko, I. V.; Kanishchev, O. A.; Bliznyuk, O. A.

    2017-12-01

    It is demonstrated that, in the absence of the large statistical samples obtained from failure testing of complex technical products, statistical estimation of the reliability function of initial elements can be made by the method of moments. A formal description of the method of moments is given and its advantages in the analysis of small censored samples are discussed. A modified algorithm is proposed for the implementation of the method with the use of only the time instants at which the failures of initial elements occur.
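
    As a minimal illustration of the moment idea (not the paper's censored-sample algorithm), the sketch below fits an exponential reliability model from the first sample moment of observed failure times.

```python
# Minimal method-of-moments sketch for an exponential reliability model:
# the failure-rate estimate comes from the first sample moment of the
# observed failure times. The paper's modified algorithm for small censored
# samples is more elaborate than this.
import numpy as np

def reliability_function(failure_times, t):
    lam = 1.0 / np.mean(failure_times)   # first-moment estimate of the rate
    return np.exp(-lam * np.asarray(t))  # R(t) under the exponential model
```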

  10. Prognosis estimation under the light of metabolic tumor parameters on initial FDG-PET/CT in patients with primary extranodal lymphoma

    PubMed Central

    Okuyucu, Kursat; Ozaydın, Sukru; Alagoz, Engin; Ozgur, Gokhan; Oysul, Fahrettin Guven; Ozmen, Ozlem; Tuncel, Murat; Ozturk, Mustafa; Arslan, Nuri

    2016-01-01

    Abstract. Background. Non-Hodgkin’s lymphomas arising from tissues other than primary lymphatic organs are termed primary extranodal lymphoma. Most studies have evaluated metabolic tumor parameters in different organs and histopathologic variants of this disease, generally for treatment response. We aimed to evaluate the prognostic value of metabolic tumor parameters derived from initial FDG-PET/CT in patients with a mixed group of primary extranodal lymphomas. Patients and methods. There were 67 patients with primary extranodal lymphoma for whom FDG-PET/CT was requested for primary staging. Quantitative PET/CT parameters: maximum standardized uptake value (SUVmax), average standardized uptake value (SUVmean), metabolic tumor volume (MTV) and total lesion glycolysis (TLG) were used to estimate disease-free survival and overall survival. Results. SUVmean, MTV and TLG were found statistically significant after multivariate analysis. SUVmean remained significant after ROC curve analysis. Sensitivity and specificity were calculated as 88% and 64%, respectively, when the cut-off value of SUVmean was chosen as 5.15. When primary presentation sites and histopathological variants were examined with respect to recurrence, there was no difference amongst the variants; the primary site of extranodal lymphoma, however, was statistically significant (p = 0.014). Testis and central nervous system lymphomas had higher recurrence rates (62.5% and 73%, respectively). Conclusions. High SUVmean, MTV and TLG values obtained from primary staging FDG-PET/CT are potential risk factors for both disease-free survival and overall survival in primary extranodal lymphoma. SUVmean is the most significant amongst them for estimating recurrence/metastasis. PMID:27904443

  11. Did Better Colleges Bring Better Jobs? Estimating the Effects of College Quality on Initial Employment for College Graduates in China

    ERIC Educational Resources Information Center

    Yu, Li

    2017-01-01

    The unemployment problem of college students in China has drawn much attention from academics and society. Using the 2011 College Student Labor Market (CSLM) survey data from Tsinghua University, this paper estimated the effects of college quality on initial employment, including employment status and employment unit ownership for fresh college…

  12. A feasibility study on estimation of tissue mixture contributions in 3D arterial spin labeling sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing

    2017-03-01

    Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimation, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL image to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including registration-algorithm error and imaging errors in the acquisition of the ASL and structural images. Therefore, estimation of mixture percentages directly from ASL data is greatly needed. Under the assumptions that the ASL signal follows a Gaussian distribution and that each tissue type is independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to its initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
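
    A simplified stand-in for this pipeline can be written with an off-the-shelf Gaussian mixture fit by EM, initialized from a quick clustering; the paper's MAP-EM variant with a 3D fuzzy c-means initialization is more involved than this sketch.

```python
# Plain-EM stand-in for the paper's MAP-EM idea: fit a Gaussian mixture to
# voxel intensities, initialized from a quick clustering, and read per-voxel
# tissue fractions from the posterior responsibilities.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def tissue_fractions(voxels: np.ndarray, n_tissues: int = 3) -> np.ndarray:
    x = voxels.reshape(-1, 1)
    init = KMeans(n_clusters=n_tissues, n_init=10, random_state=0).fit(x)
    gmm = GaussianMixture(n_components=n_tissues,
                          means_init=init.cluster_centers_,
                          random_state=0).fit(x)
    return gmm.predict_proba(x)   # (n_voxels, n_tissues) mixture contributions
```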

  13. Decoding tactile afferent activity to obtain an estimate of instantaneous force and torque applied to the fingerpad

    PubMed Central

    Birznieks, Ingvars; Redmond, Stephen J.

    2015-01-01

    Dexterous manipulation is not possible without sensory information about object properties and manipulative forces. Fundamental neuroscience has been unable to demonstrate how information about multiple stimulus parameters may be continuously extracted, concurrently, from a population of tactile afferents. This is the first study to demonstrate this, using spike trains recorded from tactile afferents innervating the monkey fingerpad. A multiple-regression model, requiring no a priori knowledge of stimulus-onset times or stimulus combination, was developed to obtain continuous estimates of instantaneous force and torque. The stimuli consisted of a normal-force ramp (to a plateau of 1.8, 2.2, or 2.5 N), on top of which −3.5, −2.0, 0, +2.0, or +3.5 mNm torque was applied about the normal to the skin surface. The model inputs were sliding windows of binned spike counts recorded from each afferent. Models were trained and tested by 15-fold cross-validation to estimate instantaneous normal force and torque over the entire stimulation period. With the use of the spike trains from 58 slow-adapting type I and 25 fast-adapting type I afferents, the instantaneous normal force and torque could be estimated with small error. This study demonstrated that instantaneous force and torque parameters could be reliably extracted from a small number of tactile afferent responses in a real-time fashion with stimulus combinations that the model had not been exposed to during training. Analysis of the model weights may reveal how interactions between stimulus parameters could be disentangled for complex population responses and could be used to test neurophysiologically relevant hypotheses about encoding mechanisms. PMID:25948866
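
    The decoding pipeline reduces to a design matrix of lagged spike counts feeding a linear model; the sketch below, with illustrative window and bin parameters of our choosing, shows the construction.

```python
# Sketch of the decoding model: sliding windows of binned spike counts from
# each afferent feed a multiple linear regression that outputs instantaneous
# force and torque. Window length and bin size are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

def build_design_matrix(binned_counts, window_bins=10):
    """binned_counts: (n_bins, n_afferents); returns one row per window,
    each row being the flattened counts of the last `window_bins` bins."""
    n_bins, _ = binned_counts.shape
    rows = [binned_counts[i:i + window_bins].ravel()
            for i in range(n_bins - window_bins + 1)]
    return np.asarray(rows)

# X = build_design_matrix(counts)
# Y = targets[window_bins - 1:]            # (force, torque) aligned to window end
# decoder = LinearRegression().fit(X, Y)   # trained with cross-validation folds
# estimates = decoder.predict(X_new)
```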

  14. The application of parameter estimation to flight measurements to obtain lateral-directional stability derivatives of an augmented jet-flap STOL airplane

    NASA Technical Reports Server (NTRS)

    Stephenson, J. D.

    1983-01-01

    Flight experiments with an augmented jet flap STOL aircraft provided data from which the lateral directional stability and control derivatives were calculated by applying a linear regression parameter estimation procedure. The tests, which were conducted with the jet flaps set at a 65 deg deflection, covered a large range of angles of attack and engine power settings. The effect of changing the angle of the jet thrust vector was also investigated. Test results are compared with stability derivatives that had been predicted. The roll damping derived from the tests was significantly larger than had been predicted, whereas the other derivatives were generally in agreement with the predictions. Results obtained using a maximum likelihood estimation procedure are compared with those from the linear regression solutions.

  15. Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.

    PubMed

    Hui, Zhuo; Sankaranarayanan, Aswin C

    2017-10-01

    This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently, refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it does not require iterative optimization techniques nor the need for careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.

  16. Sorption and desorption of carbamazepine, naproxen and triclosan in a soil irrigated with raw wastewater: estimation of the sorption parameters by considering the initial mass of the compounds in the soil.

    PubMed

    Durán-Álvarez, Juan C; Prado-Pano, Blanca; Jiménez-Cisneros, Blanca

    2012-06-01

    In conventional sorption studies, the prior presence of contaminants in the soil is not considered when estimating the sorption parameters because this is only a transient state. However, this parameter should be considered in order to avoid the under/overestimation of the soil sorption capacity. In this study, the sorption of naproxen, carbamazepine and triclosan was determined in a wastewater irrigated soil, considering the initial mass of the compounds. Batch sorption-desorption tests were carried out at two soil depths (0-10 cm and 30-40 cm), using either 10 mM CaCl(2) solution or untreated wastewater as the liquid phase. Data were satisfactorily fitted to the initial mass model. For the two soils, release of naproxen and carbamazepine was observed when the CaCl(2) solution was used, but not in the soil/wastewater system. The compounds' release was higher in the topsoil than in the 30-40 cm soil. Sorption coefficients (K(d)) for CaCl(2) solution tests showed that in the topsoil, triclosan (64.9 L kg(-1)) is sorbed to a higher extent than carbamazepine and naproxen (5.81 and 2.39 L kg(-1), respectively). In the 30-40 cm soil, carbamazepine and naproxen K(d) values (11.4 and 4.41 L kg(-1), respectively) were higher than those obtained for the topsoil, while the triclosan K(d) value was significantly lower than in the topsoil (19.2 L kg(-1)). Differences in K(d) values were found when comparing the results obtained for the two liquid phases. Sorption of naproxen and carbamazepine was reversible for both soils, while sorption of triclosan was found to be irreversible. This study shows the sorption behavior of three pharmaceuticals in a wastewater irrigated soil, as well as the importance of considering the initial mass of target pollutants in the estimation of their sorption parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Estimation of Surface Heat Flux and Surface Temperature during Inverse Heat Conduction under Varying Spray Parameters and Sample Initial Temperature

    PubMed Central

    Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong

    2014-01-01

    An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate, using stainless steel samples of diameter 27 mm and thickness (mm) 8.5, 13, 17.5, and 22, respectively. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while sample initial temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) up to a critical value of pressure. Thickness of the sample affects the maximum achieved SHF negatively. A surface heat flux as high as 0.4024 MW/m2 was estimated for a thickness of 8.5 mm. Insulation effects of the vapor film become apparent at sample initial temperatures around 900°C, causing a reduction in the surface heat flux and cooling rate of the sample. A sensor location near the quenched surface is found to be a better choice for visualizing the effects of spray parameters on surface heat flux and surface temperature. The cooling rate showed a profound increase for an inlet pressure of 0.8 MPa. PMID:24977219

  18. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    PubMed

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO₂ remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO₂ emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO₂ industrial point source, located in Biganos, France. CO₂ density measurements were obtained by applying the mass balance method, while CO₂ emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventory data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, particularly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
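
    In its simplest form, the mass balance approach mentioned above reduces to integrating the wind-advected excess concentration over a crosswind screen downwind of the source. The sketch below is a minimal illustration assuming a uniform sampling grid; the variable names and the simplifications are not taken from the paper.

```python
import numpy as np

def mass_balance_emission(conc, background, u_perp, dy, dz):
    """Point-source strength (kg/s) from an aircraft screen downwind.

    conc       : 2-D CO2 mass density sampled on the screen, kg/m^3
    background : upwind background mass density, kg/m^3
    u_perp     : wind component perpendicular to the screen, m/s (same shape)
    dy, dz     : uniform crosswind and vertical grid spacing, m
    """
    excess_flux = (conc - background) * u_perp   # kg m^-2 s^-1 through screen
    return float(excess_flux.sum() * dy * dz)    # rectangle-rule integration
```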

  19. Spectral estimates of intercepted solar radiation by corn and soybean canopies

    NASA Technical Reports Server (NTRS)

    Gallo, K. P.; Brooks, C. C.; Daughtry, C. S. T.; Bauer, M. E.; Vanderbilt, V. C.

    1982-01-01

    Attention is given to the development of methods for combining spectral and meteorological data in crop yield models which are capable of providing accurate estimates of crop condition and yields throughout the growing season. The present investigation is concerned with initial tests of these concepts using spectral and agronomic data acquired in controlled experiments. The data were acquired at the Purdue University Agronomy Farm, 10 km northwest of West Lafayette, Indiana. Data were obtained throughout several growing seasons for corn and soybeans. Five methods or models for predicting yields were examined. On the basis of the obtained results, it is concluded that estimating intercepted solar radiation using spectral data is a viable approach for merging spectral and meteorological data in crop yield models.

  20. Shock initiation and detonation properties of bisfluorodinitroethyl formal (FEFO)

    NASA Astrophysics Data System (ADS)

    Gibson, L. L.; Sheffield, S. A.; Dattelbaum, Dana M.; Stahl, David B.

    2012-03-01

    FEFO is a liquid explosive with a density of 1.60 g/cm³ and an energy output similar to that of trinitrotoluene (TNT), making it one of the more energetic liquid explosives. Here we describe shock initiation experiments that were conducted on a two-stage gas gun, using magnetic gauges to measure the wave profiles during a shock-to-detonation transition. Unreacted Hugoniot data, time-to-detonation (overtake) measurements, and reactive wave profiles were obtained from each experiment. FEFO was found to initiate by the homogeneous initiation model, similar to all other liquid explosives we have studied (nitromethane, isopropyl nitrate, hydrogen peroxide). The new unreacted Hugoniot points agree well with other published data. A universal liquid Hugoniot estimation slightly underpredicts the measured Hugoniot data. FEFO is very insensitive, with about the same shock sensitivity as the triamino-trinitro-benzene (TATB)-based explosive PBX9502 and cast TNT.

  1. A comparison of low back kinetic estimates obtained through posture matching, rigid link modeling and an EMG-assisted model.

    PubMed

    Parkinson, R J; Bezaire, M; Callaghan, J P

    2011-07-01

    This study examined errors introduced by a posture matching approach (3DMatch) relative to dynamic three-dimensional rigid link and EMG-assisted models. Eighty-eight lifting trials of various combinations of heights (floor, 0.67, 1.2 m), asymmetry (left, right and center) and mass (7.6 and 9.7 kg) were videotaped while spine postures, ground reaction forces, segment orientations and muscle activations were documented and used to estimate joint moments and forces (L5/S1). Posture matching overpredicted peak and cumulative extension moment (p < 0.0001 for all variables). There was no difference between peak compression estimates obtained with posture matching or EMG-assisted approaches (p = 0.7987). Posture matching overpredicted cumulative (p < 0.0001) compressive loading due to a bias in standing; however, individualized bias correction eliminated the differences. Therefore, posture matching provides a method to analyze industrial lifting exposures that will predict kinetic values similar to those of more sophisticated models, provided the necessary corrections are applied. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  2. A GRASS GIS module to obtain an estimation of glacier behavior under climate change: A pilot study on Italian glacier

    NASA Astrophysics Data System (ADS)

    Strigaro, Daniele; Moretti, Massimiliano; Mattavelli, Matteo; Frigerio, Ivan; Amicis, Mattia De; Maggi, Valter

    2016-09-01

    The aim of this work is to integrate the Minimal Glacier Model in a Geographic Information System Python module in order to obtain spatial simulations of glacier retreat and to assess future scenarios with a spatial representation. Minimal Glacier Models are a simple yet effective way of estimating glacier response to climate fluctuations. This module can be useful to the scientific and glaciological community for evaluating glacier behavior driven by climate forcing. The module, called r.glacio.model, is developed in a GRASS GIS (GRASS Development Team, 2016) environment using the Python programming language combined with libraries such as GDAL, OGR, CSV, and math. The module is applied and validated on the Rutor glacier, a glacier in the south-western region of the Italian Alps. This glacier is large and exhibits fairly regular, active dynamics. The simulation is calibrated by reconstructing the 3-dimensional flow line dynamics and analyzing the difference between the simulated flow line length variations and the observed glacier fronts derived from orthophotos and DEMs. These simulations are driven by the past mass balance record. Afterwards, the future assessment is estimated by using climatic drivers provided by a set of General Circulation Models participating in the Climate Model Inter-comparison Project 5 effort. The approach devised in r.glacio.model can be applied to most alpine glaciers to obtain a first-order spatial representation of glacier behavior under climate change.
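
    Minimal Glacier Models relate glacier length changes to mass balance forcing. As a heavily simplified stand-in (explicitly not the equations implemented in r.glacio.model), the sketch below integrates a first-order linear response of glacier length to an annual mass balance record; the response time tau and the sensitivity dLdB are illustrative assumptions.

```python
import numpy as np

def glacier_length(mass_balance, L0, tau=30.0, dLdB=2000.0, dt=1.0):
    """First-order length response to annual specific mass balance B
    (m w.e./yr): dL/dt = (dLdB * B - (L - L0)) / tau, integrated with
    forward Euler. tau: response time (yr); dLdB: length change (m)
    sustained by a unit balance anomaly."""
    L = np.empty(len(mass_balance) + 1)
    L[0] = L0
    for i, B in enumerate(mass_balance):
        L[i+1] = L[i] + dt * (dLdB * B - (L[i] - L0)) / tau
    return L
```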

  3. Krill herd and piecewise-linear initialization algorithms for designing Takagi-Sugeno systems

    NASA Astrophysics Data System (ADS)

    Hodashinsky, I. A.; Filimonenko, I. V.; Sarin, K. S.

    2017-07-01

    A method for designing Takagi-Sugeno fuzzy systems is proposed which uses a piecewise-linear initialization algorithm for structure generation and a metaheuristic krill herd algorithm for parameter optimization. The obtained systems are tested against real data sets. The influence of some parameters of this algorithm on the approximation accuracy is analyzed. Estimates of the approximation accuracy and the number of fuzzy rules are compared with those of four known design methods.
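
    For readers unfamiliar with the model class being designed: a Takagi-Sugeno system blends rule-wise linear models through membership-weighted averaging. The sketch below evaluates a first-order TS system with Gaussian antecedents; it is a generic illustration, not the initialization scheme or the krill herd optimizer from the paper.

```python
import numpy as np

def ts_predict(x, centers, sigmas, coeffs):
    """First-order Takagi-Sugeno output for one input vector x.

    centers, sigmas : (n_rules, n_in) Gaussian antecedent parameters
    coeffs          : (n_rules, n_in + 1) consequents [bias, w1, ..., wn]
    """
    # rule firing strengths: product of Gaussian memberships over inputs
    w = np.prod(np.exp(-0.5 * ((x - centers) / sigmas) ** 2), axis=1)
    y_rule = coeffs[:, 0] + coeffs[:, 1:] @ x   # rule-wise linear outputs
    return float(np.dot(w, y_rule) / np.sum(w)) # membership-weighted average
```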

  4. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: in the first stage, the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for finding the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, conceptually simple, and has potential for field application.
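
    The core mechanism, a penalty weight that decays by a reduction factor across GA generations, can be sketched as follows. The decay schedule and the function names are illustrative assumptions, not the paper's exact formulation.

```python
def penalized_fitness(x, sse, violation, p0=1e6, reduction=0.95, generation=0):
    """GA objective: rainfall-runoff misfit plus a penalty on constraint
    violation. The penalty weight decays by a reduction factor each
    generation, so infiltration parameters locked in early are not
    destroyed while the search refines the unit hydrograph ordinates.

    sse(x)       -> sum of squared errors between observed/simulated runoff
    violation(x) -> aggregate constraint violation (0 when feasible)
    """
    penalty = p0 * reduction ** generation
    return sse(x) + penalty * violation(x) ** 2
```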

  5. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximately calculating the hemoglobin concentration from a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve the luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.
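
    A minimal per-pixel sketch of the Kubelka-Munk idea follows: F(R) = (1 - R)² / (2R) grows with absorber concentration, and a green-to-red channel ratio serves here as a luminance-reducing transform. The channel choice and normalization are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def kubelka_munk(R):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R); for an optically
    thick layer it scales with the absorber (hemoglobin) concentration."""
    R = np.clip(R, 1e-4, 1.0)
    return (1.0 - R) ** 2 / (2.0 * R)

def hemoglobin_map(rgb):
    """Per-pixel hemoglobin index from a true-color image (uint8 RGB).
    Green reflectance (strong hemoglobin absorption) is normalized by red
    (weak absorption) to reduce the dependence on illumination strength."""
    rgb = rgb.astype(float) / 255.0
    ratio = rgb[..., 1] / np.clip(rgb[..., 0], 1e-4, None)  # G / R
    return kubelka_munk(np.clip(ratio, 1e-4, 1.0))
```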

  6. Practical implementation of a particle filter data assimilation approach to estimate initial hydrologic conditions and initialize medium-range streamflow forecasts

    NASA Astrophysics Data System (ADS)

    Clark, E.; Wood, A.; Nijssen, B.; Newman, A. J.; Mendoza, P. A.

    2016-12-01

    The System for Hydrometeorological Applications, Research and Prediction (SHARP), developed at the National Center for Atmospheric Research (NCAR), University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation, is a fully automated ensemble prediction system for short-term to seasonal applications. It incorporates uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 plausible temperature and precipitation time series through the Sacramento/Snow-17 model. The forcing ensemble explicitly accounts for measurement and interpolation uncertainties in the development of gridded meteorological forcing time series. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. To select the IHCs that are most consistent with the observations, we employ a particle filter (PF) that weights IHC ensemble members based on observations of streamflow and SWE. These particles are then used to initialize ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS), generating a streamflow forecast ensemble. We test this method in two basins in the Pacific Northwest that are important for water resources management: 1) the Green River upstream of Howard Hanson Dam, and 2) the South Fork Flathead River upstream of Hungry Horse Dam. The first of these is characterized by mixed snow and rain, while the second is snow-dominated. The PF-based forecasts are compared to forecasts based on 1) a single IHC (corresponding to median streamflow) paired with the full GEFS ensemble, and 2) the full IHC ensemble, without filtering, paired with the full GEFS ensemble. In addition to assessing improvements in the spread of IHCs, we perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecasts at 1
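
    The weighting-and-resampling step of a particle filter of the kind described above can be sketched compactly. This is a generic illustration with an assumed Gaussian streamflow likelihood and systematic resampling; it is not the SHARP implementation, and the observation model names are assumptions.

```python
import numpy as np

def pf_update(states, sim_flow, obs_flow, obs_sigma, rng):
    """Weight IHC particles by a Gaussian likelihood of observed streamflow,
    then systematically resample.

    states   : (n_particles, ...) initial-condition states (soil moisture, SWE)
    sim_flow : (n_particles,) streamflow simulated from each particle
    """
    w = np.exp(-0.5 * ((sim_flow - obs_flow) / obs_sigma) ** 2)
    w /= w.sum()
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n      # systematic resampling
    idx = np.searchsorted(np.cumsum(w), positions)
    return states[idx]

# usage sketch: states = pf_update(states, q_sim, q_obs, 10.0,
#                                  np.random.default_rng(0))
```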

  7. Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras.

    PubMed

    Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore

    2014-01-01

    To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department, and whether the use of different cameras for the acquisition of raw data influences the results. The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1 and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). CONCLUSION: Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results.

  8. Feasibility study of determining axial stress in ferromagnetic bars using reciprocal amplitude of initial differential susceptibility obtained from static magnetization by permanent magnets

    NASA Astrophysics Data System (ADS)

    Deng, Dongge; Wu, Xinjun

    2018-03-01

    An electromagnetic method for determining axial stress in ferromagnetic bars is proposed. In this method, the tested bar is under static magnetization provided by permanent magnets. The tested bar does not have to be magnetized to technical saturation because the reciprocal amplitude of initial differential susceptibility (RAIDS) is adopted as the feature parameter. RAIDS is calculated from the radial magnetic flux density Br at a lift-off of 0.5 mm, the radial magnetic flux density Br at a lift-off of 1 mm, and the axial magnetic flux density Bz at a lift-off of 1 mm from the surface of the tested bar. Firstly, the theoretical derivation of RAIDS is carried out according to Gauss' law for magnetism, Ampere's law and the Rayleigh relation in the Rayleigh region. Secondly, an experimental system is set up for a 2 m long, 20 mm diameter steel bar. Thirdly, an experiment is carried out on the steel bar to analyze the relationship between the obtained RAIDS and the axial stress. Experimental results show that the obtained RAIDS decreases almost linearly with increasing axial stress inside the steel bar in the initial elastic region. The proposed method has the potential to determine tensile axial stress in slender cylindrical ferromagnetic bars.

  9. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information λj is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  10. The effect of tracking network configuration on GPS baseline estimates for the CASA Uno experiment

    NASA Technical Reports Server (NTRS)

    Wolf, S. Kornreich; Dixon, T. H.; Freymueller, J. T.

    1990-01-01

    The effect of the tracking network on long (greater than 100 km) GPS baseline estimates was estimated using various subsets of the global tracking network initiated by the first Central and South America (CASA Uno) experiment. It was found that the best results could be obtained with a global tracking network consisting of three U.S. stations, two sites in the southwestern Pacific, and two sites in Europe. In comparison with smaller subsets, this global network improved the baseline repeatability, the resolution of carrier phase cycle ambiguities, and the formal errors of the orbit estimates.

  11. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousand lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons and analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.

  12. Incident CTS in a large pooled cohort study: associations obtained by a Job Exposure Matrix versus associations obtained from observed exposures.

    PubMed

    Dale, Ann Marie; Ekenga, Christine C; Buckner-Petty, Skye; Merlino, Linda; Thiese, Matthew S; Bao, Stephen; Meyers, Alysha Rose; Harris-Adamson, Carisa; Kapellusch, Jay; Eisen, Ellen A; Gerr, Fred; Hegmann, Kurt T; Silverstein, Barbara; Garg, Arun; Rempel, David; Zeringue, Angelique; Evanoff, Bradley A

    2018-03-29

    There is growing use of a job exposure matrix (JEM) to provide exposure estimates in studies of work-related musculoskeletal disorders; few studies have examined the validity of such estimates or compared associations obtained with a JEM with those obtained using other exposure measures. This study estimated upper extremity exposures using a JEM derived from a publicly available data set (Occupational Network, O*NET), and compared exposure-disease associations for incident carpal tunnel syndrome (CTS) with those obtained using observed physical exposure measures in a large prospective study. 2393 workers from several industries were followed for up to 2.8 years (5.5 person-years). Standard Occupational Classification (SOC) codes were assigned to the job at enrolment. SOC codes linked to physical exposures for forceful hand exertion and repetitive activities were extracted from O*NET. We used multivariable Cox proportional hazards regression models to describe exposure-disease associations for incident CTS for individually observed physical exposures and JEM exposures from O*NET. Both exposure methods found associations between incident CTS and exposures of force and repetition, with evidence of dose-response. Observed associations were similar across the two methods, with somewhat wider CIs for HRs calculated using the JEM method. Exposures estimated using a JEM provided similar exposure-disease associations for CTS when compared with associations obtained using the 'gold standard' method of individual observation. While JEMs have a number of limitations, in some studies they can provide useful exposure estimates in the absence of individual-level observed exposures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Statistical distribution of time to crack initiation and initial crack size using service data

    NASA Technical Reports Server (NTRS)

    Heller, R. A.; Yang, J. N.

    1977-01-01

    Crack growth inspection data gathered during the service life of the C-130 Hercules airplane were used in conjunction with a crack propagation rule to estimate the distribution of crack initiation times and of initial crack sizes. A Bayesian statistical approach was used to calculate the fraction of undetected initiation times as a function of the inspection time and the reliability of the inspection procedure used.

  14. New Theory for Tsunami Propagation and Estimation of Tsunami Source Parameters

    NASA Astrophysics Data System (ADS)

    Mindlin, I. M.

    2007-12-01

    In numerical studies based on the shallow water equations for tsunami propagation, vertical accelerations and velocities within the sea water are neglected, so a tsunami is usually supposed to be produced by an initial free surface displacement in the initially still sea. In the present work, a new theory for tsunami propagation across the deep sea is discussed that accounts for the vertical accelerations and velocities. The theory is based on the solutions for the water surface displacement obtained in [Mindlin I.M. Integrodifferential equations in dynamics of a heavy layered liquid. Moscow: Nauka*Fizmatlit, 1996 (Russian)]. The solutions are valid when the horizontal dimensions of the initially disturbed area in the sea surface are much larger than the vertical displacement of the surface, which applies to earthquake tsunamis. It is shown that any tsunami is a combination of specific basic waves found analytically (not a superposition: the waves are nonlinear), and consequently, the tsunami source (i.e., the initially disturbed body of water) can be described by a countable set of parameters involved in the combination. Thus the problem of theoretical reconstruction of a tsunami source is reduced to the problem of estimating these parameters. The tsunami source can be modelled approximately with a finite number of the parameters. A two-parameter model is discussed thoroughly. A method is developed for estimating the model's parameters using the arrival times of the tsunami at certain locations, the maximum wave heights obtained from tide gauge records at the locations, and the distances between the earthquake's epicentre and each of the locations. In order to evaluate the practical use of the theory, four tsunamis of different magnitudes that occurred in Japan are considered. For each of the tsunamis, the tsunami energy (E below), the duration of the tsunami source formation T, the maximum water elevation in the wave originating area H, mean radius of

  15. Initial data sets for the Schwarzschild spacetime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomez-Lobo, Alfonso Garcia-Parrado; Kroon, Juan A. Valiente; School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS

    2007-01-15

    A characterization of initial data sets for the Schwarzschild spacetime is provided. This characterization is obtained by performing a 3+1 decomposition of a certain invariant characterization of the Schwarzschild spacetime given in terms of concomitants of the Weyl tensor. This procedure renders a set of necessary conditions--which can be written in terms of the electric and magnetic parts of the Weyl tensor and their concomitants--for an initial data set to be a Schwarzschild initial data set. Our approach also provides a formula for a static Killing initial data set candidate--a KID candidate. Sufficient conditions for an initial data set to be a Schwarzschild initial data set are obtained by supplementing the necessary conditions with the requirement that the initial data set possesses a stationary Killing initial data set of the form given by our KID candidate. Thus, we obtain an algorithmic procedure for checking whether a given initial data set is Schwarzschildean or not.

  16. Comparison of Sun-Induced Chlorophyll Fluorescence Estimates Obtained from Four Portable Field Spectroradiometers

    NASA Technical Reports Server (NTRS)

    Julitta, Tommaso; Corp, Lawrence A.; Rossini, Micol; Burkart, Andreas; Cogliati, Sergio; Davies, Neville; Hom, Milton; Mac Arthur, Alasdair; Middleton, Elizabeth M.; Rascher, Uwe

    2016-01-01

    Remote Sensing of Sun-Induced Chlorophyll Fluorescence (SIF) is a research field of growing interest because it offers the potential to quantify actual photosynthesis and to monitor plant status. New satellite missions from the European Space Agency, such as the Earth Explorer 8 FLuorescence EXplorer (FLEX) mission (scheduled to launch in 2022 and aiming at SIF mapping), and from the National Aeronautics and Space Administration (NASA), such as the Orbiting Carbon Observatory-2 (OCO-2) sampling mission launched in July 2014, provide the capability to estimate SIF from space. The detection of the SIF signal from airborne and satellite platforms is difficult, and reliable ground level data are needed for calibration/validation. Several commercially available spectroradiometers are currently used to retrieve SIF in the field. This study presents a comparison exercise for evaluating the capability of four spectroradiometers to retrieve SIF. The results show that an accurate far-red SIF estimation can be achieved using spectroradiometers with an ultrafine resolution (less than 1 nm), while red SIF estimation requires even higher spectral resolution (less than 0.5 nm). Moreover, it is shown that the Signal to Noise Ratio (SNR) plays a significant role in the precision of the far-red SIF measurements.

  17. Applying constraints on model-based methods: Estimation of rate constants in a second order consecutive reaction

    NASA Astrophysics Data System (ADS)

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA was not absorbing in the visible region of interest, and thus the closure rank deficiency problem did not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system, based on the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters by applying appropriate constraints and adjustable initial concentrations of reagents.
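
    A minimal version of the original-data-based fitting idea can be sketched as follows: integrate the consecutive kinetic scheme A + B -> C -> D (second order first step) and fit k1, k2 by least squares against a measured absorbance trace. The single-wavelength simplification, the species labels, and the molar absorptivity vector eps are illustrative assumptions, not the paper's full multiwavelength treatment.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def kinetics(t, y, k1, k2):
    """A + B -> C (rate k1*A*B, second order), then C -> D (rate k2*C)."""
    A, B, C, D = y
    r1 = k1 * A * B
    return [-r1, -r1, r1 - k2 * C, k2 * C]

def residuals(params, t, absorbance, y0, eps):
    k1, k2 = params
    sol = solve_ivp(kinetics, (t[0], t[-1]), y0, t_eval=t, args=(k1, k2))
    # species A (o-ABA) assumed non-absorbing in the visible region;
    # eps = (eps_B, eps_C, eps_D) are molar absorptivities
    model = eps[0]*sol.y[1] + eps[1]*sol.y[2] + eps[2]*sol.y[3]
    return model - absorbance

# fit = least_squares(residuals, x0=[1.0, 0.1],
#                     args=(t, absorbance, y0, eps), bounds=(0, np.inf))
```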

  18. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  20. [Evaluation of the influence of humidity and temperature on the drug stability by initial average rate experiment].

    PubMed

    He, Ning; Sun, Hechun; Dai, Miaomiao

    2014-05-01

    To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, drug degradation extent, the number of humidity and temperature levels, the humidity and temperature ranges, and the average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from this proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature ranges were changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, our proposed method saves time, labor, and materials.

  1. Practical implementation of a particle filter data assimilation approach to estimate initial hydrologic conditions and initialize medium-range streamflow forecasts

    NASA Astrophysics Data System (ADS)

    Clark, Elizabeth; Wood, Andy; Nijssen, Bart; Mendoza, Pablo; Newman, Andy; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    In an automated forecast system, hydrologic data assimilation (DA) performs the valuable function of correcting raw simulated watershed model states to better represent external observations, including measurements of streamflow, snow, soil moisture, and the like. Yet the incorporation of automated DA into operational forecasting systems has been a long-standing challenge due to the complexities of the hydrologic system, which include numerous lags between state and output variations. To help demonstrate that such methods can succeed in operational automated implementations, we present results from the real-time application of an ensemble particle filter (PF) for short-range (7 day lead) ensemble flow forecasts in western US river basins. We use the System for Hydromet Applications, Research and Prediction (SHARP), developed by the National Center for Atmospheric Research (NCAR) in collaboration with the University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. SHARP is a fully automated platform for short-term to seasonal hydrologic forecasting applications, incorporating uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions through ensemble methods. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 temperature and precipitation time series through conceptual and physically-oriented models. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. The PF selects and/or weights and resamples the IHCs that are most consistent with external streamflow observations, and uses the particles to initialize a streamflow forecast ensemble driven by ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS). We apply this method in real-time for several basins in the western US that are important for water resources management, and perform a hindcast

  2. 19 CFR 201.9 - Methods employed in obtaining information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 3 2014-04-01 2014-04-01 false Methods employed in obtaining information. 201.9 Section 201.9 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Initiation and Conduct of Investigations § 201.9 Methods employed in obtaining information. In...

  3. 19 CFR 201.9 - Methods employed in obtaining information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 3 2013-04-01 2013-04-01 false Methods employed in obtaining information. 201.9 Section 201.9 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Initiation and Conduct of Investigations § 201.9 Methods employed in obtaining information. In...

  4. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    In this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors, and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
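
    The retention model being fitted relates ln k to the thermodynamic parameters. A common parameterization, assumed here purely for illustration (the paper's exact model and the phase ratio value are assumptions), is sketched below together with the sum-of-squares criterion.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def ln_k(T, dH0, dS0, dCp, T0=373.15, ln_beta=5.0):
    """Retention factor from (dH, dS, dCp) referenced to T0 (K), assuming
    ln k = -dH(T)/(R*T) + dS(T)/R - ln(beta), with a dCp-driven temperature
    dependence dH(T) = dH0 + dCp*(T - T0), dS(T) = dS0 + dCp*ln(T/T0);
    beta is the column phase ratio."""
    dH = dH0 + dCp * (T - T0)
    dS = dS0 + dCp * np.log(T / T0)
    return -dH / (R * T) + dS / R - ln_beta

def sse(params, T, ln_k_obs):
    """Sum-of-squared-errors criterion over measured retention factors."""
    return np.sum((ln_k(T, *params) - ln_k_obs) ** 2)
```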

  5. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Simultaneous head tissue conductivity and EEG source location estimation

    PubMed Central

    Acar, Can E.; Makeig, Scott

    2015-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm2-scale accurate 3-D functional cortical imaging modality. PMID:26302675

  7. 13 CFR 142.36 - Can I obtain judicial review?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Can I obtain judicial review? 142.36 Section 142.36 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION PROGRAM FRAUD CIVIL REMEDIES ACT REGULATIONS Decisions and Appeals § 142.36 Can I obtain judicial review? If the initial...

  8. A comparison of tools for remotely estimating leaf area index in loblolly pine plantations

    Treesearch

    Janet C. Dewey; Scott D. Roberts; Isobel Hartley

    2006-01-01

    Light interception is critical to forest growth and is largely determined by foliage area per unit ground, the measure of which is leaf area index (LAI). Summer and winter LAI estimates were obtained in a 17-year-old loblolly pine (Pinus taeda L.) spacing trial in Mississippi, using three replications with initial spacings of 1.5, 2.4, and 3.0 m....

  9. Estimation of Crack Initiation and Propagation Thresholds of Confined Brittle Coal Specimens Based on Energy Dissipation Theory

    NASA Astrophysics Data System (ADS)

    Ning, Jianguo; Wang, Jun; Jiang, Jinquan; Hu, Shanchao; Jiang, Lishuai; Liu, Xuesheng

    2018-01-01

    A new energy-dissipation method to identify crack initiation and propagation thresholds is introduced. Conventional and cyclic loading-unloading triaxial compression tests and acoustic emission experiments were performed on coal specimens from a 980-m deep mine with different confining pressures of 10, 15, 20, 25, 30, and 35 MPa. Stress-strain relations, acoustic emission patterns, and energy evolution characteristics obtained during the triaxial compression tests were analyzed. The majority of the input energy stored in the coal specimens took the form of elastic strain energy. After the elastic-deformation stage, part of the input energy was consumed by stable crack propagation. However, with an increase in stress levels, unstable crack propagation commenced, and the energy dissipation and coal damage were accelerated. The variation in the pre-peak energy-dissipation ratio was consistent with the coal damage. This new method demonstrates that the crack initiation threshold is proportional to the peak stress (σp), ranging from 0.4351σp to 0.4753σp, while the crack damage threshold ranged from 0.8087σp to 0.8677σp.

  10. Hybrid active contour model for inhomogeneous image segmentation with background estimation

    NASA Astrophysics Data System (ADS)

    Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun

    2018-03-01

    This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
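
    The background-estimation step described above can be illustrated compactly: a wide linear filter supplies a slowly varying background estimate, and subtracting it yields the difference image used by the global fitting term. The choice of a Gaussian filter and the kernel width are illustrative assumptions, since the abstract does not specify the filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_image(image, sigma=25.0):
    """Estimate a slowly varying background with a wide linear (Gaussian)
    filter and subtract it from the original image, so that a global
    region-fitting term sees roughly homogeneous foreground/background
    intensities despite inhomogeneous illumination."""
    background = gaussian_filter(image.astype(float), sigma)
    return image - background, background
```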

  11. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
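
    For reference, the conventional root-MUSIC estimator to which the stochastic collocation analysis is applied can be sketched for a uniform linear array as below. This is a generic textbook version with assumed half-wavelength spacing, not the authors' code.

```python
import numpy as np

def root_music(X, n_sources, d_over_lambda=0.5):
    """Root-MUSIC DOA estimation for a uniform linear array.

    X : (n_sensors, n_snapshots) complex baseband snapshot matrix.
    Returns DOAs in degrees."""
    M, N = X.shape
    Rxx = X @ X.conj().T / N                      # sample covariance
    _, eigvec = np.linalg.eigh(Rxx)               # eigenvalues ascending
    En = eigvec[:, :M - n_sources]                # noise subspace
    C = En @ En.conj().T
    # polynomial coefficients: c_k = sum of the k-th diagonal of C
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]            # one root of each pair
    # the n_sources roots closest to the unit circle carry the signal DOAs
    closest = roots[np.argsort(1.0 - np.abs(roots))[:n_sources]]
    sin_theta = np.angle(closest) / (2 * np.pi * d_over_lambda)
    return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))
```

    Random element displacement perturbs the steering model assumed by the sample covariance, which is what the stochastic collocation framework of the paper propagates into the distribution of the returned angles.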

  12. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, such as aerodynamic coefficients, can easily be incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer improves significantly.

  13. Incompressible limit of the degenerate quantum compressible Navier-Stokes equations with general initial data

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Sam; Li, Fucai

    2018-03-01

    In this paper we study the incompressible limit of the degenerate quantum compressible Navier-Stokes equations in a periodic domain T3 and the whole space R3 with general initial data. In the periodic case, by applying the refined relative entropy method and carrying out a detailed analysis of the oscillations of the velocity, we prove rigorously that the gradient part of the weak solutions (velocity) of the degenerate quantum compressible Navier-Stokes equations converges to the strong solution of the incompressible Navier-Stokes equations. Our results considerably improve those obtained by Yang, Ju and Yang [25], where only the case of well-prepared initial data is considered. For the whole space case, thanks to the Strichartz estimates for linear wave equations, we can obtain the convergence of the weak solutions of the degenerate quantum compressible Navier-Stokes equations to the strong solution of the incompressible Navier-Stokes/Euler equations with a linear damping term. Moreover, the convergence rates are also given.

  14. Estimated reductions in provider-initiated preterm births and hospital length of stay under a universal acetylsalicylic acid prophylaxis strategy: a retrospective cohort study

    PubMed Central

    Ray, Joel G.; Bartsch, Emily; Park, Alison L.; Shah, Prakesh S.; Dzakpasu, Susie

    2017-01-01

    Background: Hypertensive disorders, especially preeclampsia, are the leading reason for provider-initiated preterm birth. We estimated how universal acetylsalicylic acid (ASA) prophylaxis might reduce rates of provider-initiated preterm birth associated with preeclampsia and intrauterine growth restriction, which are related conditions. Methods: We performed a cohort study of singleton hospital births in 2013 in Canada, excluding Quebec. We estimated the proportion of term births and provider-initiated preterm births affected by preeclampsia and/or intrauterine growth restriction, and the corresponding mean maternal and newborn hospital length of stay. We projected the potential number of cases reduced and corresponding hospital length of stay if ASA prophylaxis lowered cases of preeclampsia and intrauterine growth restriction by a relative risk reduction (RRR) of 10% (lowest) or 53% (highest), as suggested by randomized clinical trials. Results: Of the 269 303 singleton live births and stillbirths in our cohort, 4495 (1.7%) were provider-initiated preterm births. Of the 4495, 1512 (33.6%) had a diagnosis of preeclampsia and/or intrauterine growth restriction. The mean maternal length of stay was 2.0 (95% confidence interval [CI] 2.0-2.0) days among term births unaffected by either condition and 7.3 (95% CI 6.1-8.6) days among provider-initiated preterm births with both conditions. The corresponding values for mean newborn length of stay were 1.9 (95% CI 1.8-1.9) days and 21.8 (95% CI 17.4-26.2) days. If ASA conferred a 53% RRR against preeclampsia and/or intrauterine growth restriction, 3365 maternal and 11 591 newborn days in hospital would be averted. If ASA conferred a 10% RRR, 635 maternal and 2187 newborn days in hospital would be averted. Interpretation: A universal ASA prophylaxis strategy could substantially reduce the burden of long maternal and newborn hospital stays associated with provider-initiated preterm birth. However, until there is compelling

  15. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach

    PubMed Central

    Xu, Nan; Spreng, R. Nathan; Doerschuk, Peter C.

    2017-01-01

    Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the “common driver” problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between regions of interest (ROIs) based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain. PMID:28559793
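
    A bare-bones sketch of the prediction-correlation idea follows: fit a causal FIR filter mapping one ROI's BOLD series to another's by least squares, then correlate the target series with its causal prediction. The filter order and the least-squares fit are illustrative assumptions; the paper's causal system identification may differ.

```python
import numpy as np

def prediction_correlation(x, y, order=5):
    """Correlation between y and a prediction of y produced by a causal
    FIR system driven by x: yhat[t] = sum_k h[k] * x[t-k], with the taps
    h fit by least squares."""
    n = len(y)
    X = np.column_stack([np.r_[np.zeros(k), x[:n - k]] for k in range(order)])
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    yhat = X @ h
    return float(np.corrcoef(y, yhat)[0, 1])
```

    Comparing prediction_correlation(x, y) with prediction_correlation(y, x) is what gives the measure its directionality, since only the causal (past-to-future) mapping is fit.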

  16. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach.

    PubMed

    Xu, Nan; Spreng, R Nathan; Doerschuk, Peter C

    2017-01-01

    Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between regions of interest (ROIs) based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain.

  17. Nonparametric Discrete Survival Function Estimation with Uncertain Endpoints Using an Internal Validation Subsample

    PubMed Central

    Zee, Jarcy; Xie, Sharon X.

    2015-01-01

    Summary: When a true survival endpoint cannot be assessed for some subjects, an alternative endpoint that measures the true endpoint with error may be collected, which often occurs when obtaining the true endpoint is too invasive or costly. We develop an estimated likelihood function for the situation where we have both uncertain endpoints for all participants and true endpoints for only a subset of participants. We propose a nonparametric maximum estimated likelihood estimator of the discrete survival function of time to the true endpoint. We show that the proposed estimator is consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias compared to the naïve Kaplan-Meier survival function estimator, which uses only uncertain endpoints, and is more efficient under moderate missingness than the complete-case Kaplan-Meier survival function estimator, which uses only available true endpoints. Finally, we apply the proposed method to a dataset for estimating the risk of developing Alzheimer's disease from the Alzheimer's Disease Neuroimaging Initiative. PMID:25916510

  18. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm

    NASA Astrophysics Data System (ADS)

    Mehdinejadiani, Behrouz

    2017-08-01

    This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation using the Bees Algorithm. Numerical studies as well as experimental studies were performed to certify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance across all cases, while the Genetic Algorithm and the LSQNONLIN yielded different performances for various cases. The performance of LSQNONLIN strongly depends on the initial guess values, so that, compared to the Genetic Algorithm, it can estimate the sFADE parameters more accurately when suitable initial guess values are chosen. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.

  19. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm.

    PubMed

    Mehdinejadiani, Behrouz

    2017-08-01

    This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical as well as experimental studies were performed to verify the integrity of the Bees Algorithm. The experimental studies were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters considerably more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm. The Bees Algorithm performed similarly across all cases, while the Genetic Algorithm and LSQNONLIN performed differently from case to case. The performance of LSQNONLIN depends strongly on the initial guess values: given suitable initial guesses, it can estimate the sFADE parameters more accurately than the Genetic Algorithm. In summary, the Bees Algorithm was found to be a very simple, robust, and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Estimating the Benefits of the Air Force Purchasing and Supply Chain Management Initiative

    DTIC Science & Technology

    2008-01-01

    sector, known as strategic sourcing. The Customer Relationship Management (CRM) initiative provides a single customer point of contact for all... Customer Relationship Management initiative. commodity council: a term used to describe a cross-functional sourcing group charged with formulating a... initiative has four major components, all based on commercial best practices (Gabreski, 2004): commodity councils, customer relationship management

  1. Runoff simulation sensitivity to remotely sensed initial soil water content

    NASA Astrophysics Data System (ADS)

    Goodrich, D. C.; Schmugge, T. J.; Jackson, T. J.; Unkrich, C. L.; Keefer, T. O.; Parry, R.; Bach, L. B.; Amer, S. A.

    1994-05-01

    A variety of aircraft remotely sensed and conventional ground-based measurements of volumetric soil water content (SW) were made over two subwatersheds (4.4 and 631 ha) of the U.S. Department of Agriculture's Agricultural Research Service Walnut Gulch experimental watershed during the 1990 monsoon season. Spatially distributed soil water contents estimated remotely from the NASA push broom microwave radiometer (PBMR), an Institute of Radioengineering and Electronics (IRE) multifrequency radiometer, and three ground-based point methods were used to define prestorm initial SW for a distributed rainfall-runoff model (KINEROS; Woolhiser et al., 1990) at a small catchment scale (4.4 ha). At a medium catchment scale (631 ha or 6.31 km2) spatially distributed PBMR SW data were aggregated via stream order reduction. The impacts of the various spatial averages of SW on runoff simulations are discussed and are compared to runoff simulations using SW estimates derived from a simple daily water balance model. It was found that at the small catchment scale the SW data obtained from any of the measurement methods could be used to obtain reasonable runoff predictions. At the medium catchment scale, a basin-wide remotely sensed average of initial water content was sufficient for runoff simulations. This has important implications for the possible use of satellite-based microwave soil moisture data to define prestorm SW because the low spatial resolutions of such sensors may not seriously impact runoff simulations under the conditions examined. However, at both the small and medium basin scale, adequate resources must be devoted to proper definition of the input rainfall to achieve reasonable runoff simulations.

  2. Initial mass functions from ultraviolet stellar photometry: A comparison of Lucke and Hodge OB associations near 30 Doradus with the nearby field

    NASA Technical Reports Server (NTRS)

    Hill, Jesse K.; Isensee, Joan E.; Cornett, Robert H.; Bohlin, Ralph C.; O'Connell, Robert W.; Roberts, Morton S.; Smith, Andrew M.; Stecher, Theodore P.

    1994-01-01

    UV stellar photometry is presented for 1563 stars within a 40 arcmin circular field in the Large Magellanic Cloud (LMC), excluding the 10 arcmin x 10 arcmin field centered on R136 investigated earlier by Hill et al. (1993). Magnitudes are computed from images obtained by the Ultraviolet Imaging Telescope (UIT) in bands centered at 1615 A and 2558 A. Stellar masses and extinctions are estimated for the stars in associations using the evolutionary models of Schaerer et al. (1993), assuming the age is 4 Myr and that the local LMC extinction follows the Fitzpatrick (1985) 30 Dor extinction curve. The estimated slope of the initial mass function (IMF) for massive stars (greater than 15 solar masses) within the Lucke and Hodge (LH) associations is Gamma = -1.08 +/- 0.2. Initial masses and extinctions for stars not within LH associations are estimated assuming that the stellar age is either 4 Myr or half the stellar lifetime, whichever is larger. The estimated slope of the IMF for massive stars not within LH associations is Gamma = -1.74 +/- 0.3 (assuming continuous star formation), compared with Gamma = -1.35 and Gamma = -1.7 +/- 0.5 obtained for the Galaxy by Salpeter (1955) and Scalo (1986), respectively, and Gamma = -1.6 obtained for massive stars in the Galaxy by Garmany, Conti, & Chiosi (1982). The shallower slope of the association IMF suggests that not only is the star formation rate higher in associations, but also that the local conditions there favor the formation of higher-mass stars. We make no corrections for binaries or incompleteness.

  3. Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.

    2014-12-01

    Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.

  4. Multiple populations within globular clusters in Early-type galaxies Exploring their effect on stellar initial mass function estimates

    NASA Astrophysics Data System (ADS)

    Chantereau, W.; Usher, C.; Bastian, N.

    2018-05-01

    It is now well-established that most (if not all) ancient globular clusters host multiple populations, which are characterised by distinct chemical features such as helium abundance variations along with N-C and Na-O anti-correlations at fixed [Fe/H]. These very distinct chemical features are similar to what is found in the centres of massive early-type galaxies and may influence measurements of the global properties of the galaxies. Additionally, recent results have suggested that the M/L variations found in the centres of massive early-type galaxies might be due to a bottom-heavy stellar initial mass function. We present an analysis of the effects of globular cluster-like multiple populations on the integrated properties of early-type galaxies. In particular, we focus on spectral features in the integrated optical spectrum and the global mass-to-light ratio that have been used to infer variations in the stellar initial mass function. To achieve this we develop appropriate stellar population synthesis models and take into account, for the first time, an initial-final mass relation that incorporates a varying He abundance. We conclude that while multiple populations may be present in massive early-type galaxies, they are likely not responsible for the observed variations in the mass-to-light ratio and IMF-sensitive line strengths. Finally, we estimate the fraction of stars with multiple-population chemistry that come from disrupted globular clusters within massive ellipticals and find that they may explain some of the observed chemical patterns in the centres of these galaxies.

  5. New formulations for tsunami runup estimation

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Aydin, B.; Ceylan, N.

    2017-12-01

    We evaluate shoreline motion and maximum runup in two ways. First, we use the linear shallow water-wave equations over a sloping beach and solve them as an initial-boundary value problem, similar to the nonlinear solution of Aydın and Kanoglu (2017, Pure Appl. Geophys., https://doi.org/10.1007/s00024-017-1508-z). The methodology we present here is simple; it involves eigenfunction expansion and, hence, avoids integral transform techniques. We then use several different types of initial wave profiles with and without initial velocity, estimate shoreline properties, and confirm the classical runup invariance between linear and nonlinear theories. Second, we use the nonlinear shallow water-wave solution of Kanoglu (2004, J. Fluid Mech. 513, 363-372) to estimate maximum runup. Kanoglu (2004) presented a simple integral solution for the nonlinear shallow water-wave equations using the classical Carrier and Greenspan transformation, and further reduced shoreline position and velocity to a simpler integral formulation. In addition, Tinti and Tonini (2005, J. Fluid Mech. 535, 33-64) defined an initial condition in a form very convenient for near-shore events. We use a Tinti and Tonini (2005) type initial condition in Kanoglu's (2004) shoreline integral solution, which leads to further simplified, i.e., algebraic, estimates for shoreline position and velocity. We then use this algebraic runup estimate to investigate the effect of earthquake source parameters on maximum runup and present results similar to Sepulveda and Liu (2016, Coast. Eng. 112, 57-68).

  6. Estimating the costs of human space exploration

    NASA Technical Reports Server (NTRS)

    Mandell, Humboldt C., Jr.

    1994-01-01

    The plan for NASA's new exploration initiative has the following strategic themes: (1) incremental, logical evolutionary development; (2) economic viability; and (3) excellence in management. The cost estimation process is involved in all of these themes, and each depends on the engineering cost estimator for success. The purpose here is to articulate the issues associated with beginning this major new government initiative, to show how NASA intends to resolve them, and finally to demonstrate the vital importance of a leadership role for the cost estimation community.

  7. The estimation of probable maximum precipitation: the case of Catalonia.

    PubMed

    Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel

    2008-12-01

    A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped at a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values for all 145 stations. The spatial analysis of these values was achieved by applying the Cressman method. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, were used as the initial field for the analysis. The 1-day PMP at 1 km(2) spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelengths longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation pattern over Catalonia.
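    As background for the frequency-factor approach, here is a minimal statistical PMP sketch in the spirit of Hershfield's method; the fixed K_m value is a placeholder assumption, whereas the paper derives an enveloping frequency-factor curve from the regional station data.

```python
import numpy as np

def hershfield_pmp(annual_max_daily, k_m=15.0):
    """Statistical 1-day PMP in the spirit of Hershfield's frequency-factor
    method: PMP = mean + K_m * std of the annual-maximum daily series.
    The fixed k_m here is a placeholder; the paper instead derives an
    enveloping frequency-factor curve from the regional station data."""
    x = np.asarray(annual_max_daily, dtype=float)
    return x.mean() + k_m * x.std(ddof=1)

rng = np.random.default_rng(1)
series = rng.gumbel(loc=60.0, scale=20.0, size=40)   # 40 synthetic years, mm
print(f"1-day PMP estimate: {hershfield_pmp(series):.0f} mm")
```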

  8. Algorithms for Autonomous GPS Orbit Determination and Formation Flying: Investigation of Initialization Approaches and Orbit Determination for HEO

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; Speed, Eden; Leitner, Jesse A. (Technical Monitor)

    2002-01-01

    This report summarizes the efforts to date in processing GPS measurements in High Earth Orbit (HEO) applications by the Colorado Center for Astrodynamics Research (CCAR). Two specific projects were conducted: initialization of the orbit propagation software, GEODE, using nominal orbital elements for the IMEX orbit, and processing of actual and simulated GPS data from the AMSAT satellite using a Doppler-only batch filter. CCAR has investigated a number of approaches for initialization of the GEODE orbit estimator with little a priori information. This document describes a batch solution approach that uses pseudorange or Doppler measurements collected over an orbital arc to compute an epoch state estimate. The algorithm is based on limited orbital element knowledge from which a coarse estimate of satellite position and velocity can be determined and used to initialize GEODE. This algorithm assumes knowledge of the nominal orbital elements (a, e, i, Ω, ω) and uses a search on the time of perigee passage (τp) to estimate the host satellite position within the orbit and the approximate receiver clock bias. Results of the method are shown for a simulation including large orbital uncertainties and measurement errors. In addition, CCAR has attempted to process GPS data from the AMSAT satellite to obtain an initial estimate of the orbit. Limited GPS data have been received to date, with few satellites tracked and no computed point solutions. Unknown variables in the received data have made computation of a precise orbit using the recovered pseudorange difficult. This document describes the Doppler-only batch approach used to compute the AMSAT orbit. Both actual flight data from AMSAT, and simulated data generated using the Satellite Tool Kit and Goddard Space Flight Center's Flight Simulator, were processed. Results for each case and conclusions are presented.

  9. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle-tracking-based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction, and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with those estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.

  10. Estimating avian population size using Bowden's estimator

    USGS Publications Warehouse

    Diefenbach, D.R.

    2009-01-01

    Avian researchers often uniquely mark birds, and multiple estimators could be used to estimate population size using individually identified birds. However, most estimators of population size require that all sightings of marked birds be uniquely identified, and many assume homogeneous detection probabilities. Bowden's estimator can incorporate sightings of marked birds that are not uniquely identified and relax assumptions required of other estimators. I used computer simulation to evaluate the performance of Bowden's estimator for situations likely to be encountered in bird studies. When the assumptions of the estimator were met, abundance and variance estimates and confidence-interval coverage were accurate. However, precision was poor for small population sizes (N < 50) unless a large percentage of the population was marked (>75%) and multiple (≥8) sighting surveys were conducted. If additional birds are marked after sighting surveys begin, it is important to initially mark a large proportion of the population (pm ≥ 0.5 if N ≤ 100 or pm > 0.1 if N ≥ 250) and minimize sightings in which birds are not uniquely identified; otherwise, most population estimates will be overestimated by >10%. Bowden's estimator can be useful for avian studies because birds can be resighted multiple times during a single survey, not all sightings of marked birds have to uniquely identify individuals, detection probabilities among birds can vary, and the complete study area does not have to be surveyed. I provide computer code for use with pilot data to design mark-resight surveys to meet desired precision for abundance estimates.

  11. Estimating BrAC from transdermal alcohol concentration data using the BrAC estimator software program.

    PubMed

    Luczak, Susan E; Rosen, I Gary

    2014-08-01

    Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step toward making it possible for non-mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain-scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.
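    To illustrate why TAC attenuates and lags BrAC, here is a toy forward model; a simple first-order ODE is used as a hedged stand-in for the fitted diffusion models in the BrAC Estimator software, and all parameter values are invented.

```python
import numpy as np

def tac_forward(brac, dt=1.0, k_skin=0.3, gain=0.9):
    """Toy first-order transdermal model: dTAC/dt = k_skin*(gain*BrAC - TAC).
    A stand-in for the fitted diffusion models in the BrAC Estimator
    software; k_skin and gain play the role of the person- and
    device-specific parameters calibrated in the laboratory session."""
    tac = np.zeros_like(brac)
    for t in range(1, len(brac)):
        tac[t] = tac[t - 1] + dt * k_skin * (gain * brac[t - 1] - tac[t - 1])
    return tac

# synthetic drinking episode: BrAC rises then falls (arbitrary time units)
brac = np.concatenate([np.linspace(0, 0.06, 30), np.linspace(0.06, 0, 60)])
tac = tac_forward(brac)
print(f"peak BrAC {brac.max():.3f} at t={brac.argmax()}; "
      f"peak TAC {tac.max():.3f} at t={tac.argmax()} (attenuated, delayed)")
```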

  12. The spatial structure of magnetospheric plasma disturbance estimated by using magnetic data obtained by SWARM satellites.

    NASA Astrophysics Data System (ADS)

    Yokoyama, Y.; Iyemori, T.; Aoyama, T.

    2017-12-01

    Field-aligned currents (FACs) with various spatial scales flow into and out of the high-latitude ionosphere. Magnetic fluctuations observed by LEO satellites along their orbits, with periods longer than a few seconds, can be regarded as manifestations of the spatial structure of field-aligned currents. This has been confirmed by using the initial orbital characteristics of the three SWARM satellites. From spectral analysis, we evaluated the spectral indices of these magnetic fluctuations and investigated their dependence on region, such as magnetic latitude and MLT. We found that the spectral indices take quite different values in regions equatorward of the auroral-oval boundary (around 63 degrees magnetic latitude) and regions poleward of it. On the other hand, we could not find a clear MLT dependence. In general, FACs are believed to be generated in the magnetospheric plasma sheet and boundary layer, and they flow along the field lines conserving their currents. The theory of FAC generation [e.g., Hasegawa and Sato, 1978] indicates that FACs are strongly connected with magnetospheric plasma disturbances. Although the spectral indices above are those of the spatial structures of the FACs over the ionosphere, by using the theoretical equation of FAC generation we can evaluate the spectral indices of magnetospheric plasma disturbances in the FAC generation regions. Furthermore, by projecting the area of fluctuations onto the equatorial plane of the magnetosphere (i.e., the plasma sheet), we can estimate the spatial structure of the magnetospheric plasma disturbance. In this presentation, we focus on the characteristics of the disturbance in the midnight region and discuss its relation to substorms.

  13. Obtaining appropriate interval estimates for age when multiple indicators are used: evaluation of an ad-hoc procedure.

    PubMed

    Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick

    2016-03-01

    When an estimate of age is needed, typically multiple indicators are present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals that are too small. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to bring this pragmatic approach to attention and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, the third molar scores of Köhler et al. (1995) are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.

  14. The Influence of Chain Microstructure of Biodegradable Copolyesters Obtained with Low-Toxic Zirconium Initiator to In Vitro Biocompatibility

    PubMed Central

    Orchel, Arkadiusz; Kasperczyk, Janusz; Marcinkowski, Andrzej; Pamula, Elzbieta; Orchel, Joanna; Bielecki, Ireneusz

    2013-01-01

    Because of the wide use of biodegradable materials in tissue engineering, it is necessary to obtain biocompatible polymers with different mechanical and physical properties as well as degradation rates. Novel co- and terpolymers of various compositions and chain microstructures have been developed and applied for cell culture. The aim of this study was to evaluate the adhesion and proliferation of human chondrocytes on four biodegradable copolymers: lactide-co-glycolide, lactide-co-ε-caprolactone, lactide-co-trimethylene carbonate, and glycolide-co-ε-caprolactone, and one terpolymer, glycolide-co-lactide-co-ε-caprolactone, synthesized with the use of zirconium acetylacetonate as a nontoxic initiator. The chain microstructure of the copolymers was analyzed by means of 1H and 13C NMR spectroscopy, and surface properties by the AFM technique. Cell adhesion and proliferation were determined with the CyQUANT Cell Proliferation Assay Kit. After 4 h, chondrocyte adhesion on the surface of the studied materials was comparable to standard TCPS. Cell proliferation occurred on all the substrates; however, among the studied polymers, poly(L-lactide-co-glycolide) 85:15, which had the most blocky structure, best supported cell growth. Chondrocytes retained cell membrane integrity, as evaluated by the LDH release assay. In summary, all the studied polymers are well tolerated by the cells, which makes them appropriate substrates for human chondrocyte growth. PMID:24062998

  15. Technology Estimating: A Process to Determine the Cost and Schedule of Space Technology Research and Development

    NASA Technical Reports Server (NTRS)

    Cole, Stuart K.; Reeves, John D.; Williams-Byrd, Julie A.; Greenberg, Marc; Comstock, Doug; Olds, John R.; Wallace, Jon; DePasquale, Dominic; Schaffer, Mark

    2013-01-01

    NASA is investing in new technologies spanning 14 primary technology roadmap areas as well as aeronautics. Understanding the cost of research and development of these technologies, and the time it takes to increase their maturity, is important to the support of ongoing and future NASA missions. Overall, technology estimating may help guide technology investment strategies, improve the evaluation of technology affordability, and aid decision support. This research provides a summary of the framework development of a Technology Estimating process in which four technology roadmap areas were selected for study. The framework includes definitions of terms, a discussion of narrowing the focus from 14 NASA Technology Roadmap areas to four, and further refinement to technologies in the TRL range of 2 to 6. Also included in this paper is a discussion of the 20 unique technology parameters that were initially identified, evaluated, and subsequently reduced in number for use in characterizing these technologies. The data acquisition effort and the criteria established for data quality are described. The findings obtained during the research include the gaps identified and a description of a spreadsheet-based estimating tool initiated as part of the Technology Estimating process.

  16. The Vertical Linear Fractional Initialization Problem

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    1999-01-01

    This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next, the vector fractional-order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.
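    For reference, the F-function appearing in such solutions is commonly written (in the Lorenzo-Hartley form; this definition is supplied here as background and is not quoted from the abstract) as

```latex
\[
F_q[a,t] \;=\; t^{q-1}\sum_{n=0}^{\infty}\frac{a^{n}\,t^{nq}}{\Gamma(nq+q)}
\;=\; t^{q-1}\,E_{q,q}\!\bigl(a\,t^{q}\bigr),
\qquad
E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)},
\]
```

    so that \(F_q[a,t]\) generalizes \(e^{at}\) and serves as the impulse response of the scalar fractional system \(d^{q}x/dt^{q} = a\,x + u\).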

  17. Comparative estimation and assessment of initial soil moisture conditions for Flash Flood warning in Saxony

    NASA Astrophysics Data System (ADS)

    Luong, Thanh Thi; Kronenberg, Rico; Bernhofer, Christian; Janabi, Firas Al; Schütze, Niels

    2017-04-01

    Flash floods are known as highly destructive natural hazards due to their sudden appearance and severe consequences. In Saxony/Germany, flash floods occur in small and medium catchments of low mountain ranges which are typically ungauged. Besides rainfall and orography, pre-event moisture is decisive, as it determines the available natural retention in the catchment. The Flash Flood Guidance concept according to the WMO and Prof. Marco Borga (University of Padua) will be adapted to incorporate pre-event moisture in real-time flood forecasts within the ESF EXTRUSO project (SAB-Nr. 100270097). To arrive at pre-event moisture for the complete area of the low mountain range with flash flood potential, a widely applicable, accurate but yet simple approach is needed. Here, we use radar precipitation as the input time series, detailed orographic, land-use and soil information, and a lumped-parameter model to estimate the overall catchment soil moisture and potential retention (see the sketch after this entry). When combined with a rainfall forecast and its intrinsic uncertainty, the approach allows finding the point in time when precipitation exceeds the retention potential of the catchment. Then, spatially distributed and complex hydrological modeling and additional measurements can be initiated. Assuming reasonable rainfall forecasts of 24 to 48 hrs, this part can start up to two days in advance of the actual event. The lumped-parameter model BROOK90 is used and tested for well-observed catchments. First, physically meaningful parameters (like albedo or soil porosity) were set according to standards; second, "free" parameters (like the percentage of lateral flow) were calibrated objectively by PEST (Model-Independent Parameter Estimation and Uncertainty Analysis) with the target on evapotranspiration and soil moisture, both of which have been measured at the Anchor Station Tharandt study site in Saxony/Germany. Finally, first results are presented for the Wernersbach catchment in Tharandt forest for main flood events in the 50
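    The Flash Flood Guidance logic can be caricatured in a few lines: track catchment soil moisture with a bucket model and flag the moment forecast rain exceeds the remaining retention. The sketch below is a crude stand-in for the calibrated BROOK90 runs, and all numbers are invented.

```python
import numpy as np

def retention_deficit(precip, pet, capacity=120.0, sm0=60.0):
    """Toy bucket model of catchment soil moisture: each day adds rainfall
    and removes moisture-limited evapotranspiration; the remaining storage
    headroom approximates the natural retention still available."""
    sm, deficit = sm0, []
    for p, e in zip(precip, pet):
        sm = np.clip(sm + p - e * (sm / capacity), 0.0, capacity)
        deficit.append(capacity - sm)
    return np.array(deficit)

precip = [0, 5, 0, 20, 30, 0, 0]        # mm/day, e.g. radar-derived
pet = [3.0] * 7                         # mm/day potential evapotranspiration
forecast_rain = 35.0                    # mm expected in the next 24 h
headroom = retention_deficit(precip, pet)[-1]
print("flash-flood guidance exceeded" if forecast_rain > headroom else "ok")
```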

  18. Clinical and Genetic Determinants of Warfarin Pharmacokinetics and Pharmacodynamics during Treatment Initiation

    PubMed Central

    Gong, Inna Y.; Schwarz, Ute I.; Crown, Natalie; Dresser, George K.; Lazo-Langner, Alejandro; Zou, GuangYong; Roden, Dan M.; Stein, C. Michael; Rodger, Marc; Wells, Philip S.; Kim, Richard B.; Tirona, Rommel G.

    2011-01-01

    Variable warfarin response during treatment initiation poses a significant challenge to providing optimal anticoagulation therapy. We investigated the determinants of initial warfarin response in a cohort of 167 patients. During the first nine days of treatment with pharmacogenetics-guided dosing, S-warfarin plasma levels and international normalized ratio were obtained to serve as inputs to a pharmacokinetic-pharmacodynamic (PK-PD) model. Individual PK (S-warfarin clearance) and PD (Imax) parameter values were estimated. Regression analysis demonstrated that CYP2C9 genotype, kidney function, and gender were independent determinants of S-warfarin clearance. The values for Imax were dependent on VKORC1 and CYP4F2 genotypes, vitamin K status (as measured by plasma concentrations of proteins induced by vitamin K absence, PIVKA-II) and weight. Importantly, indication for warfarin was a major independent determinant of Imax during initiation, where PD sensitivity was greater in atrial fibrillation than venous thromboembolism. To demonstrate the utility of the global PK-PD model, we compared the predicted initial anticoagulation responses with previously established warfarin dosing algorithms. These insights and modeling approaches have application to personalized warfarin therapy. PMID:22114699

  19. Wheat productivity estimates using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Colwell, J. E. (Principal Investigator); Rice, D. P.; Bresnahan, P. A.

    1977-01-01

    The author has identified the following significant results. Large area LANDSAT yield estimates were generated. These results were compared with estimates computed using a meteorological yield model (CCEA). Both of these estimates were compared with Kansas Crop and Livestock Reporting Service (KCLRS) estimates of yield, in an attempt to assess the relative and absolute accuracy of the LANDSAT and CCEA estimates. Results were inconclusive. A large area direct wheat prediction procedure was implemented. Initial results have produced a wheat production estimate comparable with the KCLRS estimate.

  20. The Use of Indirect Estimates of Soil Moisture to Initialize Coupled Models and its Impact on Short-Term and Seasonal Simulations

    NASA Technical Reports Server (NTRS)

    Lapenta, William M.; Crosson, William; Dembek, Scott; Lakhtakia, Mercedes

    1998-01-01

    It is well known that soil moisture is a characteristic of the land surface that strongly affects the partitioning of outgoing radiation into sensible and latent heat, which significantly impacts both weather and climate. Detailed land surface schemes are now being coupled to mesoscale atmospheric models in order to represent the effect of soil moisture upon atmospheric simulations. However, there is little direct soil moisture data available to initialize these models on regional to continental scales. As a result, a Soil Hydrology Model (SHM) has been used to generate an indirect estimate of the soil moisture conditions over the continental United States at a grid resolution of 36 km on a daily basis since 8 May 1995. The SHM is forced by analyses of atmospheric observations, including precipitation, and contains detailed information on slope, soil, and landcover characteristics. The purpose of this paper is to evaluate the utility of initializing a detailed coupled model with the soil moisture data produced by SHM.

  1. Simulating the Surface Relief of Nanoaerosols Obtained via the Rapid Cooling of Droplets

    NASA Astrophysics Data System (ADS)

    Tovbin, Yu. K.; Zaitseva, E. S.; Rabinovich, A. B.

    2018-03-01

    An approach is formulated that theoretically describes the structure of a rough surface of small aerosol particles obtained from a liquid droplet upon its rapid cooling. The problem consists of two stages. In the first stage, a concentration profile of the droplet-vapor transition region is calculated. In the second stage, local fractions of vacant sites and their pairs are found on the basis of this profile, and the rough structure of a frozen droplet surface transitioning to the solid state is calculated. The model parameters are the initial droplet temperature and the lateral interaction parameters between droplet atoms. Information on vacant sites inside the transition region allows us to identify adsorption centers and estimate the monolayer capacity relative to the total space of the transition region. The approach is oriented toward calculating adsorption isotherms on real surfaces.

  2. Estimated mortality of adult HIV-infected patients starting treatment with combination antiretroviral therapy

    PubMed Central

    Yiannoutsos, Constantin Theodore; Johnson, Leigh Francis; Boulle, Andrew; Musick, Beverly Sue; Gsponer, Thomas; Balestre, Eric; Law, Matthew; Shepherd, Bryan E; Egger, Matthias

    2012-01-01

    Objective To provide estimates of mortality among HIV-infected patients starting combination antiretroviral therapy. Methods We report death rates for 122 925 adult HIV-infected patients aged 15 years or older from East, Southern and West Africa, Asia Pacific and Latin America. We use two methods to adjust for biases in mortality estimation resulting from loss to follow-up, based on double-sampling methods applied to patient outreach (Kenya) and linkage with vital registries (South Africa), and apply these to mortality estimates in the other three regions. Age, gender and CD4 count at the initiation of therapy were the factors considered as predictors of mortality at 6, 12, 24 and >24 months after the start of treatment. Results Patient mortality was high during the first 6 months after therapy for all patient subgroups and exceeded 40 per 100 patient-years among patients who started treatment at a low CD4 count. This trend was seen regardless of region, demographic or disease-related risk factor. Mortality was under-reported by up to 100% or more when compared with estimates obtained from passive monitoring of patient vital status. Conclusions Despite advances in antiretroviral treatment coverage, many patients start treatment at very low CD4 counts and experience significant mortality during the first 6 months after treatment initiation. Active patient tracing and linkage with vital registries are critical in adjusting estimates of mortality, particularly in low- and middle-income settings. PMID:23172344

  3. Local Estimators for Spacecraft Formation Flying

    NASA Technical Reports Server (NTRS)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh

    2011-01-01

    A formation estimation architecture for formation flying builds upon local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions of existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained from local measurements passed through a set of communicating Kalman filters to reach an overall state estimate for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated into the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurements available to each estimator.
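    The agreement step is easy to caricature: each node repeatedly averages toward its neighbors' estimates over the communication graph. The sketch below is a simplified consensus iteration on a scalar state, an assumption made for brevity, not the paper's full communicating-Kalman-filter architecture.

```python
import numpy as np

def consensus_step(estimates, adjacency, eps=0.3):
    """One agreement iteration: every local estimator moves toward its
    neighbors' estimates (a simplified agreement-type filter)."""
    deg = adjacency.sum(axis=1, keepdims=True)
    return estimates + eps * (adjacency @ estimates - deg * estimates)

# four spacecraft on a ring network, each holding a noisy local Kalman
# estimate of a common (scalar, for brevity) formation state
rng = np.random.default_rng(2)
truth = 1.0
local = truth + 0.2 * rng.standard_normal((4, 1))
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
for _ in range(20):
    local = consensus_step(local, ring)
print(local.ravel())   # all estimates agree near the network average
```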

  4. 21 CFR 1315.34 - Obtaining an import quota.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Obtaining an import quota. 1315.34 Section 1315.34 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE IMPORTATION AND PRODUCTION QUOTAS... imports, the estimated medical, scientific, and industrial needs of the United States, the establishment...

  5. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
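    To illustrate the core computation and its sensitivity, here is a hedged sketch: the closed-form normal-normal stress-strength reliability next to a Monte Carlo check in which the true strength population is Weibull with the same mean and standard deviation. The distributions and parameters are invented for illustration.

```python
import numpy as np
from scipy import stats

def reliability_normal(mu_s, sd_s, mu_l, sd_l):
    """Stress-strength reliability under normal assumptions:
    R = P(strength > stress) = Phi((mu_s - mu_l)/sqrt(sd_s^2 + sd_l^2))."""
    return stats.norm.cdf((mu_s - mu_l) / np.hypot(sd_s, sd_l))

# the "true" strength population is Weibull; an analyst fitting a normal
# with the same mean and standard deviation overstates the reliability
strength = stats.weibull_min(c=10.0, scale=100.0)
mu_s, sd_s = strength.mean(), strength.std()
mu_l, sd_l = 60.0, 5.0                       # stress (load) distribution

rng = np.random.default_rng(3)
n = 2_000_000
r_mc = np.mean(strength.rvs(n, random_state=rng) > rng.normal(mu_l, sd_l, n))
print(f"normal-assumption R = {reliability_normal(mu_s, sd_s, mu_l, sd_l):.5f}")
print(f"Monte Carlo (Weibull) R = {r_mc:.5f}   # lower: the tails differ")
```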

  6. Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor

    NASA Astrophysics Data System (ADS)

    Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.

    2017-05-01

    This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
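    As background, the discrete-time Wiener process acceleration (WPA) motion model used for the image tracks can be written down directly; the matrices below follow the standard textbook discretization, and the values of T and q are arbitrary placeholders.

```python
import numpy as np

def wpa_model(T, q):
    """Discrete-time Wiener process acceleration model for one coordinate,
    state = [position, velocity, acceleration]; q is the power spectral
    density of the white noise driving the acceleration (standard
    textbook discretization, e.g. Bar-Shalom et al.)."""
    F = np.array([[1.0, T, T**2 / 2],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                      [T**4 / 8,  T**3 / 3, T**2 / 2],
                      [T**3 / 6,  T**2 / 2, T]])
    return F, Q

F, Q = wpa_model(T=0.1, q=1.0)   # two such blocks model 2D motion in the FPA
```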

  7. Mobile computing initiatives within pharmacy education.

    PubMed

    Cain, Jeff; Bird, Eleanora R; Jones, Mikael

    2008-08-15

    To identify mobile computing initiatives within pharmacy education, including how devices are obtained, supported, and utilized within the curriculum. An 18-item questionnaire was developed and delivered to academic affairs deans (or the closest equivalent) of 98 colleges and schools of pharmacy. Fifty-four colleges and schools completed the questionnaire, for a 55% completion rate. Thirteen of those schools have implemented mobile computing requirements for students. Twenty schools reported they were likely to formally consider implementing a mobile computing initiative within 5 years. Numerous models of mobile computing initiatives exist in terms of device obtainment, technical support, infrastructure, and utilization within the curriculum. Respondents identified flexibility in teaching and learning as the most positive aspect of the initiatives and computer-aided distraction as the most negative. Numerous factors should be taken into consideration when deciding if and how a mobile computing requirement should be implemented.

  8. Communication: Estimating the initial biasing potential for λ-local-elevation umbrella-sampling (λ-LEUS) simulations via slow growth

    NASA Astrophysics Data System (ADS)

    Bieler, Noah S.; Hünenberger, Philippe H.

    2014-11-01

    In a recent article [Bieler et al., J. Chem. Theory Comput. 10, 3006-3022 (2014)], we introduced a combination of the λ-dynamics (λD) approach for calculating alchemical free-energy differences and of the local-elevation umbrella-sampling (LEUS) memory-based biasing method to enhance the sampling along the alchemical coordinate. The combined scheme, referred to as λ-LEUS, was applied to the perturbation of hydroquinone to benzene in water as a test system, and found to represent an improvement over thermodynamic integration (TI) in terms of sampling efficiency at equivalent accuracy. However, the preoptimization of the biasing potential required in the λ-LEUS method involves "filling up" all the basins in the potential of mean force. This introduces a non-productive pre-sampling time that is system-dependent and generally exceeds the corresponding equilibration time in a TI calculation. In this letter, a remedy to this problem is proposed, termed the slow growth memory guessing (SGMG) approach. Instead of initializing the biasing potential to zero at the start of the preoptimization, an approximate potential of mean force is estimated from a short slow-growth calculation, and its negative is used to construct the initial memory. Considering the same test system as in the preceding article, it is shown that the application of SGMG in λ-LEUS reduces the preoptimization time by about a factor of four.
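    In symbols, the SGMG initialization amounts to the following (a paraphrase of the idea, not equations quoted from the letter): a slow-growth pass yields a rough potential of mean force along λ, whose negative seeds the memory,

```latex
\[
\hat G_{\mathrm{SG}}(\lambda)\;\approx\;\int_{0}^{\lambda}
\Bigl\langle \frac{\partial \mathcal{H}}{\partial \lambda'} \Bigr\rangle_{\mathrm{SG}}
\,\mathrm{d}\lambda',
\qquad
B^{(0)}(\lambda)\;=\;-\,\hat G_{\mathrm{SG}}(\lambda),
\]
```

    so that the initial biasing potential approximately flattens the free-energy landscape along λ before the LEUS refinement begins.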

  9. Reduction of initial shock in decadal predictions using a new initialization strategy

    NASA Astrophysics Data System (ADS)

    He, Yujun; Wang, Bin; Liu, Mimi; Liu, Li; Yu, Yongqiang; Liu, Juanjuan; Li, Ruizhe; Zhang, Cheng; Xu, Shiming; Huang, Wenyu; Liu, Qun; Wang, Yong; Li, Feifei

    2017-08-01

    A novel full-field initialization strategy based on the dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar) is proposed to alleviate the well-known initial shock occurring in the early years of decadal predictions. It generates consistent initial conditions, which best fit the monthly mean oceanic analysis data along the coupled model trajectory in 1-month windows. Three indices to measure the initial shock intensity are also proposed. Results indicate that this method does reduce the initial shock in decadal predictions by the Flexible Global Ocean-Atmosphere-Land System model, Grid-point version 2 (FGOALS-g2), compared with the three-dimensional variational data assimilation-based nudging full-field initialization for the same model, and is comparable to or even better than the initialization strategies of other models in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Better hindcasts of global mean surface air temperature anomalies can be obtained than in other FGOALS-g2 experiments. Due to the good model response to external forcing and the reduction of initial shock, higher decadal prediction skill is achieved than in other CMIP5 models.

  10. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of "confounding amplification" to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method's requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method's estimates (although bootstrapping is one plausible approach). Conclusions: To this author's knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is

  11. Brain-computer interface for alertness estimation and improving

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander; Maksimenko, Vladimir; Hramova, Marina

    2018-02-01

    Using wavelet analysis of signals of electrical brain activity (EEG), we study the processes of neural activity associated with the perception of visual stimuli. We demonstrate that the brain can process visual stimuli in two scenarios: (i) perception is characterized by suppression of the alpha waves and an increase in high-frequency (beta) activity, or (ii) the beta rhythm is not well pronounced, while the alpha-wave energy remains unchanged. Dedicated experiments show that the motivation factor initiates the first scenario, explained by increasing alertness. Based on the obtained results, we build a brain-computer interface and demonstrate how the degree of alertness can be estimated and controlled in a real experiment.
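    A minimal sketch of the band-power logic behind scenario (i) follows; Welch's method is used here as a simple stand-in for the wavelet energies of the study, and the sampling rate, data, and decision threshold are purely illustrative.

```python
import numpy as np
from scipy import signal

def band_power(eeg, fs, lo, hi):
    """Mean PSD power in [lo, hi] Hz via Welch's method -- a simple
    stand-in for the wavelet energies used in the study."""
    f, pxx = signal.welch(eeg, fs=fs, nperseg=2 * fs)
    return pxx[(f >= lo) & (f <= hi)].mean()

fs = 250                                    # Hz, illustrative sampling rate
rng = np.random.default_rng(4)
eeg = rng.standard_normal(10 * fs)          # stand-in for a 10 s EEG trace
alpha = band_power(eeg, fs, 8, 12)
beta = band_power(eeg, fs, 15, 30)
# scenario (i): alpha suppression with a beta rise -> higher alertness;
# the threshold below is purely illustrative
print("alert" if beta / alpha > 1.0 else "relaxed")
```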

  12. Reciprocal Sliding Friction Model for an Electro-Deposited Coating and Its Parameter Estimation Using Markov Chain Monte Carlo Method

    PubMed Central

    Kim, Kyungmok; Lee, Jaewook

    2016-01-01

    This paper describes a sliding friction model for an electro-deposited coating. Reciprocating sliding tests using a ball-on-flat-plate test apparatus are performed to determine the evolution of the kinetic friction coefficient. The evolution of the friction coefficient is classified into the initial running-in period, steady-state sliding, and the transition to higher friction. The friction coefficient during the initial running-in period and steady-state sliding is expressed as a simple linear function. The friction coefficient in the transition to higher friction is described with a mathematical model derived from a Kachanov-type damage law. The model parameters are then estimated using the Markov Chain Monte Carlo (MCMC) approach. The friction coefficients estimated by the MCMC approach are found to be in good agreement with the measured ones. PMID:28773359
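    The estimation step can be sketched with a generic random-walk Metropolis sampler fitting the linear (running-in/steady-sliding) portion of such a model; the sampler, toy data, and noise level are assumptions for illustration, not the authors' MCMC setup or Kachanov-type damage term.

```python
import numpy as np

def metropolis(logpost, theta0, prop_sd, steps=5000, seed=5):
    """Random-walk Metropolis sampler (a minimal MCMC stand-in)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    lp = logpost(theta)
    chain = []
    for _ in range(steps):
        cand = theta + prop_sd * rng.standard_normal(theta.size)
        lp_c = logpost(cand)
        if np.log(rng.random()) < lp_c - lp:   # accept/reject
            theta, lp = cand, lp_c
        chain.append(theta.copy())
    return np.array(chain)

# toy running-in/steady-sliding data: mu(n) = a + b*n plus noise
cycles = np.arange(100.0)
mu_obs = 0.30 + 5e-4 * cycles + 0.01 * np.random.default_rng(6).standard_normal(100)

def logpost(th):                            # Gaussian likelihood, flat priors
    a, b = th
    return -0.5 * np.sum(((mu_obs - (a + b * cycles)) / 0.01) ** 2)

chain = metropolis(logpost, theta0=[0.2, 0.0], prop_sd=np.array([5e-3, 5e-5]))
print(chain[2500:].mean(axis=0))            # posterior means near (0.30, 5e-4)
```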

  13. Concurrent initialization for Bearing-Only SLAM.

    PubMed

    Munguía, Rodrigo; Grau, Antoni

    2010-01-01

    Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The sensors have a large impact on the algorithm used for SLAM. Early SLAM approaches focused on the use of range sensors such as sonar rings or lasers. However, cameras have become more and more widely used, because they yield a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Unlike range sensors, which provide range and angular information, a camera is a projective sensor which measures the bearing of image features. Therefore depth information (range) cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms: the Bearing-Only SLAM methods, which mainly rely on special techniques for feature initialization in order to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work a novel and robust method, called Concurrent Initialization, is presented, which draws on the complementary advantages of the Undelayed and Delayed methods that represent the most common approaches to addressing the problem. The key is to use two kinds of feature representations concurrently for the undelayed and delayed stages of the estimation. The simulation results show that the proposed method surpasses the performance of previous schemes.

  14. Concurrent Initialization for Bearing-Only SLAM

    PubMed Central

    Munguía, Rodrigo; Grau, Antoni

    2010-01-01

    Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The sensors have a large impact on the algorithm used for SLAM. Early SLAM approaches focused on the use of range sensors such as sonar rings or lasers. However, cameras have become more and more widely used, because they yield a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Unlike range sensors, which provide range and angular information, a camera is a projective sensor which measures the bearing of image features. Therefore depth information (range) cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms: the Bearing-Only SLAM methods, which mainly rely on special techniques for feature initialization in order to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work a novel and robust method, called Concurrent Initialization, is presented, which draws on the complementary advantages of the Undelayed and Delayed methods that represent the most common approaches to addressing the problem. The key is to use two kinds of feature representations concurrently for the undelayed and delayed stages of the estimation. The simulation results show that the proposed method surpasses the performance of previous schemes. PMID:22294884

  15. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
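    The three catch-rate estimators compared here are one-liners, sketched below with invented interview data; the short-trip cutoff of 0.5 h follows the abstract.

```python
import numpy as np

def catch_rates(catch, hours, min_trip=0.5):
    """The three mean catch-rate estimators compared in the study."""
    catch, hours = np.asarray(catch, float), np.asarray(hours, float)
    rom = catch.sum() / hours.sum()             # ratio of means (ROM)
    mor = np.mean(catch / hours)                # mean of ratios (MOR)
    keep = hours > min_trip                     # MOR without short trips
    mor_trunc = np.mean(catch[keep] / hours[keep])
    return rom, mor, mor_trunc

# interviews: (fish caught, hours fished) -- invented numbers
rom, mor, mor_t = catch_rates([0, 1, 2, 0, 3], [0.4, 2.0, 3.5, 1.0, 5.0])
print(rom, mor, mor_t)   # total catch = mean catch rate x estimated effort
```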

  16. Financial planning for major initiatives: a framework for success.

    PubMed

    Harris, John M

    2007-11-01

    A solid framework for assessing a major strategic initiative consists of four broad steps: (1) initial considerations, including the level of analysis required and the resources that will be brought to bear; (2) preliminary financial estimates for board approval to further assess the initiative; (3) assessment of potential partners' interest in the project; and (4) feasibility analysis for the board's green light.

  17. Estimation of within-stratum variance for sample allocation: Foreign commodity production forecasting

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)

    1980-01-01

    The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to stratum variance estimation for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
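
    The role such stratum variance estimates play in allocation can be illustrated with the standard Neyman (optimum) allocation rule, with n_h proportional to N_h*S_h; the strata below are hypothetical:

        import numpy as np

        S = np.array([12.0, 30.0, 18.0])  # initial stratum standard-deviation estimates
        N = np.array([400, 250, 350])     # stratum sizes (number of sampling units)
        n_total = 100                     # overall sample-size budget

        weights = N * S                   # Neyman allocation: n_h proportional to N_h * S_h
        n_h = np.round(n_total * weights / weights.sum()).astype(int)
        print(n_h)                        # units allocated to each stratum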

  18. Estimation of Length-Scales in Soils by MRI

    NASA Technical Reports Server (NTRS)

    Daidzic, N. E.; Altobelli, S.; Alexander, J. I. D.

    2004-01-01

    Soil can best be described as an unconsolidated granular medium that forms a porous structure. The present macroscopic theory of water transport in porous media rests upon the continuum hypothesis that the physical properties of porous media can be associated with continuous, twice-differentiable field variables whose spatial domain is a set of centroids of Representative Elementary Volume (REV) elements. MRI is an ideal technique for estimating various length-scales in porous media. A 0.267 T permanent magnet at NASA GRC was used for this study. 2D or 3D spatially-resolved porosity distributions were obtained from the NMR signal strength in each voxel and the spin-lattice relaxation time. Classical spin-warp imaging with Multiple Spin Echoes (MSE) was used to evaluate the proton density in each voxel. An initial resolution of 256 x 256 was subsequently reduced by averaging neighboring voxels, and the convergence of the porosity was observed. A number of engineered "space candidate" soils such as Isolite(trademark), Zeoponics(trademark), Turface(trademark), and Profile(trademark) were used. Glass beads in the size range between 50 microns and 2 mm were used as well. Initial results with saturated porous samples have shown a good estimate of the average porosity, consistent with the gravimetric porosity measurements. For Profile(trademark) samples with particle sizes ranging between 0.25 and 1 mm and a characteristic interparticle pore size of 100 microns, the characteristic Darcy scale was estimated to be about delta(sub REV) = 10 mm. The glass-bead porosity shows clear convergence toward a definite REV that stays constant throughout a homogeneous sample. Additional information is included in the original extended abstract.
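
    The convergence-by-averaging step can be sketched as follows (a synthetic random pore map stands in for the MRI voxel data):

        import numpy as np

        rng = np.random.default_rng(0)
        pores = (rng.random((256, 256)) < 0.4).astype(float)  # 1 = pore, 0 = solid

        img = pores
        for level in range(1, 7):
            # Average 2x2 voxel neighborhoods, halving the resolution each pass.
            img = 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                          + img[0::2, 1::2] + img[1::2, 1::2])
            # The spread of local porosity shrinks as voxels approach the REV scale.
            print(f"voxel size x{2 ** level}: porosity std = {img.std():.4f}")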

  19. Reduction of initial shock in decadal predictions using a new initialization strategy

    NASA Astrophysics Data System (ADS)

    He, Yujun; Wang, Bin

    2017-04-01

    Initial shock is a well-known problem occurring in the early years of a decadal prediction when full-field observations are assimilated into a coupled model, and it directly affects the prediction skill. To alleviate this problem, we propose a novel full-field initialization method based on dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar). Different from the available solution strategies, including anomaly assimilation and bias correction, it substantially reduces the initial shock by generating more consistent initial conditions for the coupled model, which, along with the model trajectory in one-month windows, best fit the monthly mean analysis data of oceanic temperature and salinity. We evaluate the performance of the initialized hindcast experiments according to three proposed indices that measure the intensity of the initial shock. The results indicate that this strategy can clearly reduce the initial shock in decadal predictions by FGOALS-g2 (the Flexible Global Ocean-Atmosphere-Land System model, Grid-point Version 2) compared with the commonly used nudging full-field initialization for the same model, as well as with the full-field initialization strategies of other CMIP5 (the fifth phase of the Coupled Model Intercomparison Project) models whose decadal prediction results are available. It is also comparable to or even better than the anomaly initialization methods. Better hindcasts of the global mean surface air temperature anomaly are obtained due to the reduction of initial shock by the new initialization scheme.

  20. Estimating the global incidence of traumatic spinal cord injury.

    PubMed

    Fitzharris, M; Cripps, R A; Lee, B B

    2014-02-01

    Population modelling--forecasting. To estimate the global incidence of traumatic spinal cord injury (TSCI). An initiative of the International Spinal Cord Society (ISCoS) Prevention Committee. Regression techniques were used to derive regional and global estimates of TSCI incidence. Using the findings of 31 published studies, a regression model was fitted with the known number of TSCI cases as the dependent variable and the population at risk as the single independent variable. In the process of deriving TSCI incidence, an alternative TSCI model was specified in an attempt to arrive at an optimal way of estimating the global incidence of TSCI. The global incidence of TSCI was estimated to be 23 cases per 1,000,000 persons in 2007 (179,312 cases per annum). Results for World Health Organization regions are provided. Understanding the incidence of TSCI is important for health service planning and for the determination of injury prevention priorities. In the absence of high-quality epidemiological studies of TSCI in each country, estimates of TSCI incidence obtained through population modelling can be used to overcome known deficits in global spinal cord injury (SCI) data. The incidence of TSCI is context specific, and an alternative regression model demonstrated how TSCI incidence estimates could be improved with additional data. The results highlight the need for data standardisation and comprehensive reporting of national-level TSCI data. A step-wise approach from the collation of conventional epidemiological data through to population modelling is suggested.
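
    The core regression step described above, with cases regressed on population at risk, reduces to a one-parameter fit; a sketch with hypothetical study data:

        import numpy as np

        population = np.array([2.1e6, 5.3e6, 0.9e6, 7.8e6])  # population at risk
        cases = np.array([48.0, 122.0, 19.0, 180.0])         # observed TSCI cases

        # Least-squares regression through the origin: cases = rate * population.
        rate = np.sum(population * cases) / np.sum(population ** 2)
        print(f"{rate * 1e6:.1f} cases per million persons per year")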

  1. Multiple scene attitude estimator performance for LANDSAT-1

    NASA Technical Reports Server (NTRS)

    Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.

    1979-01-01

    Initial results are presented that demonstrate the performance of a linear sequential estimator (Kalman filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined over four scenes. With the revised estimator, a GCP-poor scene - a scene with no usable geodetic control points (GCPs) - can be rectified to higher accuracy than otherwise possible, based on the use of GCPs in adjacent scenes. Attitude estimation errors were determined by the use of GCPs located in the GCP-poor test scene but not used to update the Kalman filter. The initial results indicate that errors of 500 m (rms) can be attained for GCP-poor scenes. Operational factors for various scenarios are also discussed.
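
    The idea of a sequential estimator carrying attitude information across GCP-poor scenes can be sketched with a scalar Kalman filter (all values hypothetical; the actual filter estimates a multi-dimensional attitude state):

        def kalman_attitude(measurements, q=1e-4, r=1e-2):
            """Scalar Kalman filter for a slowly drifting attitude angle; z is None
            in GCP-poor scenes, where the filter only propagates its prediction."""
            x, p = 0.0, 1.0              # state estimate and its error variance
            track = []
            for z in measurements:
                p += q                   # predict: random-walk process noise
                if z is not None:        # measurement update only in GCP-rich scenes
                    k = p / (p + r)      # Kalman gain
                    x += k * (z - x)
                    p *= 1.0 - k
                track.append(round(x, 4))
            return track

        print(kalman_attitude([0.10, 0.12, None, None, 0.15]))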

  2. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects

    NASA Astrophysics Data System (ADS)

    Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo

    2017-01-01

    Autonomous close proximity operations are an arduous and attractive problem in space mission design. In particular, the estimation of the pose, motion and inertia properties of an uncooperative object is a challenging task because of the lack of available a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity, attitude and the ratios of the components of the inertia matrix of an uncooperative space object using only stereo-vision measurements. The classical Extended Kalman Filter (EKF) and an Iterated Extended Kalman Filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is considered in the observation model. The relative simplicity of the proposed algorithm could make it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of the estimation errors for all the considered quantities. In several simulations, the results show improvements with respect to similar works in the literature that deal with the same problem. In addition, a video processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia-ratio estimator to provide a complete tool for mass-property recognition.

  3. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should be started from several initial guesses of the parameter set.
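
    A compact sketch of the EM idea for a Weibull mixture, ignoring censoring for brevity and using several random initial guesses because of the multiple local maxima noted above (this is an illustration of the principle, not the paper's algorithm):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import weibull_min

        t = np.concatenate([weibull_min.rvs(1.2, scale=50, size=200, random_state=1),
                            weibull_min.rvs(3.0, scale=400, size=300, random_state=2)])

        def em_weibull_mixture(t, k=2, iters=40, seed=0):
            rng = np.random.default_rng(seed)
            w = np.full(k, 1.0 / k)                          # mixing weights
            shape = rng.uniform(0.5, 3.0, k)                 # random initial guesses
            scale = np.quantile(t, np.linspace(0.25, 0.75, k))
            for _ in range(iters):
                # E-step: responsibility of each subpopulation for each failure time.
                dens = np.stack([w[j] * weibull_min.pdf(t, shape[j], scale=scale[j])
                                 for j in range(k)])
                resp = dens / dens.sum(axis=0)
                # M-step: update weights; maximize each weighted Weibull log-likelihood
                # numerically (log-parameterization keeps shape and scale positive).
                w = resp.mean(axis=1)
                for j in range(k):
                    nll = lambda p, r=resp[j]: -np.sum(
                        r * weibull_min.logpdf(t, np.exp(p[0]), scale=np.exp(p[1])))
                    res = minimize(nll, np.log([shape[j], scale[j]]), method="Nelder-Mead")
                    shape[j], scale[j] = np.exp(res.x)
            ll = np.sum(np.log(np.sum([w[j] * weibull_min.pdf(t, shape[j], scale=scale[j])
                                       for j in range(k)], axis=0)))
            return ll, w, shape, scale

        # The log-likelihood can have multiple local maxima: keep the best restart.
        best = max((em_weibull_mixture(t, seed=s) for s in range(5)), key=lambda r: r[0])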

  4. Receiver function stacks: initial steps for seismic imaging of Cotopaxi volcano, Ecuador

    NASA Astrophysics Data System (ADS)

    Bishop, J. W.; Lees, J. M.; Ruiz, M. C.

    2017-12-01

    Cotopaxi volcano is a large andesitic stratovolcano located within 50 km of the Ecuadorean capital of Quito. In August 2015, Cotopaxi erupted for the first time in 73 years. This eruptive cycle (VEI = 1) featured phreatic explosions and the ejection of an ash column 9 km above the volcano edifice. Following this event, ash covered approximately 500 km2 of the surrounding area. Analysis of Multi-GAS data suggests that this eruption was fed from a shallow source. However, stratigraphic evidence covering the last 800 years of Cotopaxi's activity suggests that there may be a deep magmatic source. To establish a geophysical framework for Cotopaxi's activity, receiver functions were calculated from well-recorded earthquakes detected from April 2015 to December 2015 at 9 permanent broadband seismic stations around the volcano. These events were located, and phase arrivals were manually picked. Radial teleseismic receiver functions were then calculated using an iterative deconvolution technique with a Gaussian width of 2.5. A maximum of 200 iterations was allowed in each deconvolution; iterations were stopped when either the maximum iteration number was reached or the percent change fell beneath a pre-determined tolerance. Receiver functions were then visually inspected, and those with anomalous pulses before the initial P arrival, or with later peaks larger than the initial P-wave correlated pulse, were discarded. Using these data, initial estimates of the crustal thickness and slab depth beneath the volcano were obtained. Estimates of the crustal Vp/Vs ratio for the region were also calculated.

  5. M-estimator for the 3D symmetric Helmert coordinate transformation

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2018-01-01

    The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. A 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration, if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to show the robustness theoretically. In the solution process, the parameter is rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method, considering different cases to investigate whether the assumed stochastic model is correct. The results with simulated data slightly deviating from the true model are used to show the developed method's statistical efficacy under the assumed stochastic model, its robustness against deviations from that model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
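
    The iteratively reweighted least-squares core can be sketched generically (Huber weights stand in for whatever weight function the paper adopts, and the data are hypothetical; this is not the paper's full 3D Helmert formulation):

        import numpy as np

        def irls_huber(A, y, k=1.345, iters=20):
            """Iteratively reweighted least squares with Huber weights --
            a generic M-estimation sketch."""
            x = np.linalg.lstsq(A, y, rcond=None)[0]  # LS fit as initial approximation
            for _ in range(iters):
                r = y - A @ x
                s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
                u = np.abs(r) / max(s, 1e-12)
                w = np.where(u <= k, 1.0, k / u)      # Huber weight function
                Aw = A * w[:, None]                   # row-weighted design matrix
                x = np.linalg.solve(A.T @ Aw, Aw.T @ y)
            return x

        A = np.column_stack([np.ones(6), np.arange(6.0)])
        y = np.array([0.1, 1.0, 2.1, 2.9, 20.0, 5.1])  # 20.0 is a gross outlier
        print(irls_huber(A, y))                        # close to intercept 0, slope 1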

  6. Economic Analysis of Veterans Affairs Initiative to Prevent Methicillin-Resistant Staphylococcus aureus Infections.

    PubMed

    Nelson, Richard E; Stevens, Vanessa W; Khader, Karim; Jones, Makoto; Samore, Matthew H; Evans, Martin E; Douglas Scott, R; Slayton, Rachel B; Schweizer, Marin L; Perencevich, Eli L; Rubin, Michael A

    2016-05-01

    In an effort to reduce methicillin-resistant Staphylococcus aureus (MRSA) transmission through universal screening and isolation, the Department of Veterans Affairs (VA) launched the National MRSA Prevention Initiative in October 2007. The objective of this analysis was to quantify the budget impact and cost effectiveness of this initiative. An economic model was developed using published data on MRSA hospital-acquired infection (HAI) rates in the VA from October 2007 to September 2010; estimates of the costs of MRSA HAIs in the VA; and estimates of the intervention costs, including salaries of staff members hired to support the initiative at each VA facility. To estimate the rate of MRSA HAIs that would have occurred if the initiative had not been implemented, two different assumptions were made: no change and a downward temporal trend. Effectiveness was measured in life-years gained. The initiative resulted in an estimated 1,466-2,176 fewer MRSA HAIs. The initiative itself was estimated to cost $207 million during this 3-year period, while the cost savings from prevented MRSA HAIs ranged from $27 million to $75 million. The incremental cost-effectiveness ratios ranged from $28,048 to $56,944 per life-year gained. The overall impact on the VA's budget was $131-$179 million. Wide-scale implementation of a national MRSA surveillance and prevention strategy in VA inpatient settings may have prevented a substantial number of MRSA HAIs. Although the savings associated with prevented infections offset some but not all of the cost of the initiative, this model indicated that the initiative would be considered cost effective. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
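
    The budget-impact arithmetic reported above can be reproduced directly; the life-years figure below is hypothetical, chosen only to show how an incremental cost-effectiveness ratio is formed:

        cost_initiative = 207e6                  # 3-year program cost from the abstract
        life_years = 3000.0                      # hypothetical life-years gained
        for savings in (27e6, 75e6):             # savings from prevented MRSA HAIs
            net_cost = cost_initiative - savings
            print(f"budget impact ${net_cost / 1e6:.0f}M, "
                  f"ICER ${net_cost / life_years:,.0f} per life-year gained")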

  7. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express the quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE of the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
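
    A simplified sketch of the selection loop, with a jackknife variance over tapers and a curvature-based bias proxy (an illustration of the idea, not the authors' implementation; the bias proxy and test signal are assumptions):

        import numpy as np
        from scipy.signal.windows import dpss

        def logspec_and_var(x, nw):
            n, k = len(x), max(2, int(2 * nw) - 1)       # usual taper count K = 2NW - 1
            tapers = dpss(n, nw, Kmax=k)
            eig = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2   # eigenspectra
            log_s = np.log(eig.mean(axis=0))
            # Jackknife over tapers: delete-one means give a variance estimate.
            jack = np.array([np.log(np.delete(eig, i, axis=0).mean(axis=0))
                             for i in range(k)])
            var = (k - 1) / k * np.sum((jack - jack.mean(axis=0)) ** 2, axis=0)
            return log_s, var

        rng = np.random.default_rng(0)
        x = np.sin(2 * np.pi * 0.1 * np.arange(1024)) + rng.standard_normal(1024)
        for nw in (2.0, 3.0, 4.0, 5.0):
            log_s, var = logspec_and_var(x, nw)
            bias2 = (0.5 * nw ** 2 * np.gradient(np.gradient(log_s))) ** 2  # curvature proxy
            print(nw, np.mean(bias2 + var))   # choose the NW minimizing estimated MSE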

  8. A novel fluorescence microscopy approach to estimate quality loss of stored fruit fillings as a result of browning.

    PubMed

    Cropotova, Janna; Tylewicz, Urszula; Cocci, Emiliano; Romani, Santina; Dalla Rosa, Marco

    2016-03-01

    The aim of the present study was to estimate the quality deterioration of apple fillings during storage. Moreover, the potential of a novel time-saving and non-invasive method based on fluorescence microscopy for the prompt detection of the onset of non-enzymatic browning in fruit fillings was investigated. Apple filling samples were obtained by mixing different quantities of fruit and stabilizing agents (inulin, pectin and gellan gum), thermally processed and stored for 6 months. The preservation of antioxidant capacity (determined by the DPPH method) in the apple fillings was indirectly correlated with the decrease in total polyphenol content, which varied from 34±22 to 56±17%, and the concomitant accumulation of 5-hydroxymethylfurfural (HMF), ranging from 3.4±0.1 to 8±1 mg/kg relative to the initial apple puree values. The mean intensity of the fluorescence emission spectra of the apple filling samples and the initial apple puree was highly correlated (R² > 0.95) with the HMF content, showing good potential of the fluorescence microscopy method for estimating non-enzymatic browning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    PubMed Central

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.

  10. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    PubMed

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.

  11. Single-shot quantum state estimation via a continuous measurement in the strong backaction regime

    NASA Astrophysics Data System (ADS)

    Cook, Robert L.; Riofrío, Carlos A.; Deutsch, Ivan H.

    2014-09-01

    We study quantum tomography based on a stochastic continuous-time measurement record obtained from a probe field collectively interacting with an ensemble of identically prepared systems. In comparison to previous studies, we consider here the case in which the measurement-induced backaction has a non-negligible effect on the dynamical evolution of the ensemble. We formulate a maximum likelihood estimate for the initial quantum state given only a single instance of the continuous diffusive measurement record. We apply our estimator to the simplest problem: state tomography of a single pure qubit, which, during the course of the measurement, is also subjected to dynamical control. We identify a regime where the many-body system is well approximated at all times by a separable pure spin coherent state, whose Bloch vector undergoes a conditional stochastic evolution. We simulate the results of our estimator and show that we can achieve close to the upper bound of fidelity set by the optimal generalized measurement. This estimate is compared to, and significantly outperforms, an equivalent estimator that ignores measurement backaction.

  12. Updating histological data on crown initiation and crown completion ages in southern Africans.

    PubMed

    Reid, Donald J; Guatelli-Steinberg, Debbie

    2017-04-01

    To update histological data on crown initiation and completion ages in southern Africans, and to evaluate the implications of these data for studies that (a) rely on these data to time linear enamel hypoplasias (LEHs) or (b) use these data for comparison to fossil hominins. Initiation ages were calculated on 67 histological sections from southern Africans, with sample sizes ranging from one to 11 per tooth type. Crown completion ages for southern Africans were calculated in two ways. First, actual derived initiation ages were added to crown formation times for each histological section to obtain direct information on the crown completion ages of individuals. Second, average initiation ages from this study were added to average crown formation times of southern Africans from the previous studies of Reid and coworkers that were based on larger samples. For earlier-initiating tooth types (all anterior teeth and first molars), there is little difference in ages of initiation and crown completion between this and previous studies. Differences increase as a function of initiation age, such that the greatest differences between this and previous studies for both initiation and crown completion ages are for the second and third molars. This study documents variation in initiation ages, particularly for later-initiating tooth types. It upholds the use of previously published histological aging charts for LEHs on anterior teeth. However, this study finds that ages of crown initiation and completion in second and third molars for this southern African sample are earlier than previously estimated. These earlier ages reduce differences between modern humans and fossil hominins for these developmental events in second and third molars. © 2017 Wiley Periodicals, Inc.

  13. Communication: Estimating the initial biasing potential for λ-local-elevation umbrella-sampling (λ-LEUS) simulations via slow growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bieler, Noah S.; Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch

    2014-11-28

    In a recent article [Bieler et al., J. Chem. Theory Comput. 10, 3006–3022 (2014)], we introduced a combination of the λ-dynamics (λD) approach for calculating alchemical free-energy differences and of the local-elevation umbrella-sampling (LEUS) memory-based biasing method to enhance the sampling along the alchemical coordinate. The combined scheme, referred to as λ-LEUS, was applied to the perturbation of hydroquinone to benzene in water as a test system, and found to represent an improvement over thermodynamic integration (TI) in terms of sampling efficiency at equivalent accuracy. However, the preoptimization of the biasing potential required in the λ-LEUS method requires “filling up” all the basins in the potential of mean force. This introduces a non-productive pre-sampling time that is system-dependent and generally exceeds the corresponding equilibration time in a TI calculation. In this letter, a remedy is proposed to this problem, termed the slow growth memory guessing (SGMG) approach. Instead of initializing the biasing potential to zero at the start of the preoptimization, an approximate potential of mean force is estimated from a short slow growth calculation, and its negative is used to construct the initial memory. Considering the same test system as in the preceding article, it is shown that the application of SGMG in λ-LEUS permits a reduction of the preoptimization time by about a factor of four.

  14. Initial Results Obtained with the First TWIN VLBI Radio Telescope at the Geodetic Observatory Wettzell

    PubMed Central

    Schüler, Torben; Kronschnabl, Gerhard; Plötz, Christian; Neidhardt, Alexander; Bertarini, Alessandra; Bernhart, Simone; la Porta, Laura; Halsig, Sebastian; Nothnagel, Axel

    2015-01-01

    Geodetic Very Long Baseline Interferometry (VLBI) uses radio telescopes as sensor networks to determine Earth orientation parameters and baseline vectors between the telescopes. The TWIN Telescope Wettzell 1 (TTW1), the first of the new 13.2 m diameter telescope pair at the Geodetic Observatory Wettzell, Germany, is currently in its commissioning phase. The technology behind this radio telescope, including the receiving system and the tri-band feed horn, is depicted. Since VLBI telescopes must operate at least in pairs, the existing 20 m diameter Radio Telescope Wettzell (RTW) is used together with TTW1 for practical tests. In addition, selected long-baseline setups are investigated. Correlation results portraying the data quality achieved during the initial experiments are discussed. Finally, the local 123 m baseline between the old RTW telescope and the new TTW1 is analyzed and compared with an existing high-precision local survey. Our initial results are very satisfactory for X-band group delays, featuring a 3D distance agreement between the VLBI data analysis and the local ties of 1 to 2 mm in the majority of the experiments. However, S-band data, which suffer much from local radio interference due to WiFi and mobile communications, are about 10 times less precise than X-band data and require further analysis, but evidence is provided that S-band data remain usable over long baselines, where local radio interference patterns decorrelate. PMID:26263991

  15. Re-estimating sample size in cluster randomised trials with active recruitment within clusters.

    PubMed

    van Schie, S; Moerbeek, M

    2014-08-30

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
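
    A sketch of the re-estimation step, assuming the usual design-effect correction n_adjusted = n_individual x (1 + (m - 1) x ICC), with all numbers hypothetical:

        import numpy as np

        def clusters_per_arm(n_individual, m, icc):
            """Clusters per arm after inflating an individually randomised sample
            size by the design effect and dividing by the cluster size m."""
            deff = 1.0 + (m - 1) * icc
            return int(np.ceil(n_individual * deff / m))

        print(clusters_per_arm(128, m=20, icc=0.05))  # planning-stage ICC guess -> 13
        print(clusters_per_arm(128, m=20, icc=0.02))  # ICC re-estimated in pilot -> 9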

  16. Estimating the time evolution of NMR systems via a quantum-speed-limit-like expression

    NASA Astrophysics Data System (ADS)

    Villamizar, D. V.; Duzzioni, E. I.; Leal, A. C. S.; Auccaise, R.

    2018-05-01

    Finding the solutions of the equations that describe the dynamics of a given physical system is crucial in order to obtain important information about its evolution. However, by using estimation theory, it is possible to obtain, under certain limitations, some information on its dynamics. The quantum-speed-limit (QSL) theory was originally used to estimate the shortest time in which a Hamiltonian drives an initial state to a final one for a given fidelity. Using the QSL theory in a slightly different way, we are able to estimate the running time of a given quantum process. For that purpose, we impose the saturation of the Anandan-Aharonov bound in a rotating frame of reference where the state of the system travels slower than in the original frame (laboratory frame). Through this procedure it is possible to estimate the actual evolution time in the laboratory frame of reference with good accuracy when compared to previous methods. Our method is tested successfully to predict the time spent in the evolution of nuclear spins 1/2 and 3/2 in NMR systems. We find that the estimated time according to our method is better than previous approaches by up to four orders of magnitude. One disadvantage of our method is that we need to solve a number of transcendental equations, which increases with the system dimension and parameter discretization used to solve such equations numerically.
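
    For reference, the Anandan-Aharonov bound invoked above can be stated as follows (a standard form from the literature, not necessarily the authors' notation):

        \[
          s \;=\; \frac{2}{\hbar}\int_0^{T} \Delta E(t)\,dt
            \;\ge\; 2\arccos\bigl|\langle \psi_0 \mid \psi_T \rangle\bigr|,
        \]

    so that at saturation (geodesic evolution with constant energy uncertainty $\Delta E$) the evolution time is $T = \hbar \arccos|\langle \psi_0 \mid \psi_T \rangle| / \Delta E$; the procedure above imposes this saturation in a rotating frame where the state travels more slowly.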

  17. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
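
    The re-initialized Coulomb-counting loop can be sketched as follows (the EMF-based SoC value is mocked here; the paper obtains it from its approximate least squares estimator):

        def coulomb_count(soc0, currents_a, dt_s, capacity_ah):
            """Integrate current (discharge positive) to track state-of-charge."""
            soc = soc0
            for i in currents_a:
                soc -= i * dt_s / 3600.0 / capacity_ah
            return soc

        soc = coulomb_count(0.80, [0.4, 1.8, 0.4, 0.1], dt_s=1.0, capacity_ah=2.6)
        soc = 0.78  # re-initialization: overwrite with the EMF-based SoC estimate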

  18. Precision and accuracy of age estimates obtained from anal fin spines, dorsal fin spines, and sagittal otoliths for known-age largemouth bass

    USGS Publications Warehouse

    Klein, Zachary B.; Bonvechio, Timothy F.; Bowen, Bryant R.; Quist, Michael C.

    2017-01-01

    Sagittal otoliths are the preferred aging structure for Micropterus spp. (black basses) in North America because of the accurate and precise results they produce. Typically, fisheries managers are hesitant to use lethal aging techniques (e.g., otoliths) for rare species, trophy-size fish, or fish sampled in small impoundments where populations are small. Therefore, we sought to evaluate the precision and accuracy of 2 non-lethal aging structures (i.e., anal fin spines, dorsal fin spines) in comparison to sagittal otoliths from known-age Micropterus salmoides (Largemouth Bass; n = 87) collected from the Ocmulgee Public Fishing Area, GA. Sagittal otoliths exhibited the highest concordance with true ages of all structures evaluated (coefficient of variation = 1.2; percent agreement = 91.9). Similarly, the low coefficient of variation (0.0) and high between-reader agreement (100%) indicate that age estimates obtained from sagittal otoliths were the most precise. Relatively high agreement between readers for anal fin spines (84%) and dorsal fin spines (81%) suggested the structures were relatively precise. However, age estimates from anal fin spines and dorsal fin spines exhibited low concordance with true ages. Although use of sagittal otoliths is a lethal technique, this method will likely remain the standard for aging Largemouth Bass and other similar black bass species.

  19. Estimating Missing Features to Improve Multimedia Information Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagherjeiran, A; Love, N S; Kamath, C

    Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
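
    A nearest-neighbor stand-in for the estimation step, completing a text-only query with image features borrowed from textually similar database items (the data and the k-NN choice are assumptions for illustration, not the paper's method):

        import numpy as np

        rng = np.random.default_rng(0)
        db_text = rng.random((500, 32))   # text features of database items (hypothetical)
        db_img = rng.random((500, 64))    # image features of the same items

        def complete_query(q_text, k=5):
            # Find items whose text features best match the partial query ...
            nn = np.argsort(np.linalg.norm(db_text - q_text, axis=1))[:k]
            # ... and estimate the missing image features from those neighbors.
            return np.concatenate([q_text, db_img[nn].mean(axis=0)])

        full_query = complete_query(rng.random(32))
        print(full_query.shape)  # (96,): text part plus estimated image part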

  20. Isolator fragmentation and explosive initiation tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, Peter; Rae, Philip John; Foley, Timothy J.

    2015-09-30

    Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst case from an inadvertent-initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.

  1. Estimating the thickness of diffusive solid electrolyte interface

    NASA Astrophysics Data System (ADS)

    Wang, XiaoHe; Shen, WenHao; Huang, XianFu; Zang, JinLiang; Zhao, YaPu

    2017-06-01

    The solid electrolyte interface (SEI) is a hierarchical structure formed in the transition zone between the electrode and the electrolyte. The properties of a lithium-ion (Li-ion) battery, such as cycle life, irreversible capacity loss, self-discharge rate, electrode corrosion and safety, are usually ascribed to the quality of the SEI, which in turn depends strongly on its thickness. Thus, understanding the formation mechanism and the SEI thickness is of prime interest. In this study, we first apply dimensional analysis to obtain an explicit relation between the thickness and the number density. Then the SEI thickness in the initial charge-discharge cycle is analyzed and estimated for the first time using the Cahn-Hilliard phase-field model. In addition, molecular dynamics simulation of the SEI thickness validates the theoretical results. It is shown that the established model and the simulation estimate the SEI thickness consistently, at the order of magnitude of nanometers. Our results may help in evaluating the performance of the SEI and assist the future design of Li-ion batteries.

  2. Fatigue Life Estimation under Cumulative Cyclic Loading Conditions

    NASA Technical Reports Server (NTRS)

    Kalluri, Sreeramesh; McGaw, Michael A; Halford, Gary R.

    1999-01-01

    The cumulative fatigue behavior of a cobalt-base superalloy, Haynes 188, was investigated at 760 C in air. Initially, strain-controlled tests were conducted on solid cylindrical gauge-section specimens of Haynes 188 under fully-reversed conditions and under tensile and compressive mean strain control. Fatigue data from these tests were used to establish the baseline fatigue behavior of the alloy with 1) a total strain range type fatigue life relation and 2) the Smith-Watson-Topper (SWT) parameter. Subsequently, two-load-level multi-block fatigue tests were conducted on similar specimens of Haynes 188 at the same temperature. Fatigue lives of the multi-block tests were estimated with 1) the Linear Damage Rule (LDR) and 2) the nonlinear Damage Curve Approach (DCA), both with and without consideration of the mean stresses generated during the cumulative fatigue tests. Fatigue life predictions by the nonlinear DCA were much closer to the experimentally observed lives than those obtained by the LDR. In the presence of mean stresses, the SWT parameter estimated the fatigue lives more accurately under tensile conditions than under compressive conditions.
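
    A two-block sketch contrasting the two rules; the DCA form below uses the Manson-Halford exponent (N1/N2)**0.4, assumed here for illustration rather than taken from the paper:

        def remaining_fraction_ldr(n1, N1):
            """Miner's rule (LDR): life fractions sum linearly to one."""
            return 1.0 - n1 / N1

        def remaining_fraction_dca(n1, N1, N2):
            """Damage curve approach with the Manson-Halford exponent (N1/N2)**0.4."""
            return 1.0 - (n1 / N1) ** ((N1 / N2) ** 0.4)

        N1, N2 = 1e3, 1e5  # baseline lives: high-strain block first, then low strain
        n1 = 500           # cycles applied at the first (high-strain) level
        print(remaining_fraction_ldr(n1, N1))      # 0.50 by the LDR
        print(remaining_fraction_dca(n1, N1, N2))  # ~0.10: high-low ordering is more damaging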

  3. Simulations in site error estimation for direction finders

    NASA Astrophysics Data System (ADS)

    López, Raúl E.; Passi, Ranjit M.

    1991-08-01

    The performance of an algorithm for the recovery of site-specific errors of direction finder (DF) networks is tested under controlled simulated conditions. The simulations show that the algorithm has some inherent shortcomings for the recovery of site errors from the measured azimuth data. These limitations are fundamental to the problem of site error estimation using azimuth information. Several ways of resolving or ameliorating these basic complications are tested by means of simulations. From these it appears that, for effective implementation of the site error determination algorithm, one should design the networks with at least four DFs, improve the alignment of the antennas, and increase the gain of the DFs as much as is compatible with other operational requirements. The use of a nonzero initial estimate of the site errors when working with data from networks of four or more DFs also improves the accuracy of the site error recovery. Even for networks of three DFs, reasonable site error corrections can be obtained if the antennas are well aligned.

  4. Bayesian lead time estimation for the Johns Hopkins Lung Project data.

    PubMed

    Jang, Hyejeong; Kim, Seongho; Wu, Dongfeng

    2013-09-01

    Lung cancer screening using X-rays has been controversial for many years. A major concern is whether lung cancer screening really brings any survival benefit, which depends on effective treatment after early detection. The problem was analyzed from a different point of view, and estimates were presented of the projected lead time for participants in a lung cancer screening program using the Johns Hopkins Lung Project (JHLP) data. A newly developed method of lead time estimation was applied, in which the lifetime T is treated as a random variable rather than a fixed value, so that the number of future screenings for a given individual is also random. Using the actuarial life table available from the United States Social Security Administration, the lifetime distribution was first obtained; the lead time distribution was then projected using the JHLP data. The analysis of the JHLP data shows that, for a male heavy smoker with initial screening age 50, 60, or 70, the probability of no early detection with semiannual screens will be 32.16%, 32.45%, and 33.17%, respectively, while the mean lead time is 1.36, 1.33 and 1.23 years. The probability of no early detection increases monotonically as the screening interval increases, and it increases slightly as the initial age increases for the same screening interval. The mean lead time and its standard error decrease when the screening interval increases for all age groups, and both decrease when the initial age increases with the same screening interval. The overall mean lead time estimated with a random lifetime T is slightly less than that with a fixed value of T. It is hoped that these results will help improve current screening programs. Copyright © 2013 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.

  5. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  6. Completely automated estimation of prostate volume for 3-D side-fire transrectal ultrasound using shape prior approach

    NASA Astrophysics Data System (ADS)

    Li, Lu; Narayanan, Ramakrishnan; Miller, Steve; Shen, Feimo; Barqawi, Al B.; Crawford, E. David; Suri, Jasjit S.

    2008-02-01

    Real-time knowledge of the capsule volume of an organ provides a valuable clinical tool for 3D biopsy applications. It is challenging to estimate this capsule volume in real time due to the presence of speckle, shadow artifacts, partial volume effects and patient motion during image scans, all of which are inherent in medical ultrasound imaging. The volumetric ultrasound prostate images are sliced in a rotational manner every three degrees. The automated segmentation method employs a shape model, obtained from training data, to delineate the middle slices of the volumetric prostate images. A "DDC" algorithm is then applied to the rest of the images, starting from the initial contour obtained. The volume of the prostate is estimated from the segmentation results. Our database consists of 36 prostate volumes acquired on a Philips ultrasound machine with a side-fire transrectal ultrasound (TRUS) probe. We compare our automated method with a semi-automated approach. The mean volumes using the semi-automated and completely automated techniques were 35.16 cc and 34.86 cc, with errors of 7.3% and 7.6%, respectively, compared to the volume obtained from the human-estimated (ideal) boundary. The overall system, which was developed using Microsoft Visual C++, is real-time and accurate.

  7. "A space-time ensemble Kalman filter for state and parameter estimation of groundwater transport models"

    NASA Astrophysics Data System (ADS)

    Briseño, Jessica; Herrera, Graciela S.

    2010-05-01

    of the variables is used as the prior space-time estimate for the Kalman filter, and the space-time cross-covariance matrix of h-ln K-C as the prior estimate-error covariance matrix. The synthetic example has a modeling area of 700 x 700 square meters; a triangular-mesh model with 702 nodes and 1306 elements is used. A pumping well is located in the central part of the study area, and, for the contaminant transport model, a contaminant source area is present in the western part of the study area. The estimation points for hydraulic conductivity, hydraulic head and contaminant concentration are located on a submesh of the model mesh (the same locations for h, ln K and C), composed of 48 nodes spread throughout the study area with an approximate separation of 90 meters between nodes. The results were analyzed through the mean error, the root mean square error, the initial and final estimation maps of h, ln K and C at each time, and the initial and final variance maps of h, ln K and C. To obtain model convergence, 3000 realizations of ln K were required using SGSim, and only 1000 with LHC. The results show that, for both alternatives, the Kalman filter estimates of h, ln K and C using h and C data have errors whose magnitudes decrease as data are added.
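
    A generic ensemble Kalman analysis step for such joint state-parameter estimation looks as follows (a minimal perturbed-observation sketch with made-up numbers; the paper's space-time variant stacks several times into one state vector):

        import numpy as np

        def enkf_update(X, y, H, r, seed=0):
            """EnKF analysis step; ensemble members are the columns of X."""
            n_ens = X.shape[1]
            Xp = X - X.mean(axis=1, keepdims=True)           # ensemble perturbations
            S = H @ Xp                                       # perturbations in obs space
            C = S @ S.T / (n_ens - 1) + r * np.eye(len(y))   # innovation covariance
            K = (Xp @ S.T / (n_ens - 1)) @ np.linalg.inv(C)  # Kalman gain
            rng = np.random.default_rng(seed)
            Y = y[:, None] + np.sqrt(r) * rng.standard_normal((len(y), n_ens))
            return X + K @ (Y - H @ X)                       # updated ensemble

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(10.0, 1.0, 100),   # heads h
                       rng.normal(-4.0, 0.5, 100)])  # log-conductivities ln K
        H = np.array([[1.0, 0.0]])                   # only h is observed
        X = enkf_update(X, y=np.array([10.8]), H=H, r=0.01)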

  8. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    PubMed

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.

  9. Initiation of Detonation in Multiple Shock-Compressed Liquid Explosives

    NASA Astrophysics Data System (ADS)

    Yoshinaka, A. C.; Zhang, F.; Petel, O. E.; Higgins, A. J.

    2006-07-01

    Initiation and the resulting propagation of detonation via multiple shock reverberations between two high-impedance plates have been investigated in amine-sensitized nitromethane. Experiments were designed so that the strength of the first reflected shock was below the critical value for initiation found previously. Luminosity combined with a distinct pressure hump indicated the onset of reaction and successful initiation after double or triple shock reflection off the bottom plate. Final temperature estimates for double or triple shock reflection immediately before initiation lie between 700 and 720 K, consistent with those found previously for both incident and singly reflected shock initiation.

  10. Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey

    2015-12-01

    The paper proves that the PDE dynamic model of highway traffic is differentially flat, and by applying spatial discretization it shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic dynamics, state estimation is performed with the derivative-free nonlinear Kalman filter. The proposed filter consists of the Kalman filter recursion applied to the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which enables estimates of the state variables of the initial nonlinear PDE model to be obtained. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.

  11. Kinetics of MDR Transport in Tumor-Initiating Cells

    PubMed Central

    Koshkin, Vasilij; Yang, Burton B.; Krylov, Sergey N.

    2013-01-01

    Multidrug resistance (MDR) driven by ABC (ATP binding cassette) membrane transporters is one of the major causes of treatment failure in human malignancy. MDR capacity is thought to be unevenly distributed among tumor cells, with higher capacity residing in tumor-initiating cells (TIC) (though opposite findings are occasionally reported). Functional evidence for the enhanced MDR of TICs was previously provided using a “side population” assay. This assay estimates MDR capacity by a single parameter - the cell’s ability to retain a fluorescent MDR substrate - so that cells with high MDR capacity (the “side population”) demonstrate low substrate retention. In the present work, MDR in TICs was investigated in greater detail using a kinetic approach, which monitors MDR efflux from single cells. Analysis of the kinetic traces obtained allowed for the estimation of both the velocity (Vmax) and affinity (KM) of MDR transport in single cells. In this way it was shown that activation of MDR in TICs occurs in two ways: through an increase of Vmax in one fraction of cells, and through a decrease of KM in another fraction. In addition, the kinetic data showed that the heterogeneity of MDR parameters in TICs significantly exceeds that of bulk cells. Potential consequences of these findings for chemotherapy are discussed. PMID:24223908
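
    Estimating Vmax and KM from a single-cell trace amounts to a Michaelis-Menten fit; a sketch with made-up concentrations and rates (an illustrative stand-in for the paper's kinetic analysis):

        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(s, vmax, km):
            return vmax * s / (km + s)

        s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])  # substrate concentration (a.u.)
        v = np.array([0.9, 1.6, 2.4, 3.2, 3.8, 4.1])   # measured efflux rate (a.u.)
        (vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[4.0, 2.0])
        print(f"Vmax = {vmax:.2f}, KM = {km:.2f}")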

  12. Marginal Structural Models for Case-Cohort Study Designs to Estimate the Association of Antiretroviral Therapy Initiation With Incident AIDS or Death

    PubMed Central

    Cole, Stephen R.; Hudgens, Michael G.; Tien, Phyllis C.; Anastos, Kathryn; Kingsley, Lawrence; Chmiel, Joan S.; Jacobson, Lisa P.

    2012-01-01

    To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding. PMID:22302074

  13. Extracardiac conduit obstruction: initial experience in the use of Doppler echocardiography for noninvasive estimation of pressure gradient.

    PubMed

    Reeder, G S; Currie, P J; Fyfe, D A; Hagler, D J; Seward, J B; Tajik, A J

    1984-11-01

    Extracardiac valved conduits are often employed in the repair of certain complex congenital heart defects; late obstruction is a well recognized problem that usually requires catheterization for definitive diagnosis. A reliable noninvasive method for detecting conduit stenosis would be clinically useful in identifying the small proportion of patients who develop this problem. Continuous wave Doppler echocardiography has been used successfully to estimate cardiac valvular obstructive lesions noninvasively. Twenty-three patients with prior extracardiac conduit placement for complex congenital heart disease underwent echocardiographic and continuous wave Doppler echocardiographic examinations to determine the presence and severity of conduit stenosis. In 20 of the 23 patients, an adequate conduit flow velocity profile was obtained, and in 10 an abnormally increased conduit flow velocity was present. Of these 10 patients, all but one had significant obstruction proven at surgery; in the remaining patient, surgery was planned. In three patients, an adequate conduit flow velocity profile could not be obtained, but obstruction was still suspected based on high-velocity tricuspid regurgitant Doppler signals. In these three patients, subsequent surgery also proved that conduit stenosis was present. Doppler-predicted gradients and right ventricular pressures showed an overall good correlation (r = 0.90) with measurements at subsequent cardiac catheterization. Continuous wave Doppler echocardiography appears to be a useful noninvasive tool for the detection and semiquantitation of extracardiac conduit stenosis.
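    For context, continuous wave Doppler gradient prediction conventionally relies on the simplified Bernoulli relation, pressure gradient (mmHg) ≈ 4v² with v the peak jet velocity in m/s; the abstract does not state the exact formula used, so the sketch below shows only that standard relation.

      def bernoulli_gradient(v_max_m_per_s: float) -> float:
          """Simplified Bernoulli estimate of peak pressure gradient (mmHg)."""
          return 4.0 * v_max_m_per_s ** 2

      print(bernoulli_gradient(3.5))   # e.g. a 3.5 m/s jet -> ~49 mmHg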

  14. New estimation method of neutron skyshine for a high-energy particle accelerator

    NASA Astrophysics Data System (ADS)

    Oh, Joo-Hee; Jung, Nam-Suk; Lee, Hee-Seock; Ko, Seung-Kook

    2016-09-01

    Skyshine is the dominant component of the prompt radiation dose at off-site locations. Several experimental studies have been performed to estimate the neutron skyshine at a few accelerator facilities. In this work, neutron transport from the source to off-site locations was simulated using the Monte Carlo codes FLUKA and PHITS. The transport paths were classified as skyshine, direct (transport), groundshine and multiple-shine to understand the contribution of each path and to develop a general evaluation method. The effect of each path was estimated in terms of the dose at distant locations. The neutron dose was calculated using the neutron energy spectra obtained from detectors placed up to a maximum of 1 km from the accelerator. The highest altitude of the sky region in this simulation was set at 2 km above the floor of the accelerator facility. The initial model of this study was the 10 GeV electron accelerator PAL-XFEL. Different compositions and densities of air, soil and ordinary concrete were applied in this calculation, and their dependences were reviewed. The estimation method used in this study was compared with the well-known methods suggested by Rindi, Stevenson and Stepleton, and also with the simple code SHINE3. The results obtained using this method agreed well with those using Rindi's formula.

  15. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
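    As a practical aside (not the paper's derivation), a Welch-averaged cross-spectrum between two orthogonal components can be formed with scipy; nperseg sets the segment length and hence the degrees of freedom that govern the statistics of the Sxy(f) estimates. The signals below are synthetic.

      import numpy as np
      from scipy.signal import csd

      fs = 2.0                              # sampling frequency, Hz
      t = np.arange(0, 4096) / fs
      rng = np.random.default_rng(2)
      x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.3, t.size)
      y = np.cos(2 * np.pi * 0.1 * t) + rng.normal(0, 0.3, t.size)

      # Welch-averaged complex cross-spectrum Sxy(f).
      f, Sxy = csd(x, y, fs=fs, nperseg=256)
      print(f[np.argmax(np.abs(Sxy))])      # frequency of peak cross-power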

  16. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load, calculated by the flow-duration, rating-curve method, that are more accurate and precise than those obtained for the non-linear model.
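    A sketch of a bias-corrected, transformed-linear rating curve, assuming the common log-log form log C = a + b log Q with a parametric retransformation correction exp(s²/2) under lognormal residuals (the paper's exact correction is not reproduced here):

      import numpy as np

      # Synthetic discharge (Q) and suspended-sediment concentration (C) data.
      rng = np.random.default_rng(3)
      Q = 10 ** rng.uniform(0, 3, 200)
      C = np.exp(-1.0 + 1.5 * np.log(Q) + rng.normal(0, 0.5, Q.size))

      # Transformed-linear fit: log C = a + b log Q.
      b, a = np.polyfit(np.log(Q), np.log(C), 1)
      resid = np.log(C) - (a + b * np.log(Q))
      s2 = resid.var(ddof=2)

      # Parametric bias correction for retransformation (lognormal assumption).
      def rating_curve(q):
          return np.exp(a + b * np.log(q)) * np.exp(s2 / 2)

      print(rating_curve(100.0))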

  17. Joint Estimation of Source Range and Depth Using a Bottom-Deployed Vertical Line Array in Deep Water

    PubMed Central

    Li, Hui; Yang, Kunde; Duan, Rui; Lei, Zhixiong

    2017-01-01

    This paper presents a joint estimation method for source range and depth using a bottom-deployed vertical line array (VLA). The method utilizes the information on the arrival angle of the direct (D) path in the space domain and the interference characteristic of the D and surface-reflected (SR) paths in the frequency domain. The former is related to a ray tracing technique that backpropagates the rays and produces an ambiguity surface of source range. The latter utilizes Lloyd's mirror principle to obtain an ambiguity surface of source depth. The acoustic transmission duct is the well-known reliable acoustic path (RAP). The ambiguity surface of the combined estimation is a dimensionless ad hoc function. Numerical simulation and experimental verification show that the proposed method is a good candidate for initial coarse estimation of source position. PMID:28590442

  18. More realistic power estimation for new user, active comparator studies: an empirical example.

    PubMed

    Gokhale, Mugdha; Buse, John B; Pate, Virginia; Marquis, M Alison; Stürmer, Til

    2016-04-01

    Pharmacoepidemiologic studies are often expected to be sufficiently powered to study rare outcomes, but there is a sequential loss of power with the implementation of study design options that minimize bias. We illustrate this using a study comparing pancreatic cancer incidence after initiating dipeptidyl-peptidase-4 inhibitors (DPP-4i) versus thiazolidinediones or sulfonylureas. We identified Medicare beneficiaries with at least one claim of DPP-4i or comparators during 2007-2009 and then applied the following steps: (i) exclude prevalent users, (ii) require a second prescription of the same drug, (iii) exclude prevalent cancers, (iv) exclude patients aged <66 years and (v) censor for treatment changes during follow-up. Power to detect hazard ratios (an effect measure strongly driven by the number of events) ≥ 2.0 estimated after step 5 was compared with the naïve power estimated prior to step 1. There were 19,388 and 28,846 DPP-4i and thiazolidinedione initiators during 2007-2009. The number of drug initiators dropped most after requiring a second prescription, outcomes dropped most after excluding patients with prevalent cancer, and person-time dropped most after requiring a second prescription and as-treated censoring. The naïve power (>99%) was considerably higher than the power obtained after the final step (~75%). In designing new-user, active-comparator studies, one should be mindful of how steps taken to minimize bias affect sample size, the number of outcomes and person-time. While actual numbers will depend on specific settings, applying generic percentage losses will improve estimates of power compared with the naïve approach, which largely ignores the steps taken to increase validity. Copyright © 2015 John Wiley & Sons, Ltd.
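    For orientation (not the authors' computation), power for detecting a hazard ratio is commonly tied to the number of events through Schoenfeld's approximation; the sketch below shows how shrinking event counts erodes power, with illustrative numbers.

      import numpy as np
      from scipy.stats import norm

      def cox_power(n_events, hr, p_exposed=0.4, alpha=0.05):
          """Power to detect a hazard ratio via Schoenfeld's approximation.

          n_events : total number of outcome events
          p_exposed: proportion of subjects (or person-time) in one group
          """
          z_alpha = norm.ppf(1 - alpha / 2)
          z = np.sqrt(n_events * p_exposed * (1 - p_exposed)) * abs(np.log(hr)) - z_alpha
          return norm.cdf(z)

      # Fewer events after design restrictions -> lower power (illustrative).
      print(cox_power(200, hr=2.0))   # generous event count
      print(cox_power(60, hr=2.0))    # after exclusions and censoring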

  19. Estimation of chaotic coupled map lattices using symbolic vector dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya

    2010-01-01

    In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic-vector-dynamics-based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The estimation precision of that method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors using the backward vector and the values estimated with different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We thereby provide novel analytical techniques for understanding turbulence in coupled map lattices.
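    For reference, the standard diffusively coupled logistic map lattice, the usual CML test system, evolves as x_{n+1}(i) = (1 - eps) f(x_n(i)) + (eps/2)[f(x_n(i-1)) + f(x_n(i+1))]; the parameter values below are illustrative.

      import numpy as np

      def logistic(x, a=3.9):
          return a * x * (1 - x)

      def cml_step(x, eps=0.3):
          """One step of a diffusively coupled logistic map lattice (periodic)."""
          fx = logistic(x)
          return (1 - eps) * fx + (eps / 2) * (np.roll(fx, 1) + np.roll(fx, -1))

      rng = np.random.default_rng(4)
      x = rng.uniform(0, 1, 64)        # initial condition to be estimated
      for _ in range(100):
          x = cml_step(x)
      print(x[:5])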

  20. High-Dose Benzodiazepine Dependence: A Qualitative Study of Patients’ Perceptions on Initiation, Reasons for Use, and Obtainment

    PubMed Central

    Liebrenz, Michael; Schneider, Marcel; Buadze, Anna; Gehring, Marie-Therese; Dube, Anish; Caflisch, Carlo

    2015-01-01

    Background High-dose benzodiazepine (BZD) dependence is associated with a wide variety of negative health consequences. Affected individuals are reported to suffer from severe mental disorders and are often unable to achieve long-term abstinence via recommended discontinuation strategies. Although it is increasingly understood that treatment interventions should take subjective experiences and beliefs into account, the perceptions of this group of individuals remain under-investigated. Methods We conducted an exploratory qualitative study with 41 adult subjects meeting criteria for (high-dose) BZD dependence, as defined by ICD-10. One-on-one in-depth interviews allowed for an exploration of this group’s views on the reasons behind their initial and then continued use of BZDs, as well as their procurement strategies. Mayring’s qualitative content analysis was used to evaluate our data. Results In this sample, all participants had developed explanatory models for why they began using BZDs. We identified a multitude of reasons for continued BZD use, which we grouped into four broad categories: (1) to cope with symptoms of psychological distress or mental disorder other than substance use, (2) to manage symptoms of physical or psychological discomfort associated with somatic disorder, (3) to alleviate symptoms of substance-related disorders, and (4) for recreational purposes, that is, sensation-seeking and other social reasons. Subjects often considered BZDs less dangerous than other substances and associated their use more with harm reduction than with recreation. Specific obtainment strategies varied widely: the majority of participants oscillated between legal and illegal methods, often relying on the black market when faced with treatment termination. Conclusions Irrespective of comorbidity, participants expressed a clear preference for medically related explanatory models for their BZD use. We therefore suggest that clinicians consider patients

  1. Using Population Dose to Evaluate Community-level Health Initiatives.

    PubMed

    Harner, Lisa T; Kuo, Elena S; Cheadle, Allen; Rauzon, Suzanne; Schwartz, Pamela M; Parnell, Barbara; Kelly, Cheryl; Solomon, Loel

    2018-05-01

    Successful community-level health initiatives require implementing an effective portfolio of strategies and understanding their impact on population health. These factors are complicated by the heterogeneity of overlapping multicomponent strategies and availability of population-level data that align with the initiatives. To address these complexities, the population dose methodology was developed for planning and evaluating multicomponent community initiatives. Building on the population dose methodology previously developed, this paper operationalizes dose estimates of one initiative targeting youth physical activity as part of the Kaiser Permanente Community Health Initiative, a multicomponent community-level obesity prevention initiative. The technical details needed to operationalize the population dose method are explained, and the use of population dose as an interim proxy for population-level survey data is introduced. The alignment of the estimated impact from strategy-level data analysis using the dose methodology and the data from the population-level survey suggest that dose is useful for conducting real-time evaluation of multiple heterogeneous strategies, and as a viable proxy for existing population-level surveys when robust strategy-level evaluation data are collected. This article is part of a supplement entitled Building Thriving Communities Through Comprehensive Community Health Initiatives, which is sponsored by Kaiser Permanente, Community Health. Copyright © 2018 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  2. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    USGS Publications Warehouse

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  3. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
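    A minimal sketch of the one-step local linear approximation idea under the SCAD penalty (standard definitions; tuning constants illustrative, and the lasso regularization scaling is schematic rather than exact): the folded concave problem is replaced by a weighted lasso whose weights are the SCAD derivative evaluated at an initial estimate.

      import numpy as np
      from sklearn.linear_model import Lasso

      def scad_deriv(t, lam, a=3.7):
          """Derivative of the SCAD penalty (Fan & Li) evaluated at |t|."""
          t = np.abs(t)
          return lam * ((t <= lam)
                        + np.maximum(a * lam - t, 0) / ((a - 1) * lam) * (t > lam))

      def one_step_lla(X, y, beta_init, lam):
          """One-step LLA: weighted lasso with weights from beta_init."""
          w = np.maximum(scad_deriv(beta_init, lam), 1e-8)
          Xw = X / w                       # absorb weights into the design
          fit = Lasso(alpha=lam, fit_intercept=False).fit(Xw, y)
          return fit.coef_ / w

      # Illustrative use with a plain lasso fit as the initial estimator.
      rng = np.random.default_rng(5)
      n, p = 200, 50
      X = rng.normal(size=(n, p))
      beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]
      y = X @ beta + rng.normal(size=n)
      beta0 = Lasso(alpha=0.1, fit_intercept=False).fit(X, y).coef_
      print(one_step_lla(X, y, beta0, lam=0.2)[:5])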

  5. Joint reconstruction of the initial pressure and speed of sound distributions from combined photoacoustic and ultrasound tomography measurements

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Anastasio, Mark A.

    2017-12-01

    The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.

  6. Hepatitis C co-infection is associated with an increased risk of incident chronic kidney disease in HIV-infected patients initiating combination antiretroviral therapy.

    PubMed

    Rossi, Carmine; Raboud, Janet; Walmsley, Sharon; Cooper, Curtis; Antoniou, Tony; Burchell, Ann N; Hull, Mark; Chia, Jason; Hogg, Robert S; Moodie, Erica E M; Klein, Marina B

    2017-04-04

    Combination antiretroviral therapy (cART) has reduced mortality from AIDS-related illnesses, and chronic comorbidities have become prevalent among HIV-infected patients. We examined the association between hepatitis C virus (HCV) co-infection and chronic kidney disease (CKD) among patients initiating modern antiretroviral therapy. Data were obtained from the Canadian HIV Observational Cohort for individuals initiating cART from 2000 to 2012. Incident CKD was defined as two consecutive serum creatinine-based estimated glomerular filtration rate (eGFR) measurements <60 mL/min/1.73 m² obtained ≥3 months apart. CKD incidence rates after cART initiation were compared between HCV co-infected and HIV mono-infected patients. Hazard ratios (HRs) and 95% confidence intervals (CIs) were estimated using multivariable Cox regression. We included 2595 HIV-infected patients with eGFR >60 mL/min/1.73 m² at cART initiation, of whom 19% were HCV co-infected. One hundred and fifty patients developed CKD during 10,903 person-years of follow-up (PYFU). The CKD incidence rate was higher among co-infected than HIV mono-infected patients (26.0 per 1000 PYFU vs. 10.7 per 1000 PYFU). After adjusting for demographics, virologic parameters and traditional CKD risk factors, HCV co-infection was associated with a significantly shorter time to incident CKD (HR 1.97; 95% CI: 1.33, 2.90). Additional factors associated with incident CKD were female sex, increasing age after 40 years, lower baseline eGFR below 100 mL/min/1.73 m², increasing HIV viral load and cumulative exposure to tenofovir and lopinavir. HCV co-infection was associated with an increased risk of incident CKD among HIV-infected patients initiating cART. HCV-HIV co-infected patients should be monitored for kidney disease and may benefit from available HCV treatments.

  7. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is solved for the case where the rate is a random variable with a probability density function of the form c x^k (1 - x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
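    The quoted density c x^k (1 - x)^m is a Beta(k+1, m+1) distribution, for which the posterior mean under Bernoulli-type jump observations is linear in the observed counts; a small conjugate-update sketch (illustrative, not the paper's derivation):

      import numpy as np

      def mmse_rate_estimate(jumps, trials, k=2, m=3):
          """MMSE (posterior-mean) estimate of a jump rate with a
          Beta(k+1, m+1) prior, i.e. density proportional to x^k (1-x)^m.
          The estimate is linear in the observed number of jumps."""
          alpha, beta = k + 1, m + 1
          return (alpha + jumps) / (alpha + beta + trials)

      rng = np.random.default_rng(6)
      obs = rng.binomial(1, 0.35, size=50)   # jump / no-jump per time step
      print(mmse_rate_estimate(obs.sum(), obs.size))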

  8. Limitation of the Predominant-Period Estimator for Earthquake Early Warning and the Initial Rupture of Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, T.; Ide, S.

    2007-12-01

    Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how quickly we can estimate the final size of an earthquake after we observe the ground motion. This is related to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS). It calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) in the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that values of τpmax have upper and lower limits. For larger earthquakes whose source durations are longer than TW, the values of τpmax have an upper limit which depends on TW. On the other hand, the values for smaller earthquakes have a lower limit which is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not suggest that the earthquake rupture is deterministic. This is because τpmax does not always have a direct relation to the physical quantities of an earthquake.
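    The predominant period is conventionally computed with a recursive smoothing of squared displacement and squared velocity, as in Allen and Kanamori (2003); a sketch with an illustrative smoothing constant:

      import numpy as np

      def tau_p(x, dt, alpha=0.99):
          """Recursive predominant-period estimate from a waveform x.

          X_i = alpha*X_{i-1} + x_i^2 ; D_i = alpha*D_{i-1} + (dx/dt)_i^2
          tau_p_i = 2*pi*sqrt(X_i / D_i)
          """
          dxdt = np.gradient(x, dt)
          X = D = 0.0
          out = np.empty(x.size)
          for i in range(x.size):
              X = alpha * X + x[i] ** 2
              D = alpha * D + dxdt[i] ** 2
              out[i] = 2 * np.pi * np.sqrt(X / max(D, 1e-20))
          return out

      # Illustrative check: a 2 Hz sinusoid should give tau_p near 0.5 s.
      dt = 0.01
      t = np.arange(0, 5, dt)
      print(tau_p(np.sin(2 * np.pi * 2.0 * t), dt)[-1])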

  9. Estimating mangrove in Florida: trials monitoring rare ecosystems

    Treesearch

    Mark J. Brown

    2015-01-01

    Mangrove species are keystone components in coastal ecosystems and are the interface between forest land and sea. Yet, estimates of their area have varied widely. Forest Inventory and Analysis (FIA) data from ground-based sample plots provide one estimate of the resource. Initial FIA estimates of the mangrove resource in Florida varied dramatically from those compiled...

  10. Estimating household and community transmission of ocular Chlamydia trachomatis.

    PubMed

    Blake, Isobel M; Burton, Matthew J; Bailey, Robin L; Solomon, Anthony W; West, Sheila; Muñoz, Beatriz; Holland, Martin J; Mabey, David C W; Gambhir, Manoj; Basáñez, María-Gloria; Grassly, Nicholas C

    2009-01-01

    Community-wide administration of antibiotics is one arm of a four-pronged strategy in the global initiative to eliminate blindness due to trachoma. The potential impact of more efficient, targeted treatment of infected households depends on the relative contribution of community and household transmission of infection, which have not previously been estimated. A mathematical model of the household transmission of ocular Chlamydia trachomatis was fit to detailed demographic and prevalence data from four endemic populations in The Gambia and Tanzania. Maximum likelihood estimates of the household and community transmission coefficients were obtained. The estimated household transmission coefficient exceeded both the community transmission coefficient and the rate of clearance of infection by individuals in three of the four populations, allowing persistent transmission of infection within households. In all populations, individuals in larger households contributed more to the incidence of infection than those in smaller households. Transmission of ocular C. trachomatis infection within households is typically very efficient. Failure to treat all infected members of a household during mass administration of antibiotics is likely to result in rapid re-infection of that household, followed by more gradual spread across the community. The feasibility and effectiveness of household targeted strategies should be explored.

  11. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    PubMed Central

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
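    As a generic illustration of the described pipeline (nonlinear least squares for initial estimates, then MCMC), the sketch below fits the standard biexponential viral-decay form V(t) = A e^{-at} + B e^{-bt} and then runs a random-walk Metropolis sampler; the model, data and flat positive prior are placeholders, not the paper's ODE model or patient data.

      import numpy as np
      from scipy.optimize import least_squares

      def viral_load(t, p):
          A, a, B, b = p
          return A * np.exp(-a * t) + B * np.exp(-b * t)

      rng = np.random.default_rng(7)
      t = np.linspace(0.0, 28.0, 40)                   # days
      p_true = np.array([1e5, 0.5, 1e3, 0.05])
      y = np.log10(viral_load(t, p_true)) + rng.normal(0, 0.1, t.size)

      def resid(p):
          return np.log10(viral_load(t, p)) - y

      # Stage 1: nonlinear least squares gives the initial estimate.
      p0 = least_squares(resid, x0=[1e4, 0.3, 1e2, 0.01],
                         bounds=(1e-8, np.inf)).x

      # Stage 2: random-walk Metropolis started at the least-squares solution.
      def log_post(p):
          if np.any(p <= 0):
              return -np.inf                           # flat positive prior
          return -0.5 * np.sum(resid(p) ** 2) / 0.1 ** 2

      step = 0.05 * p0
      p, lp, samples = p0.copy(), log_post(p0), []
      for _ in range(5000):
          prop = p + rng.normal(0.0, step)
          lpp = log_post(prop)
          if np.log(rng.uniform()) < lpp - lp:
              p, lp = prop, lpp
          samples.append(p.copy())
      print(np.median(samples[2500:], axis=0))         # posterior medians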

  12. An Evaluation of Residual Feed Intake Estimates Obtained with Computer Models Versus Empirical Regression

    USDA-ARS's Scientific Manuscript database

    Data on individual daily feed intake, bi-weekly BW, and carcass composition were obtained on 1,212 crossbred steers, in Cycle VII of the Germplasm Evaluation Project at the U.S. Meat Animal Research Center. Within animal regressions of cumulative feed intake and BW on linear and quadratic days on fe...

  13. Skipping Strategy (SS) for Initial Population of Job-Shop Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Abdolrazzagh-Nezhad, M.; Nababan, E. B.; Sarim, H. M.

    2018-03-01

    Generating the initial population for the job-shop scheduling problem (JSSP) is an essential step in obtaining a near-optimal solution. Techniques used to solve the JSSP are computationally demanding. A skipping strategy (SS) is employed to acquire the initial population after the sequence of jobs on machines and the sequence of operations (expressed in Plates-jobs and mPlates-jobs) are determined. The proposed technique is applied to benchmark datasets and the results are compared with those of other initialization techniques. It is shown that the initial population obtained from the SS approach can generate an optimal solution.

  14. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Astrophysics Data System (ADS)

    Isaacson, P.; DeLuccia, F.; Reth, A. D.; Igli, D. A.; Carter, D.

    2016-12-01

    The GOES-R satellite is the first in NOAA's next-generation series of geostationary weather satellites. In addition to a number of space weather sensors, it will carry two principal optical earth-observing instruments, the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM). During launch, currently scheduled for November of 2016, the alignment of these optical instruments is anticipated to shift from that measured during pre-launch characterization. While both instruments have image navigation and registration (INR) processing algorithms to enable automated geolocation of the collected data, the launch-derived misalignment may be too large for these approaches to function without an initial adjustment to calibration parameters. The parameters that may require adjustment are for Line of Sight Motion Compensation (LMC), and the adjustments will be estimated on orbit during the post-launch test (PLT) phase. We have developed approaches to estimate the initial alignment errors for both ABI and GLM image products. Our approaches involve comparison of ABI and GLM images collected during PLT to a set of reference ("truth") images using custom image processing tools and other software (the INR Performance Assessment Tool Set, or "IPATS") being developed for other INR assessments of ABI and GLM data. IPATS is based on image correlation approaches to determine offsets between input and reference images, and these offsets are the fundamental input to our estimate of the initial alignment errors. Initial testing of our alignment algorithms on proxy datasets lends high confidence that their application will determine the initial alignment errors to within sufficient accuracy to enable the operational INR processing approaches to proceed in a nominal fashion. We will report on the algorithms, implementation approach, and status of these initial alignment tools being developed for the GOES-R ABI and GLM instruments.
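    IPATS is described as correlation-based; as a generic illustration of estimating an image offset by correlation (not the IPATS implementation), phase correlation recovers the translation between an image and a reference:

      import numpy as np

      def correlation_offset(img, ref):
          """Estimate the integer (dy, dx) shift of img relative to ref
          by phase correlation (one common correlation-based approach)."""
          F_ref, F_img = np.fft.fft2(ref), np.fft.fft2(img)
          cross = F_img * np.conj(F_ref)
          cross /= np.maximum(np.abs(cross), 1e-12)    # normalized cross-power
          corr = np.fft.ifft2(cross).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Map wrap-around peak indices to signed shifts.
          if dy > ref.shape[0] // 2:
              dy -= ref.shape[0]
          if dx > ref.shape[1] // 2:
              dx -= ref.shape[1]
          return dy, dx

      rng = np.random.default_rng(8)
      ref = rng.normal(size=(128, 128))                # stand-in "truth" image
      img = np.roll(ref, (5, -3), axis=(0, 1))         # simulated shift
      print(correlation_offset(img, ref))              # expect (5, -3)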

  15. Initial retrieval sequence and blending strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pemwell, D.L.; Grenard, C.E.

    1996-09-01

    This report documents the initial retrieval sequence and the methodology used to select it. Waste retrieval, storage, pretreatment and vitrification were modeled for candidate single-shell tank retrieval sequences. Performance of the sequences was measured by a set of metrics (for example, high-level waste glass volume, relative risk and schedule). Computer models were used to evaluate estimated glass volumes, process rates, retrieval dates, and blending strategy effects. The models were based on estimates of component inventories and concentrations, sludge wash factors and timing, retrieval annex limitations, etc.

  16. Making Initial Earthquake Catalogs from a Temporary Seismic Network for Monitoring Aftershocks

    NASA Astrophysics Data System (ADS)

    Park, J.; Kang, T. S.; Kim, K. H.; Rhie, J.; Kim, Y.

    2017-12-01

    The ML 5.1 foreshock and ML 5.8 mainshock earthquakes occurred consecutively in Gyeongju, in the southeastern part of the Korean Peninsula, on September 12, 2016. A temporary seismic network was installed quickly to observe the aftershocks that followed this mainshock in the vicinity of the epicenter. The network initially consisted of 27 stations equipped with broadband sensors and was operated off-line, which required periodic manual backup of the recorded data. We detected P-triggers and associated events using SeisComP3 to rapidly produce an initial catalogue of aftershock events. Where necessary, manual picking was performed with the scolv module included in SeisComP3 to obtain precise P- and S-arrival times. For cross-checking of reliable identification of seismic phases, a seismic Python package, PhasePApy, was applied in parallel with SeisComP3. We then obtained precisely relocated epicenters and depths of the aftershock events using the velellipse algorithm. The resulting dataset comprises an initial aftershock catalog. The catalog will provide the means to address important questions on seismogenesis in this intraplate seismicity region, including the 2016 Gyeongju earthquake sequence, and to improve seismic hazard estimation for the region.

  17. Reparametrization-based estimation of genetic parameters in multi-trait animal model using Integrated Nested Laplace Approximation.

    PubMed

    Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J

    2016-02-01

    A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in a multivariate animal model. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlation between different traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of a multivariate animal model using modified Cholesky decompositions. This reparametrization-based approach is used within the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of a multivariate animal model. Immediate benefits are: (1) the difficulty of finding good starting values for the analysis, which can be a problem, for example, in Restricted Maximum Likelihood (REML), is avoided; (2) Bayesian estimation of (co)variance components using INLA is faster to execute than Markov chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. The slight drawback is that priors for the covariance matrices are assigned to elements of the Cholesky factor rather than directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as MCMC and REML. We also present results obtained from simulated data sets with replicates and from field data in rice.
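    A minimal sketch of the reparametrization idea (a modified Cholesky factorization; the paper's exact transformation may differ): an unconstrained parameter vector is mapped to a guaranteed positive-definite covariance matrix by exponentiating the diagonal of a lower-triangular factor.

      import numpy as np

      def unconstrained_to_cov(theta, k):
          """Map an unconstrained vector of length k*(k+1)//2 to a k x k
          positive-definite covariance matrix via a log-Cholesky factor."""
          L = np.zeros((k, k))
          L[np.tril_indices(k)] = theta
          L[np.diag_indices(k)] = np.exp(np.diag(L))   # positive diagonal
          return L @ L.T

      # Illustrative 3-trait covariance from arbitrary parameters.
      theta = np.array([0.1, 0.4, -0.2, 0.3, 0.5, 0.0])
      G = unconstrained_to_cov(theta, 3)
      print(np.linalg.eigvalsh(G))    # all positive -> valid covariance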

  18. Crack initiation modeling of a directionally-solidified nickel-base superalloy

    NASA Astrophysics Data System (ADS)

    Gordon, Ali Page

    A crystal plasticity model was used to simulate the material behavior in the L and T orientations. The constitutive model was implemented in ABAQUS and a parameter estimation scheme was developed to obtain the material constants. A physically based model was developed for correlating crack initiation life based on the experimental life data, and predictions are made using the crack initiation model. Assuming a unique relationship between the damage fraction and cycle fraction with respect to cycles to crack initiation for each damage mode, the total crack initiation life is represented in terms of the individual damage components (fatigue, creep-fatigue, creep, and oxidation-fatigue) observed at the end state of crack initiation.
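    The summation of damage fractions across modes has the familiar linear cumulative-damage form; assuming, purely for illustration, that each mode's damage accrues as its cycle fraction (the paper's damage functions are physically based and not reproduced here), the combined initiation life follows a harmonic rule:

      def cycles_to_initiation(lives):
          """Linear damage accumulation across simultaneous damage modes.

          lives: dict of mode -> cycles to initiation if that mode acted
          alone. Total life N satisfies sum(N / N_mode) = 1.
          """
          return 1.0 / sum(1.0 / n for n in lives.values())

      # Hypothetical per-mode lives, for illustration only.
      print(cycles_to_initiation({"fatigue": 1e5,
                                  "creep-fatigue": 4e5,
                                  "oxidation-fatigue": 2e5}))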

  19. Estimating cell populations

    NASA Technical Reports Server (NTRS)

    White, B. S.; Castleman, K. R.

    1981-01-01

    An important step in the diagnosis of a cervical cytology specimen is estimating the proportions of the various cell types present. This is usually done with a cell classifier, the error rates of which can be expressed as a confusion matrix. We show how to use the confusion matrix to obtain an unbiased estimate of the desired proportions. We show that the mean square error of this estimate depends on a 'befuddlement matrix' derived from the confusion matrix, and how this, in turn, leads to a figure of merit for cell classifiers. Finally, we work out the two-class problem in detail and present examples to illustrate the theory.
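    The correction has a standard closed form: with C[i, j] = P(classifier assigns class j | true class i) and q the vector of classifier-assigned proportions, the observed proportions satisfy q = C^T p, so the unbiased estimate solves that linear system (values below are illustrative).

      import numpy as np

      # Confusion matrix: row i gives P(assigned class j | true class i).
      C = np.array([[0.90, 0.08, 0.02],
                    [0.10, 0.85, 0.05],
                    [0.05, 0.15, 0.80]])

      q = np.array([0.35, 0.40, 0.25])   # proportions as labeled by classifier

      # Expected labeled proportions are q = C.T @ p; invert for unbiased p.
      p_hat = np.linalg.solve(C.T, q)
      print(p_hat, p_hat.sum())          # estimated true class proportions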

  20. Spring Small Grains Area Estimation

    NASA Technical Reports Server (NTRS)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

    SSG3 automatically estimates the acreage of spring small grains from Landsat data. The report describes the development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate the acreage of spring small grains (wheat, barley, and oats). Application of the technique to four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.

  1. Project Cost Estimation for Planning

    DOT National Transportation Integrated Search

    2010-02-26

    For Nevada Department of Transportation (NDOT), there are far too many projects that ultimately cost much more than initially planned. Because project nominations are linked to estimates of future funding and the analysis of system needs, the inaccur...

  2. Two-Stage Parameter Estimation in Confined Coastal Aquifers

    NASA Astrophysics Data System (ADS)

    Hsu, N.

    2003-12-01

    Using field observations of tidal level and piezometric head at an observation well, this research develops a two-stage parameter estimation approach for estimating the transmissivity (T) and storage coefficient (S) of a confined aquifer in a coastal area. While the y-axis coincides with the coastline, the x-axis extends from zero to infinity; the domain of the aquifer is therefore assumed to be a half plane. Other assumptions include homogeneity, isotropy and constant thickness of the aquifer, and a zero initial head distribution. In the first stage, fluctuations of the tidal level and of the piezometric head at the observation well are recorded simultaneously without the influence of pumping. Fourier spectral analysis is used to find the autocorrelation and cross-correlation of the two sets of observations as well as the phase-versus-frequency function. The tidal efficiency and time delay can then be computed. The analytical solution of Ferris (1951) is then used to compute the ratio T/S. In the second stage, the system is stressed by pumping, and observations of the tidal level and piezometric head at the observation well are again collected simultaneously. The effect of the tide on the observation well without pumping is computed from the analytical solution of Ferris (1951) based upon the identified ratio T/S and is deducted from the piezometric head observations to obtain the updated piezometric head. The Theis equation coupled with the method of images is then applied to the updated piezometric head to obtain the T and S values. The developed approach is applied to a hypothetical aquifer. The results obtained show convergence of the approach. The robustness of the developed approach is also demonstrated by using noise-corrupted observations.
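    Under the Ferris (1951) solution, the tidal efficiency E and time lag t_L at distance x from the coastline satisfy E = exp(-x*sqrt(pi*S/(t0*T))) and t_L = x*sqrt(t0*S/(4*pi*T)) for tidal period t0, so either observable yields the ratio T/S; a sketch of this first-stage computation with illustrative values:

      import numpy as np

      def T_over_S_from_efficiency(E, x, t0):
          """T/S from tidal efficiency E at distance x (Ferris, 1951)."""
          return np.pi * x**2 / (t0 * np.log(E)**2)

      def T_over_S_from_lag(t_lag, x, t0):
          """T/S from tidal time lag at distance x (Ferris, 1951)."""
          return x**2 * t0 / (4 * np.pi * t_lag**2)

      x = 500.0            # m, distance from coastline (illustrative)
      t0 = 12.42 * 3600    # s, semidiurnal tidal period
      print(T_over_S_from_efficiency(E=0.15, x=x, t0=t0))
      print(T_over_S_from_lag(t_lag=2.0 * 3600, x=x, t0=t0))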

  3. Assessment of dietary intake of flavouring substances within the procedure for their safety evaluation: advantages and limitations of estimates obtained by means of a per capita method.

    PubMed

    Arcella, D; Leclercq, C

    2005-01-01

    The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount based on industrial poundage data of flavourings is calculated to estimate the dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check whether it can provide the conservative intake estimates needed at the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated from the concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review was performed of two studies which had been used to support the statement that MSDI is a conservative method for assessing exposure to flavourings among high consumers. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable for validating the MSDI method used to assess intakes of flavours by European consumers, due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need either to use an alternative method to estimate exposure to flavourings in the procedure or to limit intakes to the levels at which safety was assessed.

  4. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2011-06-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the

  5. Method for solving the problem of nonlinear heating a cylindrical body with unknown initial temperature

    NASA Astrophysics Data System (ADS)

    Yaparova, N.

    2017-10-01

    We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material, such as specific heat, thermal conductivity and material density, depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface into the cylinder, but it is impossible to measure the temperature on the axis or the initial temperature in the entire body. This problem is associated with the temperature measurement challenge and appears in non-destructive testing, in thermal monitoring of heat treatment and in technical diagnostics of operating equipment. The mathematical model of heating is represented as a nonlinear parabolic PDE with an unknown initial condition. In this problem, both Dirichlet and Neumann boundary conditions are given, and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose a numerical method based on finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder, beginning from the surface down to the axis. The application of the regularization technique ensures the stability of the scheme and allows us to significantly simplify the computational procedure. We investigate the stability of the computational scheme and prove its dependence on the discretization steps and the error level of the measurement results. To obtain experimental temperature error estimates, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.

  6. Cortical thickness measurement from magnetic resonance images using partial volume estimation

    NASA Astrophysics Data System (ADS)

    Zuluaga, Maria A.; Acosta, Oscar; Bourgeat, Pierrick; Hernández Hoyos, Marcela; Salvado, Olivier; Ourselin, Sébastien

    2008-03-01

    Measurement of cortical thickness from 3D Magnetic Resonance Imaging (MRI) can aid diagnosis and longitudinal studies of a wide range of neurodegenerative diseases. We estimate the cortical thickness using a Laplacian approach whereby equipotentials analogous to layers of tissue are computed. The thickness is then obtained using an Eulerian approach in which partial differential equations (PDEs) are solved, avoiding the explicit tracing of trajectories along the streamline gradients. This method has the advantage of being relatively fast and ensures unique correspondence between points on the inner and outer boundaries of the cortex. The original method is challenged when the thickness of the cortex is of the same order of magnitude as the image resolution, since the partial volume (PV) effect is not taken into account at the gray matter (GM) boundaries. We propose a novel way of taking PV into account which substantially improves accuracy and robustness. We model PV by computing a mixture of pure Gaussian probability distributions and use this estimate to initialize the cortical thickness estimation. In experiments on synthetic phantoms, the errors were divided by three, while reproducibility was improved when the same patient was scanned three consecutive times.

  7. Blind estimation of reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
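    A simplified stand-in for the estimator (not the authors' maximum-likelihood, order-statistics procedure): model the free decay as exponentially damped Gaussian white noise, fit the log-energy envelope with a line, and convert the slope to the time for 60 dB of decay.

      import numpy as np

      def estimate_rt60(decay, fs):
          """Estimate RT60 from a free-decay segment via its log-energy slope."""
          frame = int(0.02 * fs)                              # 20 ms frames
          n = decay.size // frame
          energy = (decay[:n * frame].reshape(n, frame) ** 2).mean(axis=1)
          t = (np.arange(n) + 0.5) * frame / fs
          slope, _ = np.polyfit(t, 10 * np.log10(energy), 1)  # dB per second
          return -60.0 / slope                                # time to drop 60 dB

      # Synthetic tail: exponentially damped Gaussian white noise.
      fs, rt_true = 16000, 0.5
      t = np.arange(0, 1.0, 1 / fs)
      tau = rt_true / np.log(10 ** 3)       # 60 dB decay <-> 3*ln(10) e-folds
      rng = np.random.default_rng(9)
      tail = rng.normal(size=t.size) * np.exp(-t / tau)
      print(estimate_rt60(tail, fs))        # ~0.5 expected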

  8. A Track Initiation Method for the Underwater Target Tracking Environment

    NASA Astrophysics Data System (ADS)

    Li, Dong-dong; Lin, Yang; Zhang, Yao

    2018-04-01

    A novel, efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings: (a) they cannot effectively eliminate the disturbances of clutter; (b) they may exhibit a high false alarm probability and a low track detection probability; and (c) they cannot correctly estimate the initial state of a new confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, which include the true track originating from the target, in order to increase the detection probability of a track; and, in order to decrease the false alarm probability, track pruning and track merging based on the evaluation mechanism are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determine the target's existence, and estimate its initial state with the least squares method. What's more, the method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that the new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
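    The least-squares initial-state step can be illustrated with the standard constant-velocity fit to the first few position measurements (a generic, clutter-free sketch, not the TSEPM specifics):

      import numpy as np

      def initial_state_ls(times, positions):
          """Least-squares position/velocity from early track measurements.

          Fits x(t) = x0 + v*t per coordinate; returns (x0, v) estimates.
          """
          A = np.column_stack([np.ones_like(times), times])
          coef, *_ = np.linalg.lstsq(A, positions, rcond=None)
          return coef[0], coef[1]          # position at t=0, velocity

      t = np.array([0.0, 1.0, 2.0, 3.0])
      # Noisy 2-D position measurements (illustrative).
      z = np.array([[0.1, -0.2], [1.9, 1.1], [4.2, 1.8], [5.8, 3.1]])
      x0, v = initial_state_ls(t, z)
      print(x0, v)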

  9. A stochastic approach to quantifying the blur with uncertainty estimation for high-energy X-ray imaging systems

    DOE PAGES

    Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; ...

    2015-06-03

    One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or ‘spot’. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot’s spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data-sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.

  10. Estimating surface hardening profile of blank for obtaining high drawing ratio in deep drawing process using FE analysis

    NASA Astrophysics Data System (ADS)

    Tan, C. J.; Aslian, A.; Honarvar, B.; Puborlaksono, J.; Yau, Y. H.; Chong, W. T.

    2015-12-01

    We constructed an FE axisymmetric model to simulate the effect of partially hardened blanks on increasing the limiting drawing ratio (LDR) of cylindrical cups. We partitioned an arc-shaped hard layer into the cross section of a DP590 blank. We assumed the mechanical property of the layer to be equivalent to either DP980 or DP780. We verified the accuracy of the model by comparing the calculated LDR for DP590 with the one reported in the literature. The LDR for the partially hardened blank increased from 2.11 to 2.50 with a 1 mm deep DP980 ring-shaped hard layer on the top surface of the blank. The position of the layer changed with the drawing ratio. We proposed equations for estimating the inner and outer diameters of the layer and tested their accuracy in the simulation. Although the outer diameters fitted the estimated line well, the inner diameters were slightly smaller than estimated.

  11. 20 CFR 404.810 - How to obtain a statement of earnings and a benefit estimate statement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... records at the time of the request. If you have a social security number and have wages or net earnings... prescribed form, giving us your name, social security number, date of birth, and sex. You, your authorized... benefit estimate statement. 404.810 Section 404.810 Employees' Benefits SOCIAL SECURITY ADMINISTRATION...

  12. Monte Carlo Estimation of Absorbed Dose Distributions Obtained from Heterogeneous 106Ru Eye Plaques.

    PubMed

    Zaragoza, Francisco J; Eichmann, Marion; Flühs, Dirk; Sauerwein, Wolfgang; Brualla, Lorenzo

    2017-09-01

    The distribution of the emitter substance in 106Ru eye plaques is usually assumed to be homogeneous for treatment planning purposes. However, this distribution is never homogeneous, and it differs widely from plaque to plaque due to manufacturing factors. By Monte Carlo simulation of radiation transport, we study the absorbed dose distribution obtained from the specific CCA1364 and CCB1256 106Ru plaques, whose actual emitter distributions were measured. The idealized, homogeneous CCA and CCB plaques are also simulated. The largest discrepancy in depth dose distribution observed between the heterogeneous and the homogeneous plaques was 7.9 and 23.7% for the CCA and CCB plaques, respectively. In terms of isodose lines, the line referring to 100% of the reference dose penetrates 0.2 and 1.8 mm deeper in the case of the heterogeneous CCA and CCB plaques, respectively, with respect to their homogeneous counterparts. The observed differences in absorbed dose distributions obtained from heterogeneous and homogeneous plaques are clinically irrelevant if the plaques are used with a lateral safety margin of at least 2 mm. However, these differences may be relevant if the plaques are used in eccentric positioning.

  13. Estimation of Initial and Response Times of Laser Dew-Point Hygrometer by Measurement Simulation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Sigeaki; Toyooka, Satoru

    1995-10-01

    The initial and the response times of the laser dew-point hygrometer were evaluated by measurement simulation. The simulation was based on loop computations of the surface temperature of a plate with dew deposition, the quantity of dew deposited, and the intensity of scattered light from the surface at each short interval of measurement. The initial time was defined as the time necessary for the hygrometer to reach a temperature within ±0.5 °C of the measured dew point from the start time of measurement, and the response time was defined analogously for stepwise dew-point changes of +5 °C and −5 °C. The simulation results are in approximate agreement with the recorded temperature and intensity of scattered light of the hygrometer. The evaluated initial time ranged from 0.3 min to 5 min in the temperature range from 0 °C to 60 °C, and the response time was evaluated to be from 0.2 min to 3 min.

  14. Estimating rice yield from MODIS-Landsat fusion data in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, C. R.; Chen, C. F.; Nguyen, S. T.

    2017-12-01

    Rice production monitoring with remote sensing is an important activity in Taiwan due to official initiatives. Yield estimation is challenging in Taiwan because rice fields are small and fragmented. High-spatiotemporal-resolution satellite data providing phenological information on rice crops are thus required for this monitoring purpose. This research aims to develop data fusion approaches that integrate daily Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat data for rice yield estimation in Taiwan. In this study, the low-resolution MODIS LST and emissivity data are used as reference data sources to obtain high-resolution LST from Landsat data using the mixed-pixel analysis technique, and the time-series EVI data were derived from the fusion of MODIS and Landsat spectral band data using the STARFM method. The simulated LST and EVI agreed closely with the reference data. The rice-yield model was established using EVI and LST data based on information on rice crop phenology collected from 371 ground survey sites across the country in 2014. The results achieved from the fusion datasets, compared with the reference data, indicated a close relationship between the two datasets, with a correlation coefficient (R2) of 0.75 and root mean square error (RMSE) of 338.7 kg, which was more accurate than the results using the coarse-resolution MODIS LST data (R2 = 0.71 and RMSE = 623.82 kg). For the comparison of total production, 64 towns located in the western part of Taiwan were used. The results also confirmed that the model using fusion datasets produced more accurate results (R2 = 0.95 and RMSE = 1,243 tons) than that using the coarse-resolution MODIS data (R2 = 0.91 and RMSE = 1,749 tons). This study demonstrates the application of MODIS-Landsat fusion data for rice yield estimation at the township level in Taiwan. The results obtained from the methods used in this study could be useful to policymakers

  15. Economic cost of initial attack and large-fire suppression

    Treesearch

    Armando González-Cabán

    1983-01-01

    A procedure has been developed for estimating the economic cost of initial attack and large-fire suppression. The procedure uses a per-unit approach to estimate total attack and suppression costs on an input-by-input basis. Fire management inputs (FMIs) are the production units used. All direct and indirect costs are charged to the FMIs. With the unit approach, all...

  16. Multiscale estimation of excess mass from gravity data

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni

    2014-06-01

    We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile can help reduce the interference effects due to nearby anomalies and allows a local estimate of the source parameters. A criterion is established for selecting the optimal highest altitude of the vertical profile data and the truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as Gauss' method: (i) only a 1-D inversion is needed to obtain the estimates, since the inverted data are sampled along a single vertical profile; (ii) the resolution may be straightforwardly enhanced by using vertical derivatives; (iii) the centre of mass is estimated in addition to the excess mass; (iv) the method is very robust with respect to noise; (v) the profile may be chosen so as to minimize the effects of interfering anomalies or side effects due to a limited area extension. The multiscale estimation of excess mass can be successfully used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skellefteå ore district, northern Sweden, obtaining source mass and volume estimates in agreement with the known information. We show also that these estimates are substantially improved with respect to those obtained with the classical approach.
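
    As a toy illustration of the vertical-profile idea, the sketch below fits only the leading (monopole) term of the expansion to synthetic data, recovering a total mass and a depth to the centre of mass; the actual multiscale method uses higher-order moments and an altitude-selection criterion not shown here, and all values are invented:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def gz_monopole(h, M, z0):
        # Leading (monopole) term: vertical attraction of a point mass M at
        # depth z0, observed at altitude h along a vertical profile above it.
        return G * M / (h + z0) ** 2

    # Synthetic profile for a 1e9 kg excess mass with centre of mass at 150 m depth.
    h = np.linspace(0.0, 500.0, 40)
    rng = np.random.default_rng(1)
    gz = gz_monopole(h, 1e9, 150.0) + 1e-9 * rng.standard_normal(h.size)

    (M_est, z0_est), _ = curve_fit(gz_monopole, h, gz, p0=(1e8, 100.0))
    ```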

  17. Filtering observations without the initial guess

    NASA Astrophysics Data System (ADS)

    Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.

    2017-12-01

    Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom known fully in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in lieu. It is therefore desirable to be able to exploit efficient (time-sequential) Bayesian algorithms like the Kalman filter while not being forced to provide a prior distribution (i.e., initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. Such approximation approaches are also briefed in the
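
    The property that makes this attractive, namely that a state of "no initial guess" is representable, can be seen in a minimal, generic measurement-update sketch (illustrative only, not the authors' TRF implementation):

    ```python
    import numpy as np

    def info_update(Y, y, H, R, z):
        # One measurement update of the information filter: the state is carried
        # as the information matrix Y = P^{-1} and vector y = Y x, so Y = 0
        # encodes "no initial guess" (infinite prior covariance), which the
        # standard Kalman filter cannot represent directly.
        Rinv = np.linalg.inv(R)
        return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

    n = 3
    Y, y = np.zeros((n, n)), np.zeros(n)   # start with zero prior information
    H, R = np.eye(n), 0.1 * np.eye(n)      # toy observation operator and noise
    Y, y = info_update(Y, y, H, R, z=np.array([1.0, 2.0, 3.0]))

    x_hat = np.linalg.solve(Y, y)          # estimate exists once Y is invertible
    ```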

  18. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.

    PubMed

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.

  19. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices

    PubMed Central

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information. PMID:27907188

  20. Investigation of Properties of Nanocomposite Polyimide Samples Obtained by Fused Deposition Modeling

    NASA Astrophysics Data System (ADS)

    Polyakov, I. V.; Vaganov, G. V.; Yudin, V. E.; Ivan'kova, E. M.; Popova, E. N.; Elokhovskii, V. Yu.

    2018-03-01

    Nanomodified polyimide samples were obtained by fused deposition modeling (FDM) using an experimental setup for 3D printing of highly heat-resistant plastics. The mechanical properties and structure of these samples were studied by viscosimetry, differential scanning calorimetry, and scanning electron microscopy. A comparative estimation of the mechanical properties of laboratory samples obtained from a nanocomposite based on heat-resistant polyetherimide by FDM and injection molding is presented.

  1. Concentration history during pumping from a leaky aquifer with stratified initial concentration

    USGS Publications Warehouse

    Goode, Daniel J.; Hsieh, Paul A.; Shapiro, Allen M.; Wood, Warren W.; Kraemer, Thomas F.

    1993-01-01

    Analytical and numerical solutions are employed to examine the concentration history of a dissolved substance in water pumped from a leaky aquifer. Many aquifer systems are characterized by stratification, for example, a sandy layer overlain by a clay layer. To obtain information about separate hydrogeologic units, aquifer pumping tests are often conducted with a well penetrating only one of the layers. When the initial concentration distribution is also stratified (the concentration varies with elevation only), the concentration breakthrough in the pumped well may be interpreted to provide information on aquifer hydraulic and transport properties. To facilitate this interpretation, we present some simple analytical and numerical solutions for limiting cases and illustrate their application to a fractured bedrock/glacial drift aquifer system where the solute of interest is dissolved radon gas. In addition to qualitative information on water source, this method may yield estimates of effective porosity and saturated thickness (or fracture transport aperture) from a single-hole test. Little information about dispersivity is obtained because the measured concentration is not significantly affected by dispersion in the aquifer.

  2. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state-vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, atmospheric wind, etc. deviations from referenced values. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  3. Simulation of the induction of oxidation of low-density lipoprotein by high copper concentrations: evidence for a nonconstant rate of initiation.

    PubMed

    Abuja, P M; Albertini, R; Esterbauer, H

    1997-06-01

    Kinetic simulation can help obtain deeper insight into the molecular mechanisms of complex processes, such as lipid peroxidation (LPO) in low-density lipoprotein (LDL). We have previously set up a single-compartment model of this process, initiated with radicals generated externally at a constant rate, to show the interplay of radical scavenging and chain propagation. Here we focus on the initiating events, substituting the constant rate of initiation (Ri) with redox cycling of Cu2+ and Cu+. Our simulation reveals that early events in copper-mediated LDL oxidation include (1) the reduction of Cu2+ by tocopherol (TocOH), which generates the tocopheroxyl radical (TocO.); (2) the fate of TocO., which is either recycled or recombines with the lipid peroxyl radical (LOO.); and (3) the reoxidation of Cu+ by lipid hydroperoxide, which results in alkoxyl radical (LO.) formation. Thus TocO., LOO., and LO. can be regarded as primordial radicals, and the sum of their formation rates is the total rate of initiation, Ri. As these initiating events cannot be observed experimentally, the whole model was validated by comparing LDL oxidation in the presence and absence of bathocuproine with the behavior predicted by simulation. The simulation predicts that Ri decreases by 2 orders of magnitude during the lag time. This has important consequences for the estimation of oxidation resistance in copper-mediated LDL oxidation: after consumption of tocopherol, even small amounts of antioxidants may prolong the lag phase for a considerable time.
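
    The structure of the redox-cycling initiation scheme can be sketched as a small ODE system; the rate constants and concentrations below are hypothetical placeholders, not the published model, and chain propagation is omitted:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    K1, K2 = 1e3, 1e2   # hypothetical rate constants, M^-1 s^-1

    def rhs(t, s):
        # States: [Cu2+, Cu+, TocOH, LOOH].
        cu2, cu1, toc, looh = s
        r1 = K1 * cu2 * toc    # Cu2+ + TocOH -> Cu+  + TocO.  (reduction)
        r2 = K2 * cu1 * looh   # Cu+  + LOOH  -> Cu2+ + LO.    (reoxidation)
        return [-r1 + r2, r1 - r2, -r1, -r2]

    sol = solve_ivp(rhs, (0.0, 3600.0), [5e-6, 0.0, 1e-5, 1e-6], dense_output=True)

    # Total initiation rate Ri = summed formation rate of the primordial radicals.
    t = np.linspace(0.0, 3600.0, 500)
    cu2, cu1, toc, looh = sol.sol(t)
    Ri = K1 * cu2 * toc + K2 * cu1 * looh
    ```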

  4. Support to LANL: Cost estimation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report summarizes the activities and progress by ICF Kaiser Engineers conducted on behalf of Los Alamos National Laboratories (LANL) for the US Department of Energy, Office of Waste Management (EM-33) in the area of improving methods for cost estimation. This work was conducted between October 1, 1992 and September 30, 1993. ICF Kaiser Engineers supported LANL in providing the Office of Waste Management with planning and document preparation services for a Cost and Schedule Estimating Guide (Guide). The intent of the Guide was to use Activity-Based Cost (ABC) estimation as a basic method in preparing cost estimates for DOE planning and budgeting documents, including Activity Data Sheets (ADSs), which form the basis for the Five Year Plan document. Prior to the initiation of the present contract with LANL, ICF Kaiser Engineers was tasked to initiate planning efforts directed toward a Guide. This work, accomplished from June to September 1992, included visits to eight DOE field offices and consultation with DOE Headquarters staff to determine the need for a Guide, the desired contents of a Guide, and the types of ABC estimation methods and documentation requirements that would be compatible with current or potential practices and expertise in existence at DOE field offices and their contractors.

  5. Temporal variability patterns in solar radiation estimations

    NASA Astrophysics Data System (ADS)

    Vindel, José M.; Navarro, Ana A.; Valenzuela, Rita X.; Zarzalejo, Luis F.

    2016-06-01

    In this work, solar radiation estimates obtained from a satellite and from a numerical weather prediction model in mainland Spain have been compared. Similar comparisons have been carried out before, but in this case the methodology is different: the temporal variability of both sources of estimation has been compared with the annual evolution of the radiation associated with the different climate zones under study. The methodology is based on obtaining behavior patterns, using a Principal Component Analysis, that follow the annual evolution of the solar radiation estimates. Indeed, the degree of adjustment to these patterns at each point (assessed from maps of correlation) may be associated with the annual radiation variation (assessed from the interquartile range), which is associated, in turn, with different climate zones. In addition, the goodness of each estimation source has been assessed by comparing it with ground radiation measurements made by pyranometers. For the study, radiation data from the Satellite Application Facilities and data from the reanalysis carried out by the European Centre for Medium-Range Weather Forecasts have been used.

  6. Daughter-Initiated Cancer Screening Appeals to Mothers.

    PubMed

    Mosavel, M; Genderson, M W

    2016-12-01

    Youth-initiated health interventions may provide a much needed avenue for intergenerational dissemination of health information among families who bear the greatest burden from unequal distribution of morbidity and mortality. The findings presented in this paper are from a pilot study of the feasibility and impact of female youth-initiated messages (mostly daughters) encouraging adult female relatives (mostly mothers) to obtain cancer screening within low-income African American families living in a Southern US state. Results are compared between an intervention and control group. Intervention group youth (n = 22) were exposed to a 60-min interactive workshop where they were assisted to prepare a factual and emotional appeal to their adult relative to obtain specific screening. The face-to-face workshops were guided by the Elaboration Likelihood Model (ELM) and the Theory of Planned Behavior (TPB). Control group girls (n = 18) were only provided with a pamphlet with information about cancer screening and specific steps about how to encourage their relative to obtain screening. Intervention youth (86 %) and adults (82 %) reported that the message was shared while 71 % in the control group reported sharing or receiving the message. Importantly, more women in the intervention group reported that they obtained a screen (e.g., mammogram, Pap smear) directly based on the youth's appeal. These findings can have major implications for youth-initiated health promotion efforts, especially among hard-to-reach populations.

  7. Daughter-Initiated Cancer Screening Appeals to Mothers

    PubMed Central

    Mosavel, Maghboeba; Genderson, Maureen Wilson

    2015-01-01

    Youth-initiated health interventions may provide a much needed avenue for intergenerational dissemination of health information among families who bear the greatest burden from unequal distribution of morbidity and mortality. The findings presented in this paper are from a pilot study of the feasibility and impact of female youth-initiated messages (mostly daughters) encouraging adult female relatives (mostly mothers) to obtain cancer screening within low income African American families living in a Southern US state. Results are compared between an intervention and control group. Intervention group youth (n=22) were exposed to a 60-minute interactive workshop where they were assisted to prepare a factual and emotional appeal to their adult relative to obtain specific screening. The face-to-face workshops were guided by the Elaboration Likelihood Model (ELM) and the Theory of Planned Behavior (TPB). Control group girls (n=18) were only provided with a pamphlet with information about cancer screening and specific steps about how to encourage their relative to obtain screening. Intervention youth (86%) and adults (82%) reported that the message was shared while 71% in the control group reported sharing or receiving the message. Importantly, more women in the intervention group reported that they obtained a screen (e.g., mammogram, Pap smear) directly based on the youth's appeal. These findings can have major implications for youth-initiated health promotion efforts, especially among hard-to-reach populations. PMID:26590969

  8. Covariance Matrix Estimation for Massive MIMO

    NASA Astrophysics Data System (ADS)

    Upadhya, Karthik; Vorobyov, Sergiy A.

    2018-04-01

    We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
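
    A simplified numerical sketch of the cross-correlation idea follows: two noisy channel estimates whose noise terms are independent are cross-correlated, so the noise contribution averages out and only the channel covariance survives. The correlation model, noise level, and the assumption that the random phase is known when de-rotating are illustrative simplifications of the proposed scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 64, 2000   # base-station antennas, coherence blocks averaged over

    # Ground-truth user covariance (illustrative exponential-correlation model).
    R = np.fromfunction(lambda i, j: 0.9 ** np.abs(i - j), (M, M))
    L = np.linalg.cholesky(R)

    def cn(size):
        # Circularly symmetric complex Gaussian samples.
        return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

    R_hat = np.zeros((M, M), dtype=complex)
    for _ in range(N):
        h = L @ cn(M)                       # channel realization with covariance R
        theta = rng.uniform(0, 2 * np.pi)   # random phase on the second pilot
        h1 = h + 0.3 * cn(M)                # estimate from pilot 1
        h2 = h * np.exp(1j * theta) + 0.3 * cn(M)   # estimate from pilot 2
        # Sample cross-correlation of the de-rotated estimates: E[h1 h2^H] = R
        # because the two noise terms are independent and average out.
        R_hat += np.outer(h1, np.conj(h2 * np.exp(-1j * theta)))
    R_hat /= N
    ```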

  9. Full-field and anomaly initialization using a low-order climate model: a comparison and proposals for advanced formulations

    NASA Astrophysics Data System (ADS)

    Carrassi, A.; Weber, R. J. T.; Guemas, V.; Doblas-Reyes, F. J.; Asif, M.; Volpi, D.

    2014-04-01

    The best performance is obtained when the stabler component of the model (the ocean) is initialized, but with FFI it is possible to have some predictive skill even when the most unstable compartment (the extratropical atmosphere) is observed. Two advanced formulations, least-square initialization (LSI) and exploring parameter uncertainty (EPU), are introduced. Using LSI, the initialization makes use of model statistics to propagate information from observation locations to the entire model domain. Numerical results show that LSI improves the performance of FFI in all situations where only a portion of the system's state is observed. EPU is an online drift correction method in which the drift caused by the parametric error is estimated using a short-time evolution law and is then removed during the forecast run. Its implementation in conjunction with FFI allows us to improve the prediction skill within the first forecast year. Finally, the application of these results in the context of realistic climate models is discussed.

  10. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
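
    The combination itself is a short computation; a minimal sketch of inverse-variance weighting for two independent estimates (the numbers are purely illustrative):

    ```python
    def weighted_estimate(x_site, var_site, x_region, var_region):
        # Inverse-variance weighting of two independent estimates: the weighted
        # estimate has variance no larger than either input variance.
        w_site, w_region = 1.0 / var_site, 1.0 / var_region
        x_w = (w_site * x_site + w_region * x_region) / (w_site + w_region)
        var_w = 1.0 / (w_site + w_region)
        return x_w, var_w

    # e.g., a 1-percent AEP flow: at-site estimate 950 (variance 0.04) combined
    # with a regional-regression estimate 800 (variance 0.09); units illustrative.
    x_w, var_w = weighted_estimate(950.0, 0.04, 800.0, 0.09)
    ```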

  11. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as the Gaussian, the exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all the information extracted from such a pdf will continue to contain this error. In such techniques, it is highly likely that some artificial characteristics not present in the original data will appear in the estimated pdf. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from the GNSS measurements from the TNPGN-Active (Turkish National Permanent
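
    A minimal sketch of the KDE step with SciPy, using stand-in TEC values (real inputs would come from the GNSS processing described above):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    # Stand-in TEC sample in TECU: a quiet population plus a disturbed tail.
    tec = np.concatenate([rng.normal(20, 3, 800), rng.normal(35, 5, 200)])

    kde = gaussian_kde(tec)                        # non-parametric pdf estimate
    grid = np.linspace(tec.min(), tec.max(), 400)
    pdf = kde(grid)                                # no fixed functional form imposed

    mean, var = tec.mean(), tec.var()
    kurt = ((tec - mean) ** 4).mean() / var**2     # (non-excess) kurtosis
    ```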

  12. Estimating evolutionary rates in giant viruses using ancient genomes

    PubMed Central

    Duchêne, Sebastián

    2018-01-01

    Pithovirus sibericum is a giant (610 kbp) double-stranded DNA virus discovered in a purportedly 30,000-year-old permafrost sample. A closely related virus, Pithovirus massiliensis, was recently isolated from a sewer in southern France. An initial comparison of these two virus genomes assumed that P. sibericum was directly ancestral to P. massiliensis and gave a maximum evolutionary rate of 2.60 × 10−5 nucleotide substitutions per site per year (subs/site/year). If correct, this would make pithoviruses among the fastest-evolving DNA viruses, with rates close to those seen in some RNA viruses. To help determine whether this unusually high rate is accurate we utilized the well-known negative association between evolutionary rate and genome size in DNA microbes. This revealed that a more plausible rate estimate for Pithovirus evolution is ∼2.23 × 10−6 subs/site/year, with even lower estimates obtained if evolutionary rates are assumed to be time-dependent. Hence, we estimate that Pithovirus has evolved at least an order of magnitude more slowly than previously suggested. We then used our new rate estimates to infer a time-scale for Pithovirus evolution. Strikingly, this suggests that these viruses could have diverged at least hundreds of thousands of years ago, and hence have evolved over longer time-scales than previously suggested. We propose that the evolutionary rate and time-scale of pithovirus evolution should be reconsidered in the light of these observations and that future estimates of the rate of giant virus evolution should be carefully examined in the context of their biological plausibility. PMID:29511572
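
    The order-of-magnitude logic can be checked with a one-line calculation: for two lineages separated by a pairwise distance d (subs/site) and evolving at rate r (subs/site/year), the divergence time is roughly t = d / (2r). The distance below is a hypothetical placeholder, not the published Pithovirus value:

    ```python
    d = 0.01            # hypothetical pairwise genetic distance, subs/site
    r_fast = 2.60e-5    # rate assumed in the initial genome comparison
    r_slow = 2.23e-6    # rate implied by the rate-vs-genome-size relationship

    t_fast = d / (2 * r_fast)   # ~190 years
    t_slow = d / (2 * r_slow)   # ~2,200 years: an order of magnitude older
    ```

    With time-dependent (still lower) rates, the inferred divergence times push back further, which is the basis of the much older time-scale proposed in the abstract.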

  13. SURE Estimates for a Heteroscedastic Hierarchical Model

    PubMed Central

    Xie, Xianchao; Kou, S. C.; Brown, Lawrence D.

    2014-01-01

    Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein’s unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. PMID:25301976

  14. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to the one obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  15. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    NASA Astrophysics Data System (ADS)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

    We study quantitative compactness estimates in W^{1,1}_loc for the map S_t, t > 0, that is associated with the given initial data u_0 ∈ Lip(R^N) for the corresponding solution S_t u_0 of a Hamilton-Jacobi equation u_t + H(∇_x u) = 0, t ≥ 0, x ∈ R^N, with a uniformly convex Hamiltonian H = H(p). We provide upper and lower estimates of order 1/ε^N on the Kolmogorov ε-entropy in W^{1,1} of the image through the map S_t of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws, XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.

  16. Novel kinetic spectrophotometric method for estimation of certain biologically active phenolic sympathomimetic drugs in their bulk powders and different pharmaceutical formulations

    NASA Astrophysics Data System (ADS)

    Omar, Mahmoud A.; Badr El-Din, Khalid M.; Salem, Hesham; Abdelmageed, Osama H.

    2018-03-01

    A simple, selective and sensitive kinetic spectrophotometric method was described for the estimation of four phenolic sympathomimetic drugs, namely terbutaline sulfate, fenoterol hydrobromide, isoxsuprine hydrochloride and etilefrine hydrochloride. The method depends on the oxidation of the phenolic drugs with Folin-Ciocalteu reagent in the presence of sodium carbonate. The rate of color development at 747-760 nm was measured spectrophotometrically. The experimental parameters controlling the color development were fully studied and optimized. A reaction mechanism for the color development was proposed. Calibration graphs for both the initial-rate and fixed-time methods were constructed; linear correlations were found in the general concentration ranges of 3.65 × 10−6 to 2.19 × 10−5 mol L−1 and 2.0-24.0 μg mL−1, with correlation coefficients in the ranges 0.9992-0.9999 and 0.9991-0.9998, respectively. The limits of detection and quantitation for the initial-rate and fixed-time methods were found to be in the general concentration ranges 0.109-0.273, 0.363-0.910 and 0.210-0.483, 0.700-1.611 μg mL−1, respectively. The developed method was validated according to ICH and USP 30-NF 25 guidelines. The suggested method was successfully applied to the estimation of these drugs in their commercial pharmaceutical formulations, and the recovery percentages obtained ranged from 97.63% ± 1.37 to 100.17% ± 0.95 and from 97.29% ± 0.74 to 100.14% ± 0.81 for the initial-rate and fixed-time methods, respectively. The data obtained from the analysis of dosage forms were compared with those obtained by reported methods. Statistical analysis of these results indicated no significant variation in the accuracy and precision of the proposed and reported methods.
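
    A sketch of the initial-rate calibration logic using synthetic absorbance curves (the kinetics, noise, and concentrations are invented; the fixed-time variant would instead regress the absorbance at a chosen time against concentration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 60.0, 13)                            # time, s
    conc = np.array([4e-6, 8e-6, 1.2e-5, 1.6e-5, 2.0e-5])     # mol L^-1

    rates = []
    for c in conc:
        # Synthetic color-development curve with pseudo-first-order kinetics.
        absorbance = 2e4 * c * (1.0 - np.exp(-0.02 * t)) + 1e-4 * rng.standard_normal(t.size)
        rates.append(np.polyfit(t[:5], absorbance[:5], 1)[0])  # initial slope = rate

    slope, intercept = np.polyfit(conc, rates, 1)              # calibration line
    ```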

  17. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, procedures for locating a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
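
    The fixed-point iteration described here is essentially what is now known as the EM algorithm for normal mixtures; a compact 1-D sketch (initialization and data are illustrative):

    ```python
    import numpy as np

    def em_gmm_1d(x, k=2, iters=100, seed=0):
        # Iterative ML estimation for a 1-D mixture of normals.
        rng = np.random.default_rng(seed)
        mu, var, pi = rng.choice(x, k), np.full(k, x.var()), np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: responsibility of each component for each observation.
            dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted maximum-likelihood updates of the parameters.
            nk = resp.sum(axis=0)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
            pi = nk / x.size
        return pi, mu, var

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 300)])
    pi, mu, var = em_gmm_1d(x)
    ```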

  18. Disaster debris estimation using high-resolution polarimetric stereo-SAR

    NASA Astrophysics Data System (ADS)

    Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki

    2016-10-01

    This paper addresses the problem of debris estimation which is one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtain this information are far from being optimal as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model based decomposition of the coherency matrix, only the odd bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken on a temporary debris management site in the tsunami affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of derived pile heights allows for a voxel-based estimation of debris volumes with a RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEM, topographic maps or GCPs.

  19. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    To balance the performance of time-domain least squares (LS) channel estimation against its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into a set of simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problems, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and is nearly optimal.
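
    With time-orthogonal pilots, each transmit-receive antenna pair reduces to a per-subcarrier SISO-OFDM estimate, which is a single element-wise division; a minimal sketch (pilot design and noise level are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K = 64   # OFDM subcarriers

    # One transmit-receive antenna pair after time-orthogonal pilot separation.
    h_true = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    x_pilot = np.exp(1j * np.pi * rng.integers(0, 4, K) / 2)   # known QPSK pilots
    noise = 0.1 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
    y = h_true * x_pilot + noise

    h_ls = y / x_pilot   # per-subcarrier LS estimate: no matrix pseudo-inverse
    ```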

  20. Ultraspectral sounding retrieval error budget and estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping

    2011-11-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  1. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  2. Estimation of teleported and gained parameters in a non-inertial frame

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2017-04-01

    Quantum Fisher information is introduced as a measure of estimating the teleported information between two users, one of whom is uniformly accelerated. We show that the final teleported state depends on the initial parameters, in addition to the parameters gained during the teleportation process. The estimation degree of these parameters depends on the value of the acceleration, the single-mode approximation used (within/beyond), the type of information (classical/quantum) encoded in the teleported state, and the entanglement of the initial communication channel. The estimation degree of the parameters can be maximized if the partners teleport classical information.

  3. Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Abotteen, K. M. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.

  4. Evaluating MODIS satellite versus terrestrial data driven productivity estimates in Austria

    NASA Astrophysics Data System (ADS)

    Petritsch, R.; Boisvenue, C.; Pietsch, S. A.; Hasenauer, H.; Running, S. W.

    2009-04-01

    Sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite, are developed for monitoring global and/or regional ecosystem fluxes like net primary production (NPP). Although these systems should allow us to assess carbon sequestration issues, forest management impacts, etc., relatively little is known about the consistency and accuracy of the resulting satellite-driven estimates versus production estimates derived from ground data. In this study we compare the following NPP estimation methods: (i) NPP estimates as derived from MODIS and available on the internet; (ii) estimates resulting from the off-line version of the MODIS algorithm; (iii) estimates using regional meteorological data within the off-line algorithm; (iv) NPP estimates from a species-specific biogeochemical ecosystem model adapted for Alpine conditions; and (v) NPP estimates calculated from individual tree measurements. Single-tree measurements were available from 624 forested sites across Austria, but only the data from 165 sample plots included all the information necessary for performing the comparison at the plot level. To ensure independence of satellite-driven and ground-based predictions, only the latitude and longitude of each site were used to obtain MODIS estimates. Along with the comparison of the different methods, we discuss problems such as the differing dates of the field campaigns (<1999) and of the acquisition of satellite images (2000-2005), as well as incompatible productivity definitions within the methods, and come up with a framework for combining terrestrial and satellite-based productivity estimates. On average, MODIS estimates agreed well with the output of the model's self-initialization (spin-up), and biomass increment calculated from tree measurements is not significantly different from model results; however, correlations between satellite-derived and terrestrial estimates are relatively poor. Considering the different scales as they are 9 km² from MODIS and

  5. Network support for system initiated checkpoints

    DOEpatents

    Chen, Dong; Heidelberger, Philip

    2013-01-29

    A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.

  6. Atmospheric Turbulence Estimates from a Pulsed Lidar

    NASA Technical Reports Server (NTRS)

    Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.

    2013-01-01

    Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport, and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates shows good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates (which are from greater heights) during the daytime convective boundary layer, and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.

  7. CTER-rapid estimation of CTF parameters with error assessment.

    PubMed

    Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new-generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, the bootstrap, that yields standard deviations of the estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal-space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.
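
    The bootstrap error assessment is generic and easy to sketch: refit the parameter on data resampled with replacement and report the spread of the refits. The "fit" below is a stand-in least-squares slope, not the actual CTF model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)
    y = 2.0 * x + 0.1 * rng.standard_normal(x.size)   # stand-in data

    def fit_param(xs, ys):
        return np.polyfit(xs, ys, 1)[0]               # placeholder for a CTF fit

    boot = []
    for _ in range(1000):
        idx = rng.integers(0, x.size, x.size)         # resample with replacement
        boot.append(fit_param(x[idx], y[idx]))

    param_sd = np.std(boot)   # bootstrap standard deviation of the fitted parameter
    ```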

  8. Study on the initial value for the exterior orientation of the mobile version

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-jing; Li, Shi-liang

    2011-10-01

    The single-camera mobile vision coordinate measurement system uses one camera body and a notebook computer at the measurement site to obtain three-dimensional coordinates. Obtaining accurate approximate values of the exterior orientation for the subsequent calculation is very important in the measurement process. This is a typical space resection problem, and studies on the topic have been widely conducted. Single-image space resection methods mainly fall into two groups: methods based on the co-angular constraint, represented by the camera co-angular-constraint pose estimation algorithm and the cone-angle method, and the direct linear transformation (DLT). One common drawback of both approaches is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation method, relatively demanding requirements are placed on the distribution and abundance of control points: the control points cannot all lie in the same plane, and at least six non-coplanar control points are needed. This limits its usefulness. The initial value directly influences the convergence and convergence speed of the subsequent computation. In this paper, the nonlinear collinearity equations, including distortion terms, are linearized using a Taylor series expansion to calculate the initial values of the camera exterior orientation. Finally, the initial values are shown through experiments to be better.

  9. A hierarchical estimator development for estimation of tire-road friction coefficient.

    PubMed

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.

  10. A hierarchical estimator development for estimation of tire-road friction coefficient

    PubMed Central

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified “magic formula” tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method. PMID:28178332
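
    The GRNN at the core of the lower estimator is a kernel-weighted average of stored training targets; a minimal sketch with invented slip/force features (the feature choice, bandwidth, and training pairs are illustrative, not the paper's calibration):

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, x_query, sigma=0.1):
        # General regression neural network (Nadaraya-Watson kernel regression):
        # the prediction is a Gaussian-weighted average of training targets, so
        # only samples are stored, not an explicit tire model.
        d2 = ((X_train - x_query) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * sigma**2))
        return (w @ y_train) / w.sum()

    # Illustrative training pairs: (slip ratio, normalized longitudinal force) -> mu.
    X = np.array([[0.02, 0.15], [0.05, 0.35], [0.08, 0.55], [0.12, 0.70]])
    mu = np.array([0.2, 0.45, 0.7, 0.9])
    mu_hat = grnn_predict(X, mu, np.array([0.06, 0.40]))
    ```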

  11. Estimation of the Young's modulus of the human pars tensa using in-situ pressurization and inverse finite-element analysis.

    PubMed

    Rohani, S Alireza; Ghomashchi, Soroush; Agrawal, Sumit K; Ladak, Hanif M

    2017-03-01

    Finite-element models of the tympanic membrane are sensitive to the Young's modulus of the pars tensa. The aim of this work is to estimate the Young's modulus under a different experimental paradigm than currently used on the human tympanic membrane. These additional values could potentially be used by the auditory biomechanics community for building consensus. The Young's modulus of the human pars tensa was estimated through inverse finite-element modelling of an in-situ pressurization experiment. The experiments were performed on three specimens with a custom-built pressurization unit at a quasi-static pressure of 500 Pa. The shape of each tympanic membrane before and after pressurization was recorded using a Fourier transform profilometer. The samples were also imaged using micro-computed tomography to create sample-specific finite-element models. For each sample, the Young's modulus was then estimated by numerically optimizing its value in the finite-element model so simulated pressurized shapes matched experimental data. The estimated Young's modulus values were 2.2 MPa, 2.4 MPa and 2.0 MPa, and are similar to estimates obtained using in-situ single-point indentation testing. The estimates were obtained under the assumptions that the pars tensa is linearly elastic, uniform, isotropic with a thickness of 110 μm, and the estimates are limited to quasi-static loading. Estimates of pars tensa Young's modulus are sensitive to its thickness and inclusion of the manubrial fold. However, they do not appear to be sensitive to optimization initialization, height measurement error, pars flaccida Young's modulus, and tympanic membrane element type (shell versus solid). Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Estimating Canopy Dark Respiration for Crop Models

    NASA Technical Reports Server (NTRS)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  13. Free vibration of rectangular plates with a small initial curvature

    NASA Technical Reports Server (NTRS)

    Adeniji-Fashola, A. A.; Oyediran, A. A.

    1988-01-01

    The method of matched asymptotic expansions is used to solve the transverse free vibration of a slightly curved, thin rectangular plate. Analytical results for natural frequencies and mode shapes are presented in the limit when the dimensionless bending rigidity, epsilon, is small compared with in-plane forces. Results for different boundary conditions are obtained when the initial deflection is: (1) a polynomial in both directions, and (2) the product of a polynomial and a trigonometric function, and arbitrary. For the arbitrary initial deflection case, the Fourier series technique is used to define the initial deflection. The results obtained show that the natural frequencies of vibration of slightly curved plates are coincident with those of perfectly flat, prestressed rectangular plates. However, the eigenmodes are very different from those of initially flat prestressed rectangular plates. The total deflection is found to be the sum of the initial deflection, the deflection resulting from the solution of the flat plate problem, and the deflection resulting from the static problem.

  14. Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2018-01-01

    Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σNEE) for the different ensemble members, from ~2 to 3 g C m-2 yr-1 (with uncertain parameters) to ~45 g C m-2 yr-1 (C3 grass) and ~75 g C m-2 yr-1 (C3 crops) with perturbed forcings. This increase in uncertainty is related to the impact of the meteorological forcings on leaf onset and senescence, and enhanced/reduced drought stress related to perturbation of precipitation. The NEE uncertainty for the forest plant functional type (PFT) was considerably lower (

  15. Online estimation of room reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.

    2003-04-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
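
    The decay-constant estimate can be sketched as a profile maximum-likelihood fit, assuming (as in the abstract) that the diffusive tail is zero-mean Gaussian noise with variance σ²·exp(−2t/τ); the sampling rate and the true decay constant below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs, tau_true = 8000.0, 0.12                    # sample rate [Hz], decay constant [s]
    t = np.arange(int(0.5 * fs)) / fs
    y = rng.normal(size=t.size) * np.exp(-t / tau_true)    # simulated diffusive tail

    def profile_loglik(tau):
        sigma2 = np.mean(y ** 2 * np.exp(2.0 * t / tau))   # ML sigma^2 for this tau
        return -0.5 * y.size * np.log(sigma2) + np.sum(t) / tau

    taus = np.linspace(0.01, 0.5, 500)
    tau_hat = taus[np.argmax([profile_loglik(tau) for tau in taus])]
    print(f"tau = {tau_hat:.3f} s, RT60 = {3 * np.log(10) * tau_hat:.2f} s")
    ```

    RT60 follows from the amplitude envelope e^(-t/τ): the level drops 60 dB after t = 3·ln(10)·τ ≈ 6.91τ. An order-statistics filter over a stream of such estimates, as the authors describe, would then reject outliers caused by non-decaying speech segments.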

  16. Estimating irradiated nuclear fuel characteristics by nonlinear multivariate regression of simulated gamma-ray emissions

    NASA Astrophysics Data System (ADS)

    Åberg Lindell, M.; Andersson, P.; Grape, S.; Håkansson, A.; Thulin, M.

    2018-07-01

    In addition to verifying operator-declared parameters of spent nuclear fuel, the ability to experimentally infer such parameters with a minimum of intrusiveness has long been sought after in the nuclear safeguards community. It can also be anticipated that such ability would be of interest for quality assurance in e.g. recycling facilities in future Generation IV nuclear fuel cycles. One way to obtain information regarding spent nuclear fuel is to measure various gamma-ray intensities using high-resolution gamma-ray spectroscopy. While intensities from a few isotopes obtained from such measurements have traditionally been used pairwise, the approach in this work is to simultaneously analyze correlations between all available isotopes, using multivariate analysis techniques. Based on this approach, a methodology for inferring burnup, cooling time, and initial fissile content of PWR fuels using passive gamma-ray spectroscopy data has been investigated. PWR nuclear fuels, of UOX and MOX type, and their gamma-ray emissions, were simulated using the Monte Carlo code Serpent. Data comprising relative isotope activities was analyzed with decision trees and support vector machines, for predicting fuel parameters and their associated uncertainties. From this work it may be concluded that up to a cooling time of twenty years, the 95% prediction intervals of burnup, cooling time and initial fissile content could be inferred to within approximately 7 MWd/kgHM, 8 months, and 1.4 percentage points, respectively. An attempt aiming to estimate the plutonium content in spent UOX fuel, using the developed multivariate analysis model, is also presented. The results for Pu mass estimation are promising and call for further studies.
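
    A toy version of the regression step: a decision tree maps relative isotope activities to burnup. The three features loosely mimic Cs-137, Cs-134, and Eu-154 behaviour (the half-lives are real; the burnup dependencies and noise level are invented, and nothing here comes from the Serpent simulations):

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(1)
    burnup = rng.uniform(10, 60, 500)                    # MWd/kgHM
    cooling = rng.uniform(1, 20, 500)                    # years
    X = np.column_stack([
        burnup * np.exp(-0.023 * cooling),               # ~Cs-137 (30.1 y half-life)
        burnup ** 2 * np.exp(-0.34 * cooling),           # ~Cs-134 (2.1 y)
        burnup ** 1.5 * np.exp(-0.081 * cooling),        # ~Eu-154 (8.6 y)
    ]) * rng.normal(1.0, 0.02, (500, 3))                 # 2% measurement noise

    model = DecisionTreeRegressor(min_samples_leaf=5).fit(X[:400], burnup[:400])
    err = model.predict(X[400:]) - burnup[400:]
    print(f"hold-out RMSE = {np.sqrt(np.mean(err ** 2)):.1f} MWd/kgHM")
    ```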

  17. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
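
    For readers unfamiliar with the setup, here is a plain (uncorrected) bootstrap standard error for the person variance component in a persons-by-items design; this is the naive resampling whose bias the article addresses, not Brennan's bias-correcting procedure, and the data are simulated:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(0, 1, (50, 8)) + rng.normal(0, 1, (50, 1))   # item error + person effect

    def sigma2_p(data):
        """ANOVA estimator of the person variance component in a p x i design."""
        n_p, n_i = data.shape
        pm, im, gm = data.mean(1), data.mean(0), data.mean()
        ms_p = n_i * np.sum((pm - gm) ** 2) / (n_p - 1)
        resid = data - pm[:, None] - im[None, :] + gm
        ms_pi = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
        return (ms_p - ms_pi) / n_i

    boot = [sigma2_p(X[rng.integers(0, 50, 50)]) for _ in range(2000)]   # resample persons
    print(f"sigma2_p = {sigma2_p(X):.3f}, bootstrap SE = {np.std(boot):.3f}")
    ```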

  18. No correlation between ultrasound placental grading at 31-34 weeks of gestation and a surrogate estimate of organ function at term obtained by stereological analysis.

    PubMed

    Yin, T T; Loughna, P; Ong, S S; Padfield, J; Mayhew, T M

    2009-08-01

    We test the experimental hypothesis that early changes in the ultrasound appearance of the placenta reflect poor or reduced placental function. The sonographic (Grannum) grade of placental maturity was compared to placental function as expressed by the morphometric oxygen diffusive conductance of the villous membrane. Ultrasonography was used to assess the Grannum grade of 32 placentas at 31-34 weeks of gestation. Indications for the scans included a history of previous fetal abnormalities, previous fetal growth problems or suspicion of IUGR. Placentas were classified from grade 0 (most immature) to grade III (most mature). We did not exclude smokers or complicated pregnancies as we aimed to correlate the early appearance of mature placentas with placental function. After delivery, microscopical fields on formalin-fixed, trichrome-stained histological sections of each placenta were obtained by multistage systematic uniform random sampling. Using design-based stereological methods, the exchange surface areas of peripheral (terminal and intermediate) villi and their fetal capillaries and the arithmetic and harmonic mean thicknesses of the villous membrane (maternal surface of villous trophoblast to adluminal surface of vascular endothelium) were estimated. An index of the variability in thickness of this membrane, and an estimate of its oxygen diffusive conductance, were derived secondarily as were estimates of the mean diameters and total lengths of villi and fetal capillaries. Group comparisons were drawn using analysis of variance. We found no significant differences in placental volume or composition or in the dimensions or diffusive conductances of the villous membrane. Subsequent exclusion of smokers did not alter these main findings. Grannum grades at 31-34 weeks of gestation appear not to provide reliable predictors of the functional capacity of the term placenta as expressed by the surrogate measure, morphometric diffusive conductance.

  19. [A method for obtaining redshifts of quasars based on wavelet multi-scaling feature matching].

    PubMed

    Liu, Zhong-Tian; Li, Xiang-Ru; Wu, Fu-Chao; Zhao, Yong-Heng

    2006-09-01

    The LAMOST project, the world's largest sky survey project being implemented in China, is expected to obtain 10^5 quasar spectra. The main objective of the present article is to explore methods that can be used to estimate the redshifts of quasar spectra from LAMOST. Firstly, the features of the broad emission lines are extracted from the quasar spectra to overcome the disadvantage of a low signal-to-noise ratio. Then the redshifts of the quasar spectra can be estimated by using multi-scaling feature matching. An experiment with the 15,715 quasars from the SDSS DR2 shows that the rate of correct redshift estimation by the method is 95.13% within an error range of 0.02. This method was designed to obtain the redshifts of quasar spectra with relative flux and a low signal-to-noise ratio; it is applicable to the LAMOST data and helps in the study of quasars and the large-scale structure of the universe.
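
    A stripped-down illustration of the matching step (without the wavelet machinery): candidate redshifts are scored by how closely the shifted rest-frame quasar lines land on detected emission-line peaks. The line list is standard; the "detected" peaks are fabricated at z = 2.13:

    ```python
    import numpy as np

    rest = np.array([1215.7, 1549.1, 1908.7, 2798.8])   # Ly-a, CIV, CIII], MgII [Angstrom]
    observed_peaks = rest * (1 + 2.13)                  # pretend feature extraction found these

    def score(z):
        """Total distance from each shifted line to its nearest detected peak."""
        return sum(np.min(np.abs(observed_peaks - lam)) for lam in rest * (1 + z))

    zgrid = np.arange(0.0, 5.0, 0.001)
    z_hat = zgrid[np.argmin([score(z) for z in zgrid])]
    print(f"estimated z = {z_hat:.3f}")
    ```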

  20. Estimation of energetic efficiency of heat supply in front of the aircraft at supersonic accelerated flight. Part 1. Mathematical models

    NASA Astrophysics Data System (ADS)

    Latypov, A. F.

    2008-12-01

    Fuel economy on the boost trajectory of an aerospace plane was estimated for the case of energy supply to the free stream. Initial and final flight velocities were specified. The model of a gliding flight above cold air in an infinite isobaric thermal wake was used. The fuel consumption rates were compared along the optimal trajectory. The calculations were carried out for a combined power plant consisting of a ramjet and a liquid-propellant engine. An exergy model was built in the first part of the paper to estimate the ramjet thrust and specific impulse. A quadratic dependence of aircraft aerodynamic drag on aerodynamic lift was used. The energy for flow heating was obtained at the expense of an equivalent reduction of the exergy of combustion products. Dependencies were obtained for the increase in the range coefficient of cruise flight at different Mach numbers. The second part of the paper presents a mathematical model for the boost interval of the aircraft flight trajectory and computational results for the reduction of fuel consumption on the boost trajectory for a given value of the energy supplied in front of the aircraft.
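
    The quadratic drag-lift dependence mentioned above is presumably the standard drag polar; in coefficient and dimensional form (our reading, not the paper's notation):

    ```latex
    C_D = C_{D0} + k\,C_L^{\,2},
    \qquad
    D = q S \left( C_{D0} + k C_L^{\,2} \right) = D_0 + k\,\frac{L^2}{qS},
    \qquad q = \tfrac{1}{2}\rho V^2 .
    ```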

  1. An improved method to estimate reflectance parameters for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods, with simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and the second method, with spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
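
    The two-stage least-squares idea can be sketched as follows, assuming a Lambertian diffuse term plus a Torrance-Sparrow-like Gaussian specular lobe; the geometry, parameter values, and masking thresholds are all illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    cos_i = rng.uniform(0.3, 1.0, 200)                 # cos(incidence angle)
    alpha = rng.uniform(0.0, 0.5, 200)                 # specular half-angle [rad]
    kd, ks, sigma = 0.6, 0.9, 0.12                     # "true" parameters
    I = kd * cos_i + ks * np.exp(-alpha ** 2 / (2 * sigma ** 2))

    # Stage 1: treat reflection as diffuse-only; least-squares fit of kd on the
    # pixels least affected by the specular lobe
    m = alpha > 0.4
    kd_hat = np.sum(I[m] * cos_i[m]) / np.sum(cos_i[m] ** 2)

    # Stage 2: subtract the diffuse part, log-transform, fit a line in alpha^2
    spec = I - kd_hat * cos_i
    s = spec > 1e-3                                    # keep pixels with specular energy
    slope, icept = np.polyfit(alpha[s] ** 2, np.log(spec[s]), 1)
    print(f"kd={kd_hat:.3f}, ks={np.exp(icept):.3f}, sigma={np.sqrt(-0.5 / slope):.3f}")
    ```

    The log transformation is what makes the Torrance-Sparrow fit linear: log I_s = log ks − α²/(2σ²), so gloss intensity and surface roughness fall out of an ordinary line fit.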

  2. Using Empirical Data to Estimate Potential Functions in Commodity Markets: Some Initial Results

    NASA Astrophysics Data System (ADS)

    Shen, C.; Haven, E.

    2017-12-01

    This paper focuses on estimating real and quantum potentials from financial commodities. The log returns of six common commodities are considered. We find that some phenomena, such as the vertical potential walls and the time scale issue of the variation on returns, also exist in commodity markets. By comparing the quantum and classical potentials, we attempt to demonstrate that the information within these two types of potentials is different. We believe this empirical result is consistent with the theoretical assumption that quantum potentials (when embedded into social science contexts) may contain some social cognitive or market psychological information, while classical potentials mainly reflect 'hard' market conditions. We also compare the two potential forces and explore their relationship by simply estimating the Pearson correlation between them. The medium or weak interaction effect may indicate that the cognitive system among traders may be affected by those 'hard' market conditions.

  3. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
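
    As a concrete example of one of the four approaches, here is a nonparametric bootstrap percentile interval around a three-parameter Hill fit; the data are synthetic, not ToxCast, and the parameterization is one common convention:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(c, top, ec50, n):
        return top * c ** n / (ec50 ** n + c ** n)

    rng = np.random.default_rng(4)
    conc = np.logspace(-2, 2, 9)
    resp = hill(conc, 100.0, 1.5, 1.2) + rng.normal(0, 5, conc.size)

    p0, boot = (resp.max(), np.median(conc), 1.0), []
    for _ in range(1000):
        idx = rng.integers(0, conc.size, conc.size)    # resample (c, r) pairs
        try:
            p, _ = curve_fit(hill, conc[idx], resp[idx], p0=p0, maxfev=5000)
            boot.append(p)
        except (RuntimeError, ValueError):
            continue                                   # skip non-converged refits
    lo, hi = np.percentile(np.array(boot)[:, 1], [2.5, 97.5])
    print(f"EC50 95% bootstrap interval: [{lo:.2f}, {hi:.2f}]")
    ```

    Comparing the actual coverage of such intervals against the nominal 95% on simulated data is exactly the performance check the abstract describes.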

  4. Silent Aircraft Initiative Concept Risk Assessment

    NASA Technical Reports Server (NTRS)

    Nickol, Craig L.

    2008-01-01

    A risk assessment of the Silent Aircraft Initiative's SAX-40 concept design for extremely low noise has been performed. A NASA team developed a list of 27 risk items, and evaluated the level of risk for each item in terms of the likelihood that the risk would occur and the consequences of the occurrence. The following risk items were identified as high risk, meaning that the combination of likelihood and consequence put them into the top one-fourth of the risk matrix: structures and weight prediction; boundary-layer ingestion (BLI) and inlet design; variable-area exhaust and thrust vectoring; displaced-threshold and continuous descent approach (CDA) operational concepts; cost; human factors; and overall noise performance. Several advanced-technology baseline concepts were created to serve as a basis for comparison to the SAX-40 concept. These comparisons indicate that the SAX-40 would have significantly greater research, development, test, and engineering (RDT&E) and production costs than a conventional aircraft with similar technology levels. Therefore, the cost of obtaining the extremely low noise capability that has been estimated for the SAX-40 is significant. The SAX-40 concept design proved successful in focusing attention toward low noise technologies and in raising public awareness of the issue.

  5. Mercury emission estimates from fires: an initial inventory for the United States.

    PubMed

    Wiedinmyer, Christine; Friedli, Hans

    2007-12-01

    Recent studies have shown that emissions of mercury (Hg), a hazardous air pollutant, from fires can be significant. However, to date, these emissions have not been well-quantified for the entire United States. Daily emissions of Hg from fires in the lower 48 states of the United States (LOWER48) and in Alaska were estimated for 2002-2006 using a simple fire emissions model. Emission factors of Hg from fires in different ecosystems were compiled from published plume studies and from soil-based assessments. Annual averaged emissions of Hg from fires in the LOWER48 and Alaska were 44 (20-65) metric tons yr^-1, equivalent to approximately 30% of the U.S. EPA 2002 National Emissions Inventory for Hg. Alaska had the highest averaged monthly emissions of all states; however, the emissions have a high temporal variability. Emissions from forests dominate the inventory, suggesting that Hg emissions from agricultural fires are not significant on an annual basis. The uncertainty in the Hg emission factors due to limited data leads to an uncertainty in the emission estimates on the order of +/-50%. Research is still needed to better constrain Hg emission factors from fires, particularly in the eastern U.S. and for ecosystems other than forests.

  6. Analysis of short pulse laser altimetry data obtained over horizontal path

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Tsai, B. M.; Gardner, C. S.

    1983-01-01

    Recent pulsed measurements of atmospheric delay obtained by ranging to more realistic targets, including a simulated ocean target and an extended plate target, are discussed. These measurements are used to estimate the expected timing accuracy of a correlation receiver system. The experimental work was conducted using a pulsed two-color laser altimeter.

  7. Initial Estimates of Optical Constants of Mars Candidate Materials

    NASA Technical Reports Server (NTRS)

    Rousch, Ted L.; Brown, Adrian Jon; Bishop, Janice L.; Blake, David F.; Bristow, Thomas F.

    2013-01-01

    Data obtained at visible and near-infrared wavelengths by OMEGA on Mars Express and CRISM on MRO provide definitive evidence for the presence of phyllosilicates and other hydrated phases on Mars. A diverse range of both Fe/Mg-OH and Al-OH-bearing phyllosilicates were identified, including the smectites nontronite, saponite, and montmorillonite. To constrain the abundances of these phyllosilicates, spectral analyses of mixtures are needed. We report on our effort to enable the quantitative evaluation of the abundance of hydrated-hydroxylated silicates when they are contained in mixtures. We include two-component mixtures of hydrated/hydroxylated silicates with each other and with two analogs for other Martian materials: pyroxene (enstatite) and palagonitic soil (an alteration product of basaltic glass, hereafter referred to as palagonite). For the hydrated-hydroxylated silicates we include saponite and montmorillonite (Mg- and Al-rich smectites). We prepared three size separates of each end-member for study: 20-45, 63-90, and 125-150 microns.

  8. Shock Initiation and Equation of State of Ammonium Nitrate

    NASA Astrophysics Data System (ADS)

    Robbins, David; Sheffield, Steve; Dattelbaum, Dana; Chellappa, Raja; Velisavljevic, Nenad

    2013-06-01

    Ammonium nitrate (AN) is a widely used fertilizer and mining explosive commonly found in ammonium nitrate-fuel oil. Neat AN is a non-ideal explosive with measured detonation velocities approaching 4 km/s. Previously, we reported a thermodynamically-complete equation of state for AN based on its maximum density, and showed that near-full density AN did not initiate when subjected to shock input conditions up to 22 GPa. In this work, we extend these initial results by presenting new Hugoniot data for intermediate density neat AN obtained from gas gun-driven plate impact experiments. AN samples at densities from 1.8 to 1.5 g/cm3 were impacted into LiF windows using a two-stage light gas gun. Dual VISARs were used to measure the interfacial particle velocity wave profile as a function of time following impact. The new Hugoniot data, in addition to updates to thermodynamic parameters derived from structural analysis and vibrational spectroscopy measurements in high pressure diamond anvil cell experiments, are used to refine the unreacted EOS for AN. Furthermore, shock initiation of neat AN was observed as the initial porosity increased (density decreased). Insights into the relationship(s) between initial density and shock initiation sensitivity are also presented, from evidence of shock initiation in the particle velocity profiles obtained for the lower density AN samples.

  9. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately, deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising the image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed and that the true but unknown pixel values follow a probability mass function over a finite set of non-negative values; since the observed data also assume finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function under these assumptions.
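
    A small sketch of the constrained least-squares step: the prior pmf of true pixel values is recovered from a low-count histogram by fitting a Poisson mixture subject to p >= 0 and sum(p) <= 1. The support size, sample size, and true pmf are invented:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson

    support = np.arange(0, 16)                       # candidate true pixel values
    kmax = 30
    A = poisson.pmf(np.arange(kmax + 1)[:, None], support[None, :])   # A[k, j] = P(k | j)

    rng = np.random.default_rng(5)
    p_true = np.exp(-0.5 * (support - 6) ** 2 / 4.0); p_true /= p_true.sum()
    counts = rng.poisson(rng.choice(support, 5000, p=p_true))
    h = np.bincount(np.clip(counts, 0, kmax), minlength=kmax + 1) / counts.size

    res = minimize(lambda p: np.sum((A @ p - h) ** 2),
                   np.full(support.size, 1.0 / support.size),
                   method='SLSQP', bounds=[(0, 1)] * support.size,
                   constraints={'type': 'ineq', 'fun': lambda p: 1.0 - np.sum(p)})
    print(res.x.round(3))                            # estimated pmf over the support
    ```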

  10. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    PubMed

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
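
    If, as is common for such models, the depuration stage is taken to be first-order (exponential) decay, the minimum time to a target level has a one-line closed form; the rate constant below is illustrative, not a value from the paper:

    ```python
    import numpy as np

    def min_depuration_hours(c0, c_target, k_per_hour):
        """Smallest t with c0 * exp(-k t) <= c_target."""
        return np.log(c0 / c_target) / k_per_hour

    # e.g. reduce 1000 genome copies/g to 200 at k = 0.01 per hour
    print(f"{min_depuration_hours(1000, 200, 0.01):.0f} h")   # ~161 h
    ```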

  11. Automatic corn-soybean classification using Landsat MSS data. I - Near-harvest crop proportion estimation. II - Early season crop proportion estimation

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.

    1984-01-01

    The techniques used initially for the identification of cultivated crops from Landsat imagery depended greatly on the interpretation of film products by a human analyst. This approach was neither very effective nor objective. Since 1978, new methods for crop identification have been developed. Badhwar et al. (1982) showed that multitemporal-multispectral data could be reduced to a simple feature space of alpha and beta and that these features would separate corn and soybean very well. However, there are disadvantages related to the use of the alpha and beta parameters. The present investigation is concerned with a suitable method for extracting the required features. Attention is given to a profile model for crop discrimination, corn-soybean separation using profile parameters, and an automatic labeling (target recognition) method. The developed technique is extended to obtain a procedure which makes it possible to estimate the crop proportions of corn and soybean from Landsat data early in the growing season.

  12. Transient high frequency signal estimation: A model-based processing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, F.L.

    1985-03-22

    By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections are removed by direct application of a Wiener-type estimation algorithm after the appropriate input is synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple applications of the processing procedure were required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
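
    The autocorrelation-ratio idea can be sketched as follows for a single echo, under our simplifying assumption that the incident signal is white (so its autocorrelation vanishes at the echo lag); the delay and weight are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    s = rng.normal(size=4000) * np.exp(-np.arange(4000) / 800.0)   # incident transient
    d, r_true = 250, 0.5                             # echo delay [samples] and weight
    x = s.copy(); x[d:] += r_true * s[:-d]           # measured = incident + echo

    q = np.dot(x[d:], x[:-d]) / np.dot(x, x)         # R(d)/R(0) ~ r/(1+r^2) for white s
    r_hat = (1 - np.sqrt(1 - 4 * q ** 2)) / (2 * q)  # invert, taking the |r|<1 root

    s_hat = x.copy()                                 # recursive inverse of (1 + r z^-d)
    for n in range(d, x.size):
        s_hat[n] -= r_hat * s_hat[n - d]
    print(f"r_hat = {r_hat:.3f}")                    # ~0.5
    ```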

  13. Consultant management estimating tool : users' manual.

    DOT National Transportation Integrated Search

    2012-04-01

    The Switchboard is the opening form displayed to users. Use the Switchboard to access the main functions of the estimating tool. Double-click on a box to select the desired function. From the Switchboard a user can initiate a search for project...

  14. An open source framework for tracking and state estimation ('Stone Soup')

    NASA Astrophysics Data System (ADS)

    Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger

    2017-05-01

    The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulties recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of Tracking and State Estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces and access to challenging data sets. Keywords: Tracking,
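
    As an indication of the kind of baseline component such a framework packages, here is a minimal constant-velocity Kalman filter; this is generic textbook code, not the Stone Soup API:

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # position-only measurement
    Q, R = 0.01 * np.eye(2), np.array([[0.5]])

    x, P = np.zeros(2), np.eye(2)
    for z in [1.1, 2.0, 2.9, 4.2, 5.1]:     # incoming position measurements
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R                           # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)           # update state
        P = (np.eye(2) - K @ H) @ P
    print(x)   # filtered position (near last measurement) and velocity (~1)
    ```

    Benchmarking many such estimators against shared, challenging data sets is precisely the gap the initiative aims to fill.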

  15. Component Repair Times Obtained from MSPI Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eide, Steven A.; Cadwallader, Lee

    Information concerning times to repair or restore equipment to service given a failure is valuable to probabilistic risk assessments (PRAs). Examples of such uses in modern PRAs include estimation of the probability of failing to restore a failed component within a specified time period (typically tied to recovering a mitigating system before core damage occurs at nuclear power plants) and the determination of mission times for support system initiating event (SSIE) fault tree models. Information on equipment repair or restoration times applicable to PRA modeling is limited and dated for U.S. commercial nuclear power plants. However, the Mitigating Systems Performance Index (MSPI) program covering all U.S. commercial nuclear power plants provides up-to-date information on restoration times for a limited set of component types. This paper describes the MSPI program data available and analyzes the data to obtain median and mean component restoration times as well as non-restoration cumulative probability curves. The MSPI program provides guidance for monitoring both planned and unplanned outages of trains of selected mitigating systems deemed important to safety. For systems included within the MSPI program, plants monitor both train unavailability (UA) and component unreliability (UR) against baseline values. If the combined system UA and UR increases sufficiently above established baseline results (converted to an estimated change in core damage frequency or CDF), a "white" (or worse) indicator is generated for that system. That in turn results in increased oversight by the US Nuclear Regulatory Commission (NRC) and can impact a plant's insurance rating. Therefore, there is pressure to return MSPI program components to service as soon as possible after a failure occurs. Three sets of unplanned outages might be used to determine the component repair durations desired in this article: all unplanned outages for the train type that includes the component of interest, only

  16. Initial guidelines and estimates for a power system with inertial (flywheel) energy storage

    NASA Technical Reports Server (NTRS)

    Slifer, L. W., Jr.

    1980-01-01

    The starting point for the assessment of a spacecraft power system utilizing inertial (flywheel) energy storage is presented. Both general and specific guidelines are defined for the assessment of a modular flywheel system, operationally similar to but with significantly greater capability than the multimission modular spacecraft (MMS) power system. Goals for the flywheel system are defined in terms of efficiency estimates and mass estimates for the system components. The inertial storage power system uses a 5 kW-hr flywheel storage component at 50 percent depth of discharge (DOD). It is capable of supporting an average load of 3 kW, including a peak load of 7.5 kW for 10 percent of the duty cycle, in low Earth orbit operation. The specific power goal for the system is 10 W/kg, consisting of a 56 W/kg (end of life) solar array, a 21.7 W-hr/kg (at 50 percent DOD) flywheel, and 43 W/kg power processing (conditioning, control, and distribution).

  17. What do parents know about their children's comprehension of emotions? accuracy of parental estimates in a community sample of pre-schoolers.

    PubMed

    Kårstad, S B; Kvello, O; Wichstrøm, L; Berg-Nielsen, T S

    2014-05-01

    Parents' ability to correctly perceive their child's skills has implications for how the child develops. In some studies, parents have been shown to overestimate their child's abilities in areas such as IQ, memory and language. Emotion Comprehension (EC) is a skill central to children's emotion regulation, initially learned from their parents. In this cross-sectional study we first tested children's EC and then asked parents to estimate the child's performance; thus, a measure of the agreement between child performance and parents' estimates was obtained. Subsequently, we obtained information on child and parent factors that might predict parents' accuracy in estimating their child's EC. Child EC and parental accuracy of estimation were tested by studying a community sample of 882 4-year-olds who completed the Test of Emotion Comprehension (TEC). The parents were instructed to guess their children's responses on the TEC. Predictors of parental accuracy of estimation were the child's actual performance on the TEC, child language comprehension, observed parent-child interaction, the education level of the parent, and child mental health. Ninety-one per cent of the parents overestimated their children's EC. On average, parents estimated that their 4-year-old children would display the level of EC corresponding to a 7-year-old. Accuracy of parental estimation was predicted by high child performance on the TEC, advanced child language comprehension, and more optimal parent-child interaction. Parents' ability to estimate the level of their child's EC was characterized by a substantial overestimation. The more competent the child, and the more sensitive and structuring the parent was in interacting with the child, the more accurate the parent was in the estimation of their child's EC. © 2013 John Wiley & Sons Ltd.

  18. Partitioning the Uncertainty in Estimates of Mean Basal Area Obtained from 10-year Diameter Growth Model Predictions

    Treesearch

    Ronald E. McRoberts

    2005-01-01

    Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
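
    A toy Monte Carlo propagation of the three named uncertainty sources through a fictitious linear diameter-growth model; all coefficients and error magnitudes below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    b0, b1 = 0.8, 0.05                           # hypothetical growth-model coefficients
    se_b0, se_b1 = 0.10, 0.005                   # parameter-estimate uncertainty
    dbh_obs, se_meas, se_resid = 25.0, 0.5, 0.6  # observed dbh [cm] and error SDs

    sims = []
    for _ in range(10000):
        dbh = dbh_obs + rng.normal(0, se_meas)             # measurement error
        beta0 = b0 + rng.normal(0, se_b0)                  # parameter uncertainty
        beta1 = b1 + rng.normal(0, se_b1)
        growth = beta0 + beta1 * dbh + rng.normal(0, se_resid)   # residual variability
        sims.append(dbh + growth)
    print(f"predicted dbh: mean {np.mean(sims):.1f} cm, sd {np.std(sims):.2f} cm")
    ```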

  19. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
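
    The core of the single-sensor approach can be sketched in a few lines: draw click scenarios, score each with the passive sonar equation, and average a detector characterization over the resulting SNRs. Every number below is a placeholder, not a value from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 100_000
    r = rng.uniform(50.0, 8000.0, n)              # range to clicking whale [m]
    SL = rng.normal(200.0, 5.0, n)                # source level [dB re 1 uPa @ 1 m]
    NL, DT = 65.0, 10.0                           # noise level, detection threshold [dB]
    TL = 20.0 * np.log10(r) + 0.03 * r / 1000.0   # spherical spreading + absorption

    snr = SL - TL - NL - DT                       # signal excess [dB]
    p_det = 1.0 / (1.0 + np.exp(-snr / 3.0))      # logistic detector characterization
    print(f"mean P(detect a click) = {p_det.mean():.3f}")
    ```

    The estimated detection probability then converts counted clicks into animal density once call rate and false-positive rate are factored in.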

  20. A new approach for estimating the Jupiter and Saturn gravity fields using Juno and Cassini measurements, trajectory estimation analysis, and a dynamical wind model optimization

    NASA Astrophysics Data System (ADS)

    Galanti, Eli; Durante, Daniele; Iess, Luciano; Kaspi, Yohai

    2017-04-01

    The ongoing Juno spacecraft measurements are improving our knowledge of Jupiter's gravity field. Similarly, the Cassini Grand Finale will improve the gravity estimate of Saturn. The analysis of the Juno and Cassini Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity fields of Jupiter and Saturn, additional information needs to be incorporated into the analysis, especially with regards to the planets' wind structures. In this work we propose a new iterative approach for the estimation of Jupiter and Saturn gravity fields, using simulated measurements, a trajectory estimation model, and an adjoint based inverse thermal wind model. Beginning with an artificial gravitational field, the trajectory estimation model is used to obtain the gravitational moments. The solution from the trajectory model is then used as an initial guess for the thermal wind model, and together with an optimization method, the likely penetration depth of the winds is computed, and its uncertainty is evaluated. As a final step, the gravity harmonics solution from the thermal wind model is given back to the trajectory model, along with an estimate of their uncertainties, to be used as a priori for a new calculation of the gravity field. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that by using this method some of the gravitational moments are fitted better to the 'observed' ones, mainly due to the added information from the dynamical model which includes the wind structure and its depth. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity moments estimated from the Juno and Cassini radio science experiments.

  1. Estimation in SEM: A Concrete Example

    ERIC Educational Resources Information Center

    Ferron, John M.; Hess, Melinda R.

    2007-01-01

    A concrete example is used to illustrate maximum likelihood estimation of a structural equation model with two unknown parameters. The fitting function is found for the example, as are the vector of first-order partial derivatives, the matrix of second-order partial derivatives, and the estimates obtained from each iteration of the Newton-Raphson…
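
    For readers who want the iteration itself, here is generic Newton-Raphson on a two-parameter likelihood (a normal mean and log-variance, not the SEM fitting function of the article):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    y = rng.normal(2.0, 1.5, 200)

    def grad_hess(mu, logv):
        """First and second partial derivatives of the normal log-likelihood."""
        v, e = np.exp(logv), y - mu
        g = np.array([e.sum() / v, -y.size / 2 + (e ** 2).sum() / (2 * v)])
        H = np.array([[-y.size / v, -e.sum() / v],
                      [-e.sum() / v, -(e ** 2).sum() / (2 * v)]])
        return g, H

    theta = np.array([y.mean(), 0.0])            # crude starting values
    for _ in range(20):
        g, H = grad_hess(*theta)
        step = np.linalg.solve(H, g)
        theta = theta - step                     # Newton-Raphson update
        if np.max(np.abs(step)) < 1e-10:
            break
    print(theta[0], np.sqrt(np.exp(theta[1])))   # ML estimates, ~2.0 and ~1.5
    ```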

  2. Evaluation of the Current Techniques and Introduction of a Novel Approach for Estimating Maxillary Anterior Teeth Dimensions.

    PubMed

    Sayed, Mohammed E; Porwal, Amit; Al-Faraj, Nida A; Bajonaid, Amal M; Sumayli, Hassan A

    2017-07-01

    Several techniques and methods have been proposed to estimate the anterior teeth dimensions in edentulous patients. However, this procedure remains challenging, especially when preextraction records are not available. Therefore, the purpose of this study is to evaluate some of the existing extraoral and intraoral methods for estimation of anterior tooth dimensions and to propose a novel method for estimation of central incisor width (CIW) and length (CIL) for the Saudi population. Extraoral and intraoral measurements were recorded for a total of 236 subjects. Descriptive statistical analysis and Pearson's correlation tests were performed. Association was evaluated between combined anterior teeth width (CATW) and interalar width (IAW), intercommissural width (ICoW), and interhamular notch distance (IHND) plus 10 mm. Evaluation of the linear relationship between central incisor length (CIL) and facial height (FH), and between CIW and bizygomatic width (BZW), was also performed. Significant correlation was found between the CATW and ICoW and IAW (p-values <0.0001); however, no correlation was found relative to IHND plus 10 mm (p-value = 0.456). Further, no correlation was found between the FH and right CIL or between the BZW and right CIW (p-values = 0.255 and 0.822). The means of CIL, CIW, incisive papillae-fovea palatinae (IP-FP) distance, and IHND were used to estimate the central incisor dimensions: CIL = FP-IP distance/4.45, CIW = IHND/4.49. It was concluded that the ICoW and IAW measurements are the only predictive methods for estimating the initial reference value for CATW. A novel intraoral approach was hypothesized for estimation of CIW and CIL for the given population. Based on the results of the study, ICoW and IAW measurements can be useful in estimating the initial reference value for CATW, while the proposed novel approach using specific palatal dimensions can be used for estimating the width and length of central incisors. These methods are crucial to obtain esthetic treatment results
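
    The two proposed estimators transcribe directly into code (dimensions in mm; the input values below are hypothetical palatal measurements):

    ```python
    def estimate_central_incisor(ip_fp_mm, ihnd_mm):
        cil = ip_fp_mm / 4.45   # incisor length from incisive papilla-fovea palatinae distance
        ciw = ihnd_mm / 4.49    # incisor width from interhamular notch distance
        return cil, ciw

    print(estimate_central_incisor(46.0, 39.0))   # ~ (10.3 mm, 8.7 mm)
    ```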

  3. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    PubMed

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
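
    The backbone estimator is easy to state; here is Chapman's bias-corrected form of the Lincoln-Petersen estimator with hair-snag captures as session 1 and rub-tree captures as session 2 (the counts are hypothetical, and the paper's Huggins-Pledger models go well beyond this):

    ```python
    def chapman(n1, n2, m):
        """n1, n2: animals detected in each session; m: detected in both."""
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    print(round(chapman(n1=120, n2=85, m=30)))   # ~335 bears
    ```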

  4. LACIE large area acreage estimation. [United States of America

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Feiveson, A. H. (Principal Investigator)

    1979-01-01

    A sample-based wheat acreage estimate for a large area is obtained by multiplying its small-grains acreage estimate, as computed by the classification and mensuration subsystem, by the best available ratio of wheat to small-grains acreage obtained from historical data. In the United States, as in other countries with detailed historical data, an additional level of aggregation was required because sample allocation was made at the substratum level. The essential features of the estimation procedure for LACIE countries are included, along with procedures for estimating wheat acreage in the United States.

  5. Smoking Initiation and the Iron Law of Demand *

    PubMed Central

    Lillard, Dean R.; Molloy, Eamon; Sfekas, Andrew

    2012-01-01

    We show, with three longitudinal datasets, that cigarette taxes and prices affect smoking initiation decisions. Previous longitudinal studies have found somewhat mixed results, but generally have not found initiation to be sensitive to increases in price or tax. We show that the lack of statistical significance in previous studies may be at least partially attributed to a lack of policy variation in the time periods studied, truncated behavioral windows, or mis-assignment of price and tax rates in retrospective data (which occurs when one has no information about respondents’ prior state or region of residence in retrospective data). We show how each factor may affect the estimation of initiation models. Our findings suggest several problems that are applicable to initiation behavior generally, particularly those for which individuals’ responses to policy changes may be noisy or small in magnitude. PMID:23220458

  6. Transcriptional response of soybean suspension-cultured cells induced by Nod factors obtained from Bradyrhizobium japonicum USDA110.

    PubMed

    Hakoyama, Tsuneo; Yokoyama, Tadashi; Kouchi, Hiroshi; Tsuchiya, Ken-ichi; Kaku, Hisatoshi; Arima, Yasuhiro

    2002-11-01

    Genes responding to Nod factors were identified by the application of a differential display method to soybean suspension-cultured cells. Forty-five cDNA fragments derived from such genes were detected. Seven fragments (ssc1-ssc7) were successfully cloned. The putative product of the gene corresponding to ssc1 was estimated to be a disease-resistance protein related to the induction of the plant defense response against pathogens, and that corresponding to ssc7 was a sucrose transporter. Amino acid sequences deduced from full-length cDNA corresponding to ssc2 and ssc4 were investigated, and it was shown that these polypeptides were equipped with a leucine zipper motif and with phosphorylation sites targeted by tyrosine kinase and cAMP-dependent protein kinase, respectively. In a differential display experiment, the transcriptional levels of three genes corresponding to ssc2, ssc3 and ssc5 were estimated to be up-regulated at 6 h after initiation of the treatment and the remaining four were estimated to be down-regulated. However, transcription of the genes corresponding to all ssc was clearly repressed within 2 h after initiation of the treatment. Five of them were restored to their transcriptional level 6 h after initiation of the treatment, although the others were repressed throughout the experimental period.

  7. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  8. Rigidity of outermost MOTS: the initial data version

    NASA Astrophysics Data System (ADS)

    Galloway, Gregory J.

    2018-03-01

    In the paper Commun Anal Geom 16(1):217-229, 2008, a rigidity result was obtained for outermost marginally outer trapped surfaces (MOTSs) that do not admit metrics of positive scalar curvature. This allowed one to treat the "borderline case" in the author's work with R. Schoen concerning the topology of higher dimensional black holes (Commun Math Phys 266(2):571-576, 2006). The proof of this rigidity result involved bending the initial data manifold in the vicinity of the MOTS within the ambient spacetime. In this note we show how to circumvent this step, and thereby obtain a pure initial data version of this rigidity result and its consequence concerning the topology of black holes.

  9. Improved Estimates of Temporally Coherent Internal Tides and Energy Fluxes from Satellite Altimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)

    2002-01-01

    Satellite altimetry has opened a surprising new avenue to observing internal tides in the open ocean. The tidal surface signatures are very small, a few cm at most, but in many areas they are robust, owing to averaging over many years. By employing a simplified two-dimensional wave fitting to the surface elevations in combination with climatological hydrography to define the relation between the surface height and the current and pressure at depth, we may obtain rough estimates of internal tide energy fluxes. Initial results near Hawaii with Topex/Poseidon (T/P) data show good agreement with detailed 3D (three-dimensional) numerical models, but the altimeter picture is somewhat blurred owing to the widely spaced T/P tracks. The resolution may be enhanced somewhat by using data from the ERS-1 and ERS-2 satellite altimeters of the European Space Agency. The ERS satellite tracks are much more closely spaced (0.72 deg longitude vs. 2.83 deg for T/P), but the tidal estimates are less accurate than those for T/P. All altimeter estimates are also severely affected by noise in regions of high mesoscale variability, and we have obtained some success in reducing this contamination by employing a prior correction for mesoscale variability based on ten-day detailed sea surface height maps developed by Le Traon and colleagues. These improvements allow us to more clearly define the internal tide surface field and the corresponding energy fluxes. Results from throughout the global ocean will be presented.

  10. A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.

    PubMed

    Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff

    2014-01-01

    Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unknown even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone, and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automated algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.

  11. The Improved Estimation of Ratio of Two Population Proportions

    ERIC Educational Resources Information Center

    Solanki, Ramkrishna S.; Singh, Housila P.

    2016-01-01

    In this article, first we obtained the correct mean square error expression of Gupta and Shabbir's linear weighted estimator of the ratio of two population proportions. Later we suggested the general class of ratio estimators of two population proportions. The usual ratio estimator, Wynn-type estimator, Singh, Singh, and Kaur difference-type…

  12. Quick estimate of oil discovery from gas-condensate reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarem, A.M.

    1966-10-24

    A quick method of estimating the depletion performance of gas-condensate reservoirs is presented by graphical representations. The method is based on correlations reported in the literature and expresses recoverable liquid as a function of gas reserves, producing gas-oil ratio, and initial and final reservoir pressures. The amount of recoverable liquid reserves (RLR) under depletion conditions is estimated from an equation which is given, where the liquid reserves are in stock-tank barrels and the gas reserves in Mcf; the arbitrary constant N is calculated from one graphical representation by dividing fractional oil recovery by the initial gas-oil ratio and multiplying by 10^6 for convenience. An equation is given for estimating the coefficient C. These factors (N and C) can be determined from the graphical representations. An example calculation is included.

  13. Ring profiler: a new method for estimating tree-ring density for improved estimates of carbon storage

    Treesearch

    David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog

    2012-01-01

    Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...

  14. Estimation of Rice Crop Yields Using Random Forests in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, C. F.; Lin, H. S.; Nguyen, S. T.; Chen, C. R.

    2017-12-01

    Rice is globally one of the most important food crops, directly feeding more people than any other crop. Rice is not only an important commodity but also plays a critical role in the economy of Taiwan because it provides employment and income for large rural populations. The rice harvested area and production are thus monitored yearly through government initiatives. Agronomic planners need such information for more precise assessment of food production to tackle issues of national food security and policymaking. This study aimed to develop a machine-learning approach using physical parameters to estimate rice crop yields in Taiwan. We processed the data for the 2014 cropping seasons, following three main steps: (1) data pre-processing to construct input layers, including soil types and weather parameters (e.g., maximum and minimum air temperature, precipitation, and solar radiation) obtained from meteorological stations across the country; (2) crop yield estimation using random forests, owing to their merits: they can process thousands of variables, estimate missing data, maintain accuracy when a large proportion of the data is missing, overcome most over-fitting problems, and run fast and efficiently when handling large datasets; and (3) error verification. To execute the model, we separated the datasets into two groups of pixels: group-1 (70% of pixels) for training the model and group-2 (30% of pixels) for testing it. Once the model is trained to produce a small and stable out-of-bag error (i.e., the mean squared error between predicted and actual values), it can be used for estimating rice yields of cropping seasons. Comparison of the results of the random-forests-based regression with the actual yield statistics indicated that the root mean square error (RMSE) and mean absolute error (MAE) achieved for the first rice crop were respectively 6.2% and 2.7%, while those for the second rice crop were 5.3% and 2
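
    The random-forest regression step described above can be sketched as follows with scikit-learn; the feature construction and yield values are placeholders, with the 70/30 pixel split and out-of-bag error used as in the study design:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import mean_absolute_error, mean_squared_error
      from sklearn.model_selection import train_test_split

      # Hypothetical input layers: one row per pixel with soil-type code,
      # max/min air temperature, precipitation, and solar radiation.
      rng = np.random.default_rng(0)
      X = rng.random((5000, 5))
      y = 4000 + 800 * X[:, 1] - 300 * X[:, 3] + 50 * rng.standard_normal(5000)

      # 70% of pixels for training, 30% for testing.
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7,
                                                random_state=0)
      rf = RandomForestRegressor(n_estimators=500, oob_score=True,
                                 random_state=0)
      rf.fit(X_tr, y_tr)
      pred = rf.predict(X_te)
      print("OOB R^2 :", round(rf.oob_score_, 3))
      print("RMSE    :", round(mean_squared_error(y_te, pred) ** 0.5, 1))
      print("MAE     :", round(mean_absolute_error(y_te, pred), 1))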

  15. Disruption of State Estimation in the Human Lateral Cerebellum

    PubMed Central

    Miall, R. Chris; Christensen, Lars O. D; Cain, Owen; Stanley, James

    2007-01-01

    The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans were made to interrupt a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than they were in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate. PMID:18044990

  16. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello, who established a connection to optimal quantum cloning. An explicit condition on POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.
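
    For reference, the optimal mean fidelity discussed here has a well-known closed form, independent of the (pure) input state:

      \bar{F}_{\mathrm{opt}}(N, d) \;=\; \frac{N + 1}{N + d}

    so the fidelity approaches 1 as the number of copies N grows, and estimation becomes harder as the dimension d increases.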

  17. A UAV and S2A data-based estimation of the initial biomass of green algae in the South Yellow Sea.

    PubMed

    Xu, Fuxiang; Gao, Zhiqiang; Jiang, Xiaopeng; Shang, Weitao; Ning, Jicai; Song, Debin; Ai, Jinquan

    2018-03-01

    Previous studies have shown that the initial biomass of green tide in the Southern Yellow Sea is the green algae attached to Pyropia aquaculture rafts. In this study, green algae were identified with an unmanned aerial vehicle (UAV), and a biomass estimation model for green algae in the radial sand ridge area was proposed based on a Sentinel-2A (S2A) image and UAV images. The results showed that green algae were detected with high accuracy using the normalized green-red difference index (NGRDI); approximately 1340 tons and 700 tons of green algae were attached to rafts and raft ropes respectively, and this lower biomass might be the main cause of the smaller scale of the green tide in 2017. In addition, UAVs play an important role in monitoring raft-attached green algae, and long-term research on its biomass would provide a scientific basis for the control and forecast of green tides in the Yellow Sea. Copyright © 2018 Elsevier Ltd. All rights reserved.
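
    The NGRDI used for detection is a simple band ratio; a sketch of its computation and thresholding is given below, with the threshold value purely illustrative:

      import numpy as np

      def ngrdi(green, red):
          """Normalized green-red difference index: (G - R) / (G + R)."""
          g = green.astype(float)
          r = red.astype(float)
          return (g - r) / np.maximum(g + r, 1e-9)  # guard against 0/0

      # Hypothetical UAV green and red bands (8-bit digital numbers).
      rng = np.random.default_rng(0)
      green = rng.integers(0, 256, (512, 512))
      red = rng.integers(0, 256, (512, 512))
      algae_mask = ngrdi(green, red) > 0.05  # illustrative threshold
      print("algae pixel fraction:", algae_mask.mean())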

  18. Laser photogrammetry improves size and demographic estimates for whale sharks

    PubMed Central

    Richardson, Anthony J.; Prebble, Clare E.M.; Marshall, Andrea D.; Bennett, Michael B.; Weeks, Scarla J.; Cliff, Geremy; Wintner, Sabine P.; Pierce, Simon J.

    2015-01-01

    Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain size estimates of free-swimming whale sharks that are more accurate than visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432 to 917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420 to 990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks over periods of 347 to 1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure of these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776

  19. Optimal flight initiation distance.

    PubMed

    Cooper, William E; Frederick, William G

    2007-01-07

    Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
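
    The optimality logic can be illustrated with a toy calculation: choose the flight initiation distance d that maximizes expected fitness, here taken (for the model variant in which fitness gained is retained) as survival probability times initial fitness plus benefits minus escape costs. All functional forms below are illustrative assumptions, not the authors' parameterization:

      import numpy as np
      from scipy.optimize import minimize_scalar

      F0 = 1.0                                    # initial expected fitness
      benefit = lambda d: 0.3 * np.exp(-d / 20)   # gains forgone by fleeing early
      cost = lambda d: 0.05 + 0.001 * d           # cost of escaping from distance d
      p_kill = lambda d: np.exp(-d / 5)           # risk soars as predator closes in

      def neg_expected_fitness(d):
          return -(1 - p_kill(d)) * (F0 + benefit(d) - cost(d))

      res = minimize_scalar(neg_expected_fitness, bounds=(0.1, 100),
                            method="bounded")
      print(f"optimal flight initiation distance: {res.x:.1f} m")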

  20. Estimating satellite pose and motion parameters using a novelty filter and neural net tracker

    NASA Technical Reports Server (NTRS)

    Lee, Andrew J.; Casasent, David; Vermeulen, Pieter; Barnard, Etienne

    1989-01-01

    A system for determining the position, orientation, and motion of a satellite with respect to a robotic spacecraft using video data is advanced. This system utilizes two levels of pose and motion estimation: an initial system which provides coarse estimates of pose and motion, and a second system which uses the coarse estimates and further processing to provide finer pose and motion estimates. The present paper emphasizes the initial coarse pose and motion estimation subsystem. This subsystem utilizes novelty detection and filtering for locating novel parts and a neural net tracker to track these parts over time. Results of using this system on a sequence of images of a spin-stabilized satellite are presented.

  1. Simulating estimation of California fossil fuel and biosphere carbon dioxide exchanges combining in situ tower and satellite column observations

    DOE PAGES

    Fischer, Marc L.; Parazoo, Nicholas; Brophy, Kieran; ...

    2017-03-09

    Here, we report simulation experiments estimating the uncertainties in California regional fossil fuel and biosphere CO2 exchanges that might be obtained by using an atmospheric inverse modeling system driven by the combination of ground-based observations of radiocarbon and total CO2, together with column-mean CO2 observations from NASA's Orbiting Carbon Observatory (OCO-2). The work includes an initial examination of statistical uncertainties in prior models for CO2 exchange, in radiocarbon-based fossil fuel CO2 measurements, in OCO-2 measurements, and in a regional atmospheric transport modeling system. Using these nominal assumptions for measurement and model uncertainties, we find that flask measurements of radiocarbon and total CO2 at 10 towers can be used to distinguish between different fossil fuel emission data products for major urban regions of California. We then show that the combination of flask and OCO-2 observations yields posterior uncertainties in monthly-mean fossil fuel emissions of ~5–10%, levels likely useful for policy-relevant evaluation of bottom-up fossil fuel emission estimates. Similarly, we find that inversions yield uncertainties in monthly biosphere CO2 exchange of ~6–12%, depending on season, providing useful information on net carbon uptake in California's forests and agricultural lands. Finally, initial sensitivity analysis suggests that obtaining the above results requires control of systematic biases below approximately 0.5 ppm, placing requirements on accuracy of the atmospheric measurements, background subtraction, and atmospheric transport modeling.
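
    The scaling of posterior flux uncertainties can be illustrated with the standard linear Gaussian inversion formula; this is a generic sketch, not the authors' inversion system, and the matrix sizes and error levels are placeholders:

      import numpy as np

      # Hypothetical setup: 12 monthly flux parameters, 500 observations.
      rng = np.random.default_rng(0)
      n_flux, n_obs = 12, 500
      H = rng.standard_normal((n_obs, n_flux))   # transport Jacobian
      P0 = np.eye(n_flux) * 0.5**2               # prior covariance (50% 1-sigma)
      R = np.eye(n_obs) * 0.5**2                 # obs error covariance (~0.5 ppm)

      # Posterior covariance: P = (H^T R^-1 H + P0^-1)^-1
      P = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(P0))
      print("posterior 1-sigma flux uncertainties:", np.sqrt(np.diag(P)).round(3))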

  3. Estimating monthly streamflow values by cokriging

    USGS Publications Warehouse

    Solow, A.R.; Gorelick, S.M.

    1986-01-01

    Cokriging is applied to estimation of missing monthly streamflow values in three records from gaging stations in west central Virginia. Missing values are estimated from optimal consideration of the pattern of auto- and cross-correlation among standardized residual log-flow records. Investigation of the sensitivity of estimation to data configuration showed that when observations are available within two months of a missing value, estimation is improved by accounting for correlation. Concurrent and lag-one observations tend to screen the influence of other available observations. Three models of covariance structure in residual log-flow records are compared using cross-validation. Models differ in how much monthly variation they allow in covariance. Precision of estimation, reflected in mean squared error (MSE), proved to be insensitive to this choice. Cross-validation is suggested as a tool for choosing an inverse transformation when an initial nonlinear transformation is applied to flow values. © 1986 Plenum Publishing Corporation.

  4. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1994-01-01

    Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which consequently necessitates accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach: first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of increasingly complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the time of the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and then the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found. Another effort
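
    The "statistically based method which incorporates experimental data and a mathematical model" can be sketched with a nonlinear least-squares fit; below, a hypothetical transient experiment on a semi-infinite solid is used to estimate thermal diffusivity, with all geometry and property values assumed for illustration:

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erfc

      def temp_semi_infinite(t, alpha, x=0.005, T0=20.0, Ts=100.0):
          """Temperature at depth x [m] in a semi-infinite solid whose surface
          is stepped to Ts at t = 0; alpha is thermal diffusivity [m^2/s]."""
          return T0 + (Ts - T0) * erfc(x / (2.0 * np.sqrt(alpha * t)))

      # Synthetic data for a Pyrex-like sample (alpha ~ 6.5e-7 m^2/s).
      rng = np.random.default_rng(0)
      t = np.linspace(1.0, 300.0, 60)
      T_meas = temp_semi_infinite(t, 6.5e-7) + 0.2 * rng.standard_normal(t.size)

      popt, pcov = curve_fit(temp_semi_infinite, t, T_meas, p0=[1e-6])
      half_width = 1.96 * np.sqrt(pcov[0, 0])  # approximate 95% confidence region
      print(f"alpha = {popt[0]:.2e} +/- {half_width:.1e} m^2/s")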

  5. Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.

    PubMed

    Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth

    2018-03-26

    When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than in the MCS and Levitt staircase conditions, and the threshold estimates obtained by fitting a psychometric function to the QUEST data were also lower than those given by the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when thresholds were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to lapses in attentiveness, which has important implications for assessing the perception of children and clinical groups.
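
    A sketch of the post-hoc psychometric-function fitting that proved most robust, using a cumulative Gaussian with an explicit lapse-rate parameter; the data and bounds below are invented for illustration:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def psychometric(x, mu, sigma, lapse, guess=0.5):
          """2AFC psychometric function with guess rate 0.5 and a lapse rate."""
          return guess + (1 - guess - lapse) * norm.cdf(x, mu, sigma)

      def neg_log_lik(params, x, k, n):
          mu, sigma, lapse = params
          p = np.clip(psychometric(x, mu, sigma, lapse), 1e-6, 1 - 1e-6)
          return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

      # Hypothetical data: stimulus levels, trials per level, correct responses.
      x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
      n = np.full(6, 40)
      k = np.array([21, 24, 29, 34, 38, 38])

      res = minimize(neg_log_lik, x0=[2.0, 1.0, 0.02], args=(x, k, n),
                     bounds=[(0.1, 16), (0.1, 10), (0.0, 0.2)],
                     method="L-BFGS-B")
      mu, sigma, lapse = res.x
      print(f"threshold (mu) = {mu:.2f}, lapse rate = {lapse:.3f}")

    Because the lapse parameter absorbs attentional errors on easy trials, the fitted threshold is less contaminated than summary statistics such as staircase-reversal averages.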

  6. The utility of online panel surveys versus computer-assisted interviews in obtaining substance-use prevalence estimates in the Netherlands.

    PubMed

    Spijkerman, Renske; Knibbe, Ronald; Knoops, Kim; Van De Mheen, Dike; Van Den Eijnden, Regina

    2009-10-01

    Rather than using the traditional, costly method of personal interviews in a general population sample, substance-use prevalence rates can be derived more conveniently from data collected among members of an online access panel. To examine the utility of this method, we compared the outcomes of an online survey with those obtained with the computer-assisted personal interviews (CAPI) method. Data were gathered in the Netherlands from a large sample of online panellists and from a two-stage stratified sample of the Dutch population using the CAPI method. The online sample comprised 57,125 Dutch online panellists (aged 15-64 years) of Survey Sampling International LLC (SSI), and the CAPI cohort 7,204 respondents (aged 15-64 years). All participants answered identical questions about their use of alcohol, cannabis, ecstasy, cocaine, and performance-enhancing drugs. The CAPI respondents were additionally asked about internet access and online panel membership. Both data sets were weighted statistically according to the distribution of demographic characteristics of the general Dutch population. Response rates were 35.5% (n = 20,282) for the online panel cohort and 62.7% (n = 4,516) for the CAPI cohort. The data showed almost consistently lower substance-use prevalence rates for the CAPI respondents. Although the observed differences could be due to bias in both data sets, coverage and non-response bias were higher in the online panel survey. Despite its economic advantage, the online panel survey showed stronger non-response and coverage bias than the CAPI survey, leading to less reliable estimates of substance use in the general population. © 2009 The Authors. Journal compilation © 2009 Society for the Study of Addiction.

  7. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW, using SEER-SEM as an illustration of these principles where appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given, together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Obtaining consistency in code counting is addressed, as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data are obtained, a mapping from the SEER-SEM output into the JPL Work Breakdown Structure (WBS) is illustrated. For across-the-board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions with the SLOC, data description, brief mission description, and the most relevant SEER-SEM parameter values is given to encapsulate the used and calculated data involved in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. The system is implemented in the C Language Integrated Production System (CLIPS) and is addressed at the end of this paper.

  8. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  9. Initiation devices, initiation systems including initiation devices and related methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniels, Michael A.; Condit, Reston A.; Rasmussen, Nikki

    Initiation devices may include at least one substrate, an initiation element positioned on a first side of the at least one substrate, and a spark gap electrically coupled to the initiation element and positioned on a second side of the at least one substrate. Initiation devices may include a plurality of substrates where at least one substrate of the plurality of substrates is electrically connected to at least one adjacent substrate of the plurality of substrates with at least one via extending through the at least one substrate. Initiation systems may include such initiation devices. Methods of igniting energetic materials include passing a current through a spark gap formed on at least one substrate of the initiation device, passing the current through at least one via formed through the at least one substrate, and passing the current through an explosive bridge wire of the initiation device.

  10. Model parameter estimations from residual gravity anomalies due to simple-shaped sources using Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil

    2016-06-01

    An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively), and the shape factors (q and ƞ). The error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies were evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies were considered, and the results obtained show the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed for various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India), and a base metal sulphide deposit (Quebec, Canada), were considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm was also investigated by a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both synthetic and field data examples, the algorithm provided reliable parameter estimations being within the sampling limits of
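
    A minimal reproduction of the inversion scheme with SciPy's differential evolution (strategy "best1bin", matching DE/best/1/bin); the single-body forward model and all parameter values are illustrative:

      import numpy as np
      from scipy.optimize import differential_evolution

      def gravity_anomaly(x, A, x0, z0, q):
          """Residual gravity anomaly of an idealized buried body:
          g(x) = A * z0 / ((x - x0)^2 + z0^2)^q, with shape factor q
          (q = 1.5 for a sphere, 1.0 for a horizontal cylinder)."""
          return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

      rng = np.random.default_rng(0)
      x = np.linspace(-100.0, 100.0, 201)
      g_obs = gravity_anomaly(x, 5e4, 10.0, 25.0, 1.5) \
              + 0.5 * rng.standard_normal(x.size)

      def error_energy(p):
          return np.sum((g_obs - gravity_anomaly(x, *p)) ** 2)

      bounds = [(1e3, 1e6), (-50, 50), (1, 100), (0.5, 2.5)]
      res = differential_evolution(error_energy, bounds,
                                   strategy="best1bin", seed=0)
      A, x0, z0, q = res.x
      print(f"x0 = {x0:.1f}, z0 = {z0:.1f}, q = {q:.2f}")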

  11. A preliminary study of the application of HCMM satellite data to define initial and boundary conditions for numerical models: A case study in St. Louis, Missouri

    NASA Technical Reports Server (NTRS)

    Vukovich, F. M. (Principal Investigator)

    1982-01-01

    Infrared and visible HCMM data were used to examine the potential application of these data to define initial and boundary conditions for mesoscale numerical models. Various boundary layer models were used to calculate the distribution of the surface heat flux, specific humidity depression (the difference between the specific humidity in the air at approximately the 10 m level and the specific humidity at the ground), and the eddy viscosity in a 72 km by 72 km area centered about St. Louis, Missouri. Various aspects of the implications of the results for the meteorology of St. Louis are discussed. Overall, the results indicated that a reasonable estimate of the surface heat flux, urban albedo, ground temperature, and specific humidity depression can be obtained using HCMM satellite data. Values of the ground-specific humidity can be obtained if the distribution of the air-specific humidity is available. More research is required in estimating the absolute magnitude of the specific humidity depression because calculations may be sensitive to model parameters.

  12. Uniform gradient estimates on manifolds with a boundary and applications

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Juan; Thalmaier, Anton; Thompson, James

    2018-04-01

    We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.

  13. A method of estimating log weights.

    Treesearch

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
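
    A sketch of the weight estimate implied here, assuming Smalian's formula for log volume; all numbers are invented:

      import math

      def local_density_index(truck_weight_lb, truck_volume_ft3):
          """Density index (lb/ft^3) from a weighed and scaled truckload."""
          return truck_weight_lb / truck_volume_ft3

      def log_volume_smalian(d_small_ft, d_large_ft, length_ft):
          """Smalian's formula: average of the two end areas times length."""
          a1 = math.pi * (d_small_ft / 2) ** 2
          a2 = math.pi * (d_large_ft / 2) ** 2
          return (a1 + a2) / 2 * length_ft

      # Hypothetical load: 60,000 lb scaling 1,200 ft^3 -> 50 lb/ft^3.
      rho = local_density_index(60_000, 1_200)
      vol = log_volume_smalian(1.0, 1.4, 32.0)
      print(f"estimated log weight: {rho * vol:.0f} lb")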

  14. Reading Achievement across Three Language Groups: Growth Estimates for Overall Reading and Reading Subskills Obtained with the Early Childhood Longitudinal Survey

    ERIC Educational Resources Information Center

    Roberts, Greg; Mohammed, Sarojani S.; Vaughn, Sharon

    2010-01-01

    This study estimated normative reading trajectories for the population of English-proficient language minority students attending U.S. public elementary schools. Achievement of English-language learners (ELLs) was evaluated in terms of native English speakers' progress, and estimates were adjusted for the effects of socioeconomic status (SES). The…

  15. Estimation of temperature in micromaser-type systems

    NASA Astrophysics Data System (ADS)

    Farajollahi, B.; Jafarzadeh, M.; Rangani Jahromi, H.; Amniat-Talab, M.

    2018-06-01

    We address the estimation of the number of photons and temperature in a micromaser-type system with Fock state and thermal fields. We analyze the behavior of the quantum Fisher information (QFI) for both fields. In particular, we show that in the Fock state field model, the QFI for a non-entangled initial state of the atoms increases monotonically with time, while for an entangled initial state of the atoms it shows oscillatory behavior, leading to non-Markovian dynamics. Moreover, it is observed that the QFI, entropy of entanglement, and fidelity exhibit collapse-and-revival behavior. Focusing on each period in which the collapses and revivals occur, we see that the optimal points of the QFI and entanglement coincide. In addition, when the evolved-state fidelity of one of the subsystems becomes maximal, the QFI also achieves its maximum. We also address the evolved fidelity versus the initial state as a good witness of non-Markovianity. Moreover, we find that the entropy of the composite system can be used as a witness of non-Markovian evolution of the subsystems. For the thermal field model, we similarly investigate the relation among the QFI associated with the temperature, the von Neumann entropy, and the fidelity. In particular, it is found that at the instants when the maximum values of the QFI are achieved, the entanglement between the two-qubit system and the environment is maximized while the entanglement between the probe and its environment is minimized. Moreover, we show that the thermometry may lead to optimal estimation of practical temperatures. Extending our computation to the two-qubit system, we find that using a two-qubit probe generally leads to more effective estimation than the one-qubit scenario. Finally, we show that initial state entanglement plays a key role in the advent of non-Markovianity and the determination of its strength in the composite system and its subsystems.

  16. 32 CFR 701.30 - Initial Denial Authority (IDA).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Initial Denial Authority (IDA). 701.30 Section 701.30 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY... geographical areas of responsibility or chain of command; fees; to review a fee estimate; and to confirm that...

  17. 32 CFR 701.30 - Initial Denial Authority (IDA).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 5 2012-07-01 2012-07-01 false Initial Denial Authority (IDA). 701.30 Section 701.30 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY... geographical areas of responsibility or chain of command; fees; to review a fee estimate; and to confirm that...

  18. 32 CFR 701.30 - Initial Denial Authority (IDA).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 5 2011-07-01 2011-07-01 false Initial Denial Authority (IDA). 701.30 Section 701.30 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY... geographical areas of responsibility or chain of command; fees; to review a fee estimate; and to confirm that...

  19. 32 CFR 701.30 - Initial Denial Authority (IDA).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false Initial Denial Authority (IDA). 701.30 Section 701.30 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY... geographical areas of responsibility or chain of command; fees; to review a fee estimate; and to confirm that...

  20. Improved Critical Eigenfunction Restriction Estimates on Riemannian Surfaces with Nonpositive Curvature

    NASA Astrophysics Data System (ADS)

    Xi, Yakun; Zhang, Cheng

    2017-03-01

    We show that one can obtain improved L^4 geodesic restriction estimates for eigenfunctions on compact Riemannian surfaces with nonpositive curvature. We achieve this by adapting Sogge's strategy in (Improved critical eigenfunction estimates on manifolds of nonpositive curvature, Preprint). We first combine the improved L^2 restriction estimate of Blair and Sogge (Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint) and the classical improved L^∞ estimate of Bérard to obtain an improved weak-type L^4 restriction estimate. We then upgrade this weak estimate to a strong one by using the improved Lorentz space estimate of Bak and Seeger (Math Res Lett 18(4):767-781, 2011). This estimate improves the L^4 restriction estimate of Burq et al. (Duke Math J 138:445-486, 2007) and Hu (Forum Math 6:1021-1052, 2009) by a power of (log log λ)^{-1}. Moreover, in the case of compact hyperbolic surfaces, we obtain further improvements in terms of (log λ)^{-1} by applying the ideas from (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) and (Blair and Sogge, Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint). We are able to compute various constants that appeared in (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) explicitly, by proving detailed oscillatory integral estimates and lifting calculations to the universal cover H^2.

  1. Estimation of the Vertical Distribution of Radiocesium in Soil on the Basis of the Characteristics of Gamma-Ray Spectra Obtained via Aerial Radiation Monitoring Using an Unmanned Helicopter.

    PubMed

    Ochi, Kotaro; Sasaki, Miyuki; Ishida, Mutsushi; Hamamoto, Shoichiro; Nishimura, Taku; Sanada, Yukihisa

    2017-08-17

    After the Fukushima Daiichi Nuclear Power Plant accident, the vertical distribution of radiocesium in soil has been investigated to better understand the behavior of radiocesium in the environment. The typical method used for measuring the vertical distribution of radiocesium is troublesome because it requires collection and measurement of the activity of soil samples. In this study, we established a method of estimating the vertical distribution of radiocesium by focusing on the characteristics of gamma-ray spectra obtained via aerial radiation monitoring using an unmanned helicopter. The estimates are based on actual measurement data collected at an extended farm. In this method, the change in the ratio of direct gamma rays to scattered gamma rays at various depths in the soil is utilized to quantify the vertical distribution of radiocesium. The results show a positive correlation between this ratio and the actual vertical distribution of radiocesium measured in the soil samples. A vertical distribution map was created on the basis of this ratio using a simple equation derived from the abovementioned correlation. This technique can provide a novel approach for the effective selection of high-priority areas that require decontamination.

  3. Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels

    NASA Astrophysics Data System (ADS)

    Fusco, Tilde; Petrella, Angelo; Tanda, Mario

    2009-12-01

    The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed-form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed-form solutions, a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for the AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.

  4. Novel methods to estimate the enantiomeric ratio and the kinetic parameters of enantiospecific enzymatic reactions.

    PubMed

    Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.

    2001-03-08

    The enantiomeric ratio (E) of an enzyme acting as a specific catalyst in the resolution of enantiomers is an important parameter in the quantitative description of chiral resolution processes. In the present work, two novel methods, hereafter called Method I and Method II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental" data and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km, and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
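
    One plausible implementation of a Method II-style fit (the enantiomers not available in pure form) uses the standard competitive Michaelis-Menten rate law for a mixture; whether this matches the authors' exact rate expression is an assumption, and all data below are synthetic:

      import numpy as np
      from scipy.optimize import curve_fit

      def rate(CX, Vr, Kr, Vs, Ks):
          """Initial rate for two enantiomers competing for one active site
          (standard two-substrate competitive Michaelis-Menten form).
          CX stacks total concentration C and mole fraction x of enantiomer R."""
          C, x = CX
          R, S = x * C, (1 - x) * C
          return (Vr * R / Kr + Vs * S / Ks) / (1 + R / Kr + S / Ks)

      rng = np.random.default_rng(0)
      C = np.tile([0.5, 1.0, 2.0, 5.0, 10.0], 3)
      x = np.repeat([0.25, 0.5, 0.75], 5)
      v = rate((C, x), 10.0, 1.0, 2.0, 4.0) * (1 + 0.02 * rng.standard_normal(15))

      (Vr, Kr, Vs, Ks), _ = curve_fit(rate, (C, x), v, p0=[5, 1, 5, 1])
      E = (Vr / Kr) / (Vs / Ks)  # enantiomeric ratio of specificity constants
      print(f"estimated E = {E:.1f}")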

  5. Probability based remaining capacity estimation using data-driven and neural network model

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai

    2016-05-01

    Since lithium-ion batteries are assembled into packs in large numbers and are complex electrochemical devices, their monitoring and safety are key issues for the application of battery technology. An accurate estimation of battery remaining capacity is crucial for optimization of vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent-circuit model is employed in combination with an electrochemical model to obtain more accurate voltage prediction results. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operating current profiles were performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
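
    A stripped-down sketch of the model-based half of such an estimator: a first-order RC equivalent-circuit model with coulomb-counting SOC, the usual starting point before adding the adaptive/probabilistic corrections the paper describes. All parameter values are invented:

      import numpy as np

      def simulate_rc_model(i, dt, Q_ah, R0, R1, C1, soc0=1.0):
          """First-order RC battery model: coulomb-counted SOC and terminal
          voltage V = OCV(SOC) - i*R0 - v1, with v1 the RC polarization."""
          ocv = lambda s: 3.2 + 0.9 * s          # toy open-circuit-voltage curve
          decay = np.exp(-dt / (R1 * C1))
          soc, v1, out = soc0, 0.0, []
          for ik in i:
              soc -= ik * dt / (Q_ah * 3600.0)   # coulomb counting
              v1 = v1 * decay + ik * R1 * (1 - decay)
              out.append((soc, ocv(soc) - ik * R0 - v1))
          return np.array(out)

      # Hypothetical 2.5 Ah cell under a 1 A discharge for 10 minutes.
      trace = simulate_rc_model(i=np.full(600, 1.0), dt=1.0, Q_ah=2.5,
                                R0=0.05, R1=0.02, C1=2000.0)
      print("final SOC:", trace[-1, 0].round(3), " V:", trace[-1, 1].round(3))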

  6. Genetic and environmental influences on cannabis use initiation and problematic use: a meta-analysis of twin studies

    PubMed Central

    Verweij, Karin J.H.; Zietsch, Brendan P.; Lynskey, Michael T.; Medland, Sarah E.; Neale, Michael C.; Martin, Nicholas G.; Boomsma, Dorret I.; Vink, Jacqueline M.

    2009-01-01

    Background Because cannabis use is associated with social, physical and psychological problems, it is important to know what causes some individuals to initiate cannabis use and a subset of those to become problematic users. Previous twin studies found evidence for both genetic and environmental influences on vulnerability, but due to considerable variation in the results it is difficult to draw clear conclusions regarding the relative magnitude of these influences. Method A systematic literature search identified 28 twin studies on cannabis use initiation and 24 studies on problematic cannabis use. The proportion of total variance accounted for by genes (A), shared environment (C), and unshared environment (E) in (1) initiation of cannabis use and (2) problematic cannabis use was calculated by averaging corresponding A, C, and E estimates across studies from independent cohorts and weighting by sample size. Results For cannabis use initiation, A, C, and E estimates were 48%, 25% and 27% in males and 40%, 39% and 21% in females. For problematic cannabis use A, C, and E estimates were 51%, 20% and 29% for males and 59%, 15% and 26% for females. Confidence intervals of these estimates are considerably narrower than those in the source studies. Conclusions Our results indicate that vulnerability to both cannabis use initiation and problematic use was significantly influenced by A, C, and E. There was a trend for a greater C and lesser A component for cannabis initiation as compared to problematic use for females. PMID:20402985
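
    The pooling step described in the Method section amounts to a sample-size-weighted average of per-study estimates; a sketch with made-up numbers:

      import numpy as np

      # Hypothetical per-study ACE estimates (cannabis initiation, males)
      # and study sample sizes used as weights.
      A = np.array([0.52, 0.44, 0.49])
      C = np.array([0.20, 0.30, 0.24])
      E = np.array([0.28, 0.26, 0.27])
      n = np.array([1200, 3400, 800])

      w = n / n.sum()
      print("weighted A, C, E:", np.round([w @ A, w @ C, w @ E], 3))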

  7. STATEQ: a nonlinear least-squares code for obtaining Martin thermodynamic representations of fluids in the gaseous and dense gaseous regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milora, S. L.

    1976-02-01

    The use of the code NLIN (IBM Share Program No. 1428) to obtain empirical thermodynamic pressure-volume-temperature (P-V-T) relationships for substances in the gaseous and dense gaseous states is described. When sufficient experimental data exist, the code STATEQ will provide least-squares estimates for the 21 parameters of the Martin model. Another code, APPROX, is described which also obtains parameter estimates for the model by making use of the approximate generalized behavior of fluids. Use of the codes is illustrated in obtaining thermodynamic representations for isobutane.

  8. Methods for estimating missing human skeletal element osteometric dimensions employed in the revised fully technique for estimating stature.

    PubMed

    Auerbach, Benjamin M

    2011-05-01

    One of the greatest limitations to the application of the revised Fully anatomical stature estimation method is the inability to measure some of the skeletal elements required in its calculation. These element dimensions cannot be obtained due to taphonomic factors, incomplete excavation, or disease processes, and result in missing data. This study examines methods of imputing these missing dimensions using observable Fully measurements from the skeleton, and the accuracy of incorporating these estimated dimensions into anatomical stature reconstruction. These are further assessed against stature estimates obtained from mathematical regression formulae for the lower limb bones (femur and tibia). Two thousand seven hundred and seventeen North and South American indigenous skeletons were measured, and subsets of these with observable Fully dimensions were used to simulate missing elements and create estimation methods and equations. Comparisons were made directly between anatomically reconstructed statures and mathematically derived statures, as well as with anatomically derived statures incorporating imputed missing dimensions. These analyses demonstrate that, while mathematical stature estimates are more accurate, anatomical statures incorporating missing dimensions are not appreciably less accurate and are more precise. The anatomical stature estimation method using imputed missing dimensions is therefore supported. Missing element estimation, however, is limited to the vertebral column (only when lumbar vertebrae are present) and to talocalcaneal height (only when femora and tibiae are present). Crania, entire vertebral columns, and femoral or tibial lengths cannot be reliably estimated. The applicability of these methods is discussed further. Copyright © 2011 Wiley-Liss, Inc.

  9. Age estimation of burbot using pectoral fin rays, branchiostegal rays, and otoliths

    USGS Publications Warehouse

    Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.

    2014-01-01

    Throughout much of its native distribution, burbot (Lota lota) is a species of conservation concern. Understanding dynamic rate functions is critical for the effective management of sensitive burbot populations, which necessitates accurate and precise age estimates, ideally obtained from a non-lethal structure. In an effort to identify a non-lethal ageing structure, we compared the precision of age estimates obtained from otoliths, pectoral fin rays, dorsal fin rays, and branchiostegal rays from 208 burbot collected from the Green River drainage, Wyoming. Additionally, we compared the accuracy of age estimates from pectoral fin rays and branchiostegal rays to those of otoliths. Dorsal fin rays were immediately deemed a poor ageing structure and removed from further analysis. Exact agreement between readers and reader confidence were highest for otoliths and lowest for branchiostegal rays. Age-bias plots indicated that age estimates obtained from branchiostegal rays and pectoral fin rays differed substantially from those obtained from otoliths. Our results indicate that otoliths provide the most precise age estimates for burbot.

  10. Estimation of delays and other parameters in nonlinear functional differential equations

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Lamm, P. K. D.

    1983-01-01

    A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.

  11. Wind estimates from cloud motions: Phase 1 of an in situ aircraft verification experiment

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Shenk, W. E.; Skillman, W.

    1974-01-01

    An initial experiment was conducted to verify geostationary satellite derived cloud motion wind estimates with in situ aircraft wind velocity measurements. Case histories of one-half hour to two hours were obtained for 3-10km diameter cumulus cloud systems on 6 days. Also, one cirrus cloud case was obtained. In most cases the clouds were discrete enough that both the cloud motion and the ambient wind could be measured with the same aircraft Inertial Navigation System (INS). Since the INS drift error is the same for both the cloud motion and wind measurements, the drift error subtracts out of the relative motion determinations. The magnitude of the vector difference between the cloud motion and the ambient wind at the cloud base averaged 1.2 m/sec. The wind vector at higher levels in the cloud layer differed by about 3 m/sec to 5 m/sec from the cloud motion vector.

  12. Initial-boundary value problem to 2D Boussinesq equations for MHD convection with stratification effects

    NASA Astrophysics Data System (ADS)

    Bian, Dongfen; Liu, Jitao

    2017-12-01

    This paper is concerned with the initial-boundary value problem for the 2D magnetohydrodynamics-Boussinesq system with temperature-dependent viscosity, thermal diffusivity, and electrical conductivity. First, we establish global weak solutions under minimal assumptions on the initial data. Then, by imposing a higher regularity assumption on the initial data, we obtain a unique global strong solution. Moreover, exponential decay rates are obtained for the weak solutions and the strong solution, respectively.

  13. AFRL Ludwieg Tube Initial Performance

    DTIC Science & Technology

    2017-11-01

    The Air Force Research Laboratory has developed and constructed a Ludwieg tube wind tunnel for hypersonic experimental research. This wind tunnel is now operational and its initial performance has been characterized. The time between valve opening and the first expansion wave reflection was 100 ms, as expected, and about 80 ms of quasi-steady pressure was obtained after the valve opening.

  14. Initial home health outcomes under prospective payment.

    PubMed

    Schlenker, Robert E; Powell, Martha C; Goodrich, Glenn K

    2005-02-01

    The objective was to assess initial changes in home health patient outcomes under Medicare's home health Prospective Payment System (PPS), implemented by the Centers for Medicare and Medicaid Services (CMS) in October 2000. Pre-PPS and early PPS data were obtained from CMS Outcome and Assessment Information Set (OASIS) and Medicare claims files. Regression analysis was applied to national random samples (n=164,810) to estimate pre-PPS/PPS outcome and visit-per-episode changes. Outcome episodes were constructed from OASIS data and linked with Medicare claims data on visits. Outcome changes (risk adjusted) were mixed and generally modest. Favorable changes included higher improvement rates under PPS for functioning and dyspnea, higher community discharge rates, and lower hospitalization and emergent care rates. Most stabilization (nonworsening) outcome rates also increased. However, improvement rates were lower under PPS for wounds, incontinence, and cognitive and emotional/behavioral outcomes. Total visits per episode (case-mix adjusted) declined 16.6 percent, although therapy visits increased by 8.4 percent. The outcome and visit results suggest improved system efficiency under PPS (fewer visits, similar outcomes). However, declines in several improvement rates merit ongoing monitoring, as do subsequent (post-home health) hospitalization and emergent care use. Since only the early PPS period was examined, longer-term analyses are needed.

  15. Optimal estimation of the optomechanical coupling strength

    NASA Astrophysics Data System (ADS)

    Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André

    2018-06-01

    We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.

  16. Improved depth estimation with the light field camera

    NASA Astrophysics Data System (ADS)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. In addition, Lytro, Inc. provides a depth estimate from a single-shot capture with a light-field camera such as the Lytro Illum. This Lytro depth map contains largely correct depth information and can be used to obtain a higher-quality estimate. In this paper, we present a simple and principled algorithm that computes dense depth estimates by combining defocus, correspondence, and Lytro depth estimates. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps: defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from the EPIs. The Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light-field display.
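
    As an illustration of the two EPI cues described above, the following Python sketch scores candidate shear slopes on a single toy EPI: the defocus response is the sharpness of the angularly integrated line, and the correspondence response is the angular variance. The function, array shapes, and global (per-EPI) scoring are illustrative assumptions; the paper computes per-pixel responses and then fuses the cues.

        import numpy as np

        def epi_depth_cues(epi, shears):
            """Score candidate depths (shear slopes) on one 2D EPI.

            epi    : (n_views, n_pixels) array; rows are angular samples.
            shears : candidate slopes (pixels per view); the best-scoring
                     shear maps to a depth via the camera geometry.
            """
            n_views, n_px = epi.shape
            centre = (n_views - 1) / 2.0
            x = np.arange(n_px)
            defocus = np.empty(len(shears))
            corresp = np.empty(len(shears))
            for i, s in enumerate(shears):
                # Shear the EPI so lines of slope s become vertical.
                sheared = np.stack([np.interp(x + s * (v - centre), x, epi[v])
                                    for v in range(n_views)])
                mean_line = sheared.mean(axis=0)          # angular integration
                # Defocus cue: spatial gradient of the integrated line, maximal
                # at the correct shear, where the refocused line is sharpest.
                defocus[i] = np.abs(np.gradient(mean_line)).mean()
                # Correspondence cue: angular variance, minimal at the correct
                # shear, where all views agree.
                corresp[i] = sheared.var(axis=0).mean()
            return shears[np.argmax(defocus)], shears[np.argmin(corresp)]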

  17. Estimating sediment discharge: Appendix D

    USGS Publications Warehouse

    Gray, John R.; Simões, Francisco J. M.

    2008-01-01

    Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with

  18. Resolvent estimates in homogenisation of periodic problems of fractional elasticity

    NASA Astrophysics Data System (ADS)

    Cherednichenko, Kirill; Waurick, Marcus

    2018-03-01

    We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.

  19. Addison disease - diagnosis and initial management.

    PubMed

    O'Connell, Susan; Siafarikas, Aris

    2010-11-01

    Adrenal insufficiency is a rare disease caused either by primary adrenal failure (Addison disease) or by impairment of the hypothalamic-pituitary-adrenal axis. Steroid replacement therapy normalises quality of life; however, adherence can be problematic. This article provides information on adrenal insufficiency, focusing on awareness of initial symptoms, risk scenarios, emergency management and baseline investigations, complete investigations, and long term management. Early recognition of adrenal insufficiency is essential to avoid associated morbidity and mortality. Initial diagnosis and the decision to treat are based on history and physical examination. Appropriate management includes emergency resuscitation and steroid administration. Initial investigations can include sodium, potassium and blood glucose levels. However, complete investigations can be deferred. Specialist advice should be obtained, and long term management includes a Team Care Arrangement. For patients, an emergency plan and emergency identification are essential.

  20. Stroke as the Initial Manifestation of Atrial Fibrillation: The Framingham Heart Study.

    PubMed

    Lubitz, Steven A; Yin, Xiaoyan; McManus, David D; Weng, Lu-Chen; Aparicio, Hugo J; Walkey, Allan J; Rafael Romero, Jose; Kase, Carlos S; Ellinor, Patrick T; Wolf, Philip A; Seshadri, Sudha; Benjamin, Emelia J

    2017-02-01

    To prevent strokes that may occur as the first manifestation of atrial fibrillation (AF), screening programs have been proposed to identify patients with undiagnosed AF who may be eligible for treatment with anticoagulation. However, the frequency with which patients with AF present with stroke as the initial manifestation of the arrhythmia is unknown. We estimated the frequency with which AF may present as a stroke in 1809 community-based Framingham Heart Study participants with first-detected AF and without previous strokes, by tabulating the frequencies of strokes occurring on the same day, within 30 days before, 90 days before, and 365 days before first-detected AF. Using previously reported AF incidence rates, we estimated the incidence of strokes that may represent the initial manifestation of AF. We observed 87 strokes that occurred ≤1 year before AF detection, corresponding to 1.7% on the same day, 3.4% within 30 days before, 3.7% within 90 days before, and 4.8% ≤1 year before AF detection. We estimated that strokes may present as the initial manifestation of AF at a rate of 2 to 5 per 10 000 person-years, in both men and women. We observed that stroke is an uncommon but measurable presenting feature of AF. Our data imply that emphasizing the cost-effectiveness of population-wide AF-screening efforts will be important given the relative infrequency with which stroke represents the initial manifestation of AF. © 2017 American Heart Association, Inc.

  1. State estimation for autonomous flight in cluttered environments

    NASA Astrophysics Data System (ADS)

    Langelaan, Jacob Willem

    Safe, autonomous operation in complex, cluttered environments is a critical challenge facing autonomous mobile systems. The research described in this dissertation was motivated by a particularly difficult example of autonomous mobility: flight of a small Unmanned Aerial Vehicle (UAV) through a forest. In cluttered environments (such as forests or natural and urban canyons) signals from navigation beacons such as GPS may frequently be occluded. Direct measurements of vehicle position are therefore unavailable, and information required for flight control, obstacle avoidance, and navigation must be obtained using only on-board sensors. However, payload limitations of small UAVs restrict both the mass and physical dimensions of sensors that can be carried. This dissertation describes the development and proof-of-concept demonstration of a navigation system that uses only a low-cost inertial measurement unit and a monocular camera. Micro-electromechanical inertial measurement units are well suited to small UAV applications and provide measurements of acceleration and angular rate. However, they do not provide information about nearby obstacles (needed for collision avoidance) and their noise and bias characteristics lead to unbounded growth in computed position. A monocular camera can provide bearings to nearby obstacles and landmarks. These bearings can be used both to enable obstacle avoidance and to aid navigation. Presented here is a solution to the problem of estimating vehicle state (position, orientation and velocity) as well as positions of obstacles in the environment using only inertial measurements and bearings to obstacles. This is a highly nonlinear estimation problem, and standard estimation techniques such as the Extended Kalman Filter are prone to divergence in this application. In this dissertation a Sigma Point Kalman Filter is implemented, resulting in an estimator which is able to cope with the significant nonlinearities in the system equations and
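
    The dissertation's estimator is a Sigma Point Kalman Filter; as a rough sketch of its core building block, the following Python function propagates a state mean and covariance through a nonlinear function using the standard scaled sigma-point set (the unscented transform). The parameter defaults are conventional choices, not values taken from the dissertation.

        import numpy as np

        def unscented_transform(x, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
            """Propagate mean x and covariance P through a nonlinearity f
            using 2n+1 scaled sigma points; a full filter would add process
            noise and a measurement update around this step."""
            n = x.size
            lam = alpha**2 * (n + kappa) - n
            L = np.linalg.cholesky((n + lam) * P)       # matrix square root
            sigma = np.vstack([x, x + L.T, x - L.T])    # rows are sigma points
            wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
            wc = wm.copy()
            wm[0] = lam / (n + lam)
            wc[0] = wm[0] + (1.0 - alpha**2 + beta)
            Y = np.array([f(p) for p in sigma])         # push points through f
            y = wm @ Y
            Pyy = (wc[:, None] * (Y - y)).T @ (Y - y)
            return y, Pyy

    Because only evaluations of f are needed, nothing is linearized, which is what lets the sigma-point filter cope with nonlinearities that can destabilize the Extended Kalman Filter.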

  2. Charging conditions research to increase the initial projected velocity at different initial charge temperatures

    NASA Astrophysics Data System (ADS)

    Ishchenko, Aleksandr; Burkin, Viktor; Kasimov, Vladimir; Samorokova, Nina; Zykova, Angelica; Diachkovskii, Alexei

    2017-11-01

    The problems of the defense industry occupy an important place in the constantly developing modern world. The development of defense technology does not stop, nor do studies of internal ballistics. Scientists worldwide face the task of managing the main characteristics of a ballistic experiment: the maximum pressure in the combustion chamber, Pmax, and the projectile velocity at the moment of leaving the barrel, UM. In this work, the combustion law of a new high-energy fuel was determined in ballistic experiments at different initial temperatures. This combustion law was used in a parametric study of the dependence of Pmax and UM on the powder charge mass and on a traveling charge. Optimal loading conditions were obtained for increasing the muzzle velocity at pressures up to 600 MPa for different initial temperatures. In this paper, one of the most promising throwing schemes is considered, together with a method for increasing the muzzle velocity of the projected element to 3317 m/s.

  3. Estimation of age at death from the pubic symphysis and the auricular surface of the ilium using a smoothing procedure.

    PubMed

    Martins, Rui; Oliveira, Paulo Eduardo; Schmitt, Aurore

    2012-06-10

    We discuss here the estimation of age at death from two indicators (the pubic symphysis and the sacro-pelvic surface of the ilium) based on four osteological series from Portugal, Great Britain, South Africa, and the USA (of European origin). These samples and the scoring system of the two indicators were used by Schmitt et al. (2002), applying the methodology proposed by Lucy et al. (1996). In the present work, the same data were processed using a modification of the empirical method proposed by Lucy et al. (2002). The various probability distributions are estimated from training data using kernel density procedures and the jackknife methodology. Bayes's theorem is then used to produce the posterior distribution from which point and interval estimates may be made. This statistical approach reduces the bias of the estimates to less than 70% of that obtained by the initial method (down to 52% when the sex of the individual is known) and produces an age estimate for all individuals, improving age-at-death assessment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
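
    A minimal sketch of the statistical machinery described, assuming a Gaussian kernel with fixed bandwidth and a uniform prior over the age grid; the published method additionally uses jackknife procedures, a specific bandwidth choice, and multiple indicators, none of which are reproduced here.

        import numpy as np

        def posterior_age(stage_obs, ages_train, stages_train, age_grid, bw=5.0):
            """Posterior age distribution given one observed indicator stage,
            via a kernel density estimate of the ages seen at that stage."""
            ages_s = ages_train[stages_train == stage_obs]
            diffs = (age_grid[:, None] - ages_s[None, :]) / bw
            dens = np.exp(-0.5 * diffs**2).sum(axis=1)     # Gaussian kernels
            dens /= len(ages_s) * bw * np.sqrt(2.0 * np.pi)
            post = dens / np.trapz(dens, age_grid)         # uniform prior cancels
            # Point estimate (posterior mean); interval estimates follow
            # directly from the posterior distribution `post`.
            mean_age = np.trapz(age_grid * post, age_grid)
            return post, mean_age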

  4. A method of recovering the initial vectors of globally coupled map lattices based on symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin

    2011-12-01

    Based on symbolic dynamics, a computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not every inverse chaotic mapping function satisfies the contraction-mapping condition. It is found that the values in phase space do not always converge on their initial values under sufficient backward iteration of the symbolic vectors, in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Simulation results and the performance of the initial-vector estimation at different signal-to-noise ratios (SNRs) are provided to validate the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental support for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.
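
    To make the backward-iteration idea concrete for a single, uncoupled map, the sketch below recovers the initial value of the fully chaotic logistic map from its symbolic sequence; the coupling between lattice sites treated in the paper is omitted, so this only illustrates the contraction argument.

        import numpy as np

        def recover_initial(symbols, x_end_guess):
            """Recover x0 of x_{n+1} = 4 x_n (1 - x_n) by backward iteration.
            symbols[n] is 0 if x_n < 0.5 and 1 otherwise; the inverse branch
            x_n = (1 -/+ sqrt(1 - x_{n+1})) / 2 is chosen by the symbol, and
            backward iteration contracts toward the true initial value."""
            x = x_end_guess
            for s in reversed(symbols):
                root = np.sqrt(max(0.0, 1.0 - x))
                x = (1.0 - root) / 2.0 if s == 0 else (1.0 + root) / 2.0
            return x

        # Forward simulation to produce symbols, then recovery from a poor guess.
        x0, x, symbols = 0.3141592, 0.3141592, []
        for _ in range(60):
            symbols.append(0 if x < 0.5 else 1)
            x = 4.0 * x * (1.0 - x)
        print(abs(recover_initial(symbols, 0.5) - x0))   # ~1e-16 after 60 steps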

  5. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Cancer.gov

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  6. A stochastic automata network for earthquake simulation and hazard estimation

    NASA Astrophysics Data System (ADS)

    Belubekian, Maya Ernest

    1998-11-01

    This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches for studying earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses the results from dislocation theory for the estimation of slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for the small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after the largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it in the period of a long seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses a statistical method called bootstrap for the evaluation of the confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto

  7. Metal Accretion onto White Dwarfs. I. The Approximate Approach Based on Estimates of Diffusion Timescales

    NASA Astrophysics Data System (ADS)

    Fontaine, G.; Brassard, P.; Dufour, P.; Tremblay, P.-E.

    2015-06-01

    The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. Some important insights into the process may be derived using an approximate approach which combines static stellar models with estimates of diffusion timescales at the base of the outer convection zone or, in its absence, at the photosphere. Until recently, and to our knowledge, values of diffusion timescales in white dwarfs have all been obtained on the basis of the same physics as that developed initially by Paquette et al., including their diffusion coefficients and thermal diffusion coefficients. In view of the recent exciting discoveries of a plethora of metals (including some never seen before) polluting the atmospheres of an increasing number of cool white dwarfs, we felt that a new look at the estimates of settling timescales would be worthwhile. We thus provide improved estimates of diffusion timescales for all 27 elements from Li to Cu in the periodic table in a wide range of the surface gravity-effective temperature domain and for both DA and non-DA stars.

  8. Initial conditions in high-energy collisions

    NASA Astrophysics Data System (ADS)

    Petreska, Elena

    This thesis is focused on the initial stages of high-energy collisions in the saturation regime. We start by extending the McLerran-Venugopalan distribution of color sources in the initial wave-function of nuclei in heavy-ion collisions. We derive a fourth-order operator in the action and discuss its relevance for the description of color charge distributions in protons in high-energy experiments. We calculate the dipole scattering amplitude in proton-proton collisions with the quartic action and find an agreement with experimental data. We also obtain a modification to the fluctuation parameter of the negative binomial distribution of particle multiplicities in proton-proton experiments. The result implies an advancement of the fourth-order action towards Gaussian when the energy is increased. Finally, we calculate perturbatively the expectation value of the magnetic Wilson loop operator in the first moments of heavy-ion collisions. For the magnetic flux we obtain a first non-trivial term that is proportional to the square of the area of the loop. The result is close to numerical calculations for small area loops.

  9. Permittivity and conductivity parameter estimations using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.

    2018-04-01

    Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy to estimate quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses time-domain Full Waveform Inversion (FWI) of 2D GPR data to obtain highly resolved images of the permittivity and conductivity of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on computing the full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.
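
    The iterative loop described (evaluate misfit, update model, repeat) can be sketched generically as follows; `forward` stands in for the electromagnetic propagator, and the finite-difference gradient is a toy-scale placeholder for the adjoint-state gradient used in practical FWI. All names and step sizes are illustrative assumptions.

        import numpy as np

        def fwi(d_obs, forward, m0, n_iter=50, step=1e-2, eps=1e-4):
            """Least-squares FWI by gradient descent on the model vector m."""
            m = m0.copy()
            for _ in range(n_iter):
                res = forward(m) - d_obs               # data misfit
                cost = 0.5 * np.sum(res**2)
                grad = np.empty_like(m)
                for i in range(m.size):                # FD gradient: toy sizes only
                    mp = m.copy()
                    mp[i] += eps
                    grad[i] = (0.5 * np.sum((forward(mp) - d_obs)**2) - cost) / eps
                m -= step * grad / (np.abs(grad).max() + 1e-12)
            return m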

  10. Are Low-order Covariance Estimates Useful in Error Analyses?

    NASA Astrophysics Data System (ADS)

    Baker, D. F.; Schimel, D.

    2005-12-01

    Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner et al. (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? Here we compare uncertainties and `information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb

  11. Mortality estimation from carcass searches using the R-package carcass: a tutorial

    USGS Publications Warehouse

    Korner-Nievergelt, Fränzi; Behr, Oliver; Brinkmann, Robert; Etterson, Matthew A.; Huso, Manuela M. P.; Dalthorp, Daniel; Korner-Nievergelt, Pius; Roth, Tobias; Niermann, Ivo

    2015-01-01

    This article is a tutorial for the R-package carcass. It starts with a short overview of common methods used to estimate mortality based on carcass searches. Then, it guides step by step through a simple example. First, the proportion of animals that fall into the search area is estimated. Second, carcass persistence time is estimated based on experimental data. Third, searcher efficiency is estimated. Fourth, these three estimated parameters are combined to obtain the probability that an animal killed is found by an observer. Finally, this probability is used together with the observed number of carcasses found to obtain an estimate for the total number of killed animals together with a credible interval.
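
    The tutorial itself works in R with the carcass package; purely as a numerical illustration of the final combination step, the Python lines below fold the three estimated parameters into a detection probability and scale the observed count. All values are hypothetical, and the package additionally propagates the uncertainty of each parameter into a credible interval.

        a = 0.60       # proportion of killed animals falling in the searched area
        s = 0.75       # probability a carcass persists until the next search
        f = 0.80       # searcher efficiency (probability of finding a carcass)
        c_found = 12   # carcasses actually found

        p = a * s * f                # probability a killed animal is found
        n_killed = c_found / p       # point estimate of total mortality
        print(round(n_killed, 1))    # -> 33.3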

  12. Integration and Analysis of Neighbor Discovery and Link Quality Estimation in Wireless Sensor Networks

    PubMed Central

    Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor

    2014-01-01

    Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have focused only on the neighbor discovery problem, while only a few of them provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not yet been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277

  13. Defense Agencies Initiative Increment 2 (DAI Inc 2)

    DTIC Science & Technology

    2016-03-01

    2016 Major Automated Information System Annual Report: Defense Agencies Initiative Increment 2 (DAI Inc 2). ...management systems supporting diverse operational functions and the warfighter in decision making and financial reporting. These disparate, non

  14. Fayette County Better Buildings Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capella, Arthur

    The Fayette County Better Buildings Initiative represented a comprehensive and collaborative approach to promoting and implementing energy efficiency improvements. The initiative was designed to focus on implementing energy efficiency improvements in residential units, while simultaneously supporting general marketing of the benefits of implementing energy efficiency measures. The ultimate goal of Fayette County’s Better Buildings Initiative was to implement a total of 1,067 residential energy efficiency retrofits with a minimum 15% estimated energy efficiency savings per unit. Program partners included the United States Department of Energy, Allegheny Power, the Private Industry Council of Westmoreland-Fayette, the Fayette County Redevelopment Authority, and various local partners. The program was open to any Fayette County residents who own their home and meet the prequalifying conditions. The level of assistance offered depended upon household income and commitment to undergo a BPI-Certified Audit and implement energy efficiency measures, which aimed to result in at least a 15% reduction in energy usage. Additionally, the program had components that involved recruitment and training for employment of persons in the energy sector (green jobs), as well as marketing and implementation of a commercial or community facilities component. The residential component of Fayette County’s Better Buildings Initiative involved a comprehensive approach, providing assistance to low-, moderate-, and market-rate homeowners. The initiative also coordinated activities with local utility providers to further incentivize energy efficiency improvements among qualifying homeowners. The commercial component of Fayette County’s Better Building Initiative involved

  15. The Application of a Decision-Theoretic Model to Estimate the Public Health Impact of Vaporized Nicotine Product Initiation in the United States

    PubMed Central

    Borland, Ron; Villanti, Andrea C.; Niaura, Raymond; Yuan, Zhe; Zhang, Yian; Meza, Rafael; Holford, Theodore R.; Fong, Geoffrey T.; Cummings, K. Michael; Abrams, David B.

    2017-01-01

    Introduction: The public health impact of vaporized nicotine products (VNPs) such as e-cigarettes is unknown at this time. VNP uptake may encourage or deflect progression to cigarette smoking in those who would not have otherwise smoked, thereby undermining or accelerating reductions in smoking prevalence seen in recent years. Methods: The public health impact of VNP use is modeled in terms of how it alters smoking patterns among those who would have otherwise smoked cigarettes and among those who would not have otherwise smoked cigarettes in the absence of VNPs. The model incorporates transitions from trial to established VNP use, transitions to exclusive VNP and dual use, and the effects of cessation at later ages. Public health impact on deaths and life years lost is estimated for a recent birth cohort incorporating evidence-informed parameter estimates. Results: Based on current use patterns and conservative assumptions, we project a reduction of 21% in smoking-attributable deaths and of 20% in life years lost as a result of VNP use by the 1997 US birth cohort compared to a scenario without VNPs. In sensitivity analysis, health gains from VNP use are especially sensitive to VNP risks and VNP use rates among those likely to smoke cigarettes. Conclusions: Under most plausible scenarios, VNP use generally has a positive public health impact. However, very high VNP use rates could result in net harms. More accurate projections of VNP impacts will require better longitudinal measures of transitions into and out of VNP, cigarette and dual use. Implications: Previous models of VNP use do not incorporate whether youth and young adults initiating VNP would have been likely to have been a smoker in the absence of VNPs. This study provides a decision-theoretic model of VNP use in a young cohort that incorporates tendencies toward smoking and shows that, under most plausible scenarios, VNP use yields public health gains. The model makes explicit the type of surveillance

  16. Planning, Implementing, and Documenting an Innovative Statewide Occupational Initiative.

    ERIC Educational Resources Information Center

    Floyd, Jerald D.

    During an 18-month demonstration period, the Illinois Occupational Program Initiative (IOPI), funded through the state's Division of Alcoholism, demonstrated the feasibility of funding individual contractors to create and market employee assistance programs (EAPs) to representatives of industry and labor. Cost estimates for the EAPs were to cover…

  17. Estimating child mortality and modelling its age pattern for India.

    PubMed

    Roy, S G

    1989-06-01

    "Using data [for India] on proportions of children dead...estimates of infant and child mortality are...obtained by Sullivan and Trussell modifications of [the] Brass basic method. The estimate of child survivorship function derived after logit smoothing appears to be more reliable than that obtained by the Census Actuary. The age pattern of childhood mortality is suitably modelled by [a] Weibull function defining the probability of surviving from birth to a specified age and involving two parameters of level and shape. A recently developed linearization procedure based on [a] graphical approach is adopted for estimating the parameters of the function." excerpt

  18. Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawding, Dan; Hillson, Todd D.

    2003-11-15

    Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness of fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using Akaike Information Criteria. Final chum salmon escapement estimates and (standard error) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining was conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise but
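
    For intuition about the mark-recapture machinery, the sketch below shows Chapman's version of the two-sample Lincoln-Petersen estimator, the simplest special case of the Jolly-Seber framework; the study's actual analysis used POPAN 6 with multiple seining occasions, and the counts here are hypothetical.

        n1 = 450   # fish marked on the first occasion
        n2 = 500   # fish captured on the second occasion
        m2 = 60    # marked recaptures in the second sample

        n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1        # abundance estimate
        var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
               / ((m2 + 1) ** 2 * (m2 + 2)))
        print(round(n_hat), round(var ** 0.5))            # -> 3703 410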

  19. Estimation of Tree Position and STEM Diameter Using Simultaneous Localization and Mapping with Data from a Backpack-Mounted Laser Scanner

    NASA Astrophysics Data System (ADS)

    Holmgren, J.; Tulldahl, H. M.; Nordlöf, J.; Nyström, M.; Olofsson, K.; Rydell, J.; Willén, E.

    2017-10-01

    A system was developed for automatic estimation of tree positions and stem diameters. The sensor trajectory was first estimated using a positioning system that consists of a low-precision inertial measurement unit supported by image matching with data from a stereo camera. The initial estimate of the sensor trajectory was then calibrated by adjusting the sensor pose using the laser scanner data. Special features suitable for forest environments were used to solve the correspondence and matching problems. Tree stem diameters were estimated for stem sections using laser data from individual scanner rotations and were then used for calibration of the sensor pose. A segmentation algorithm was used to associate stem sections with individual tree stems. The stem diameter estimates of all stem sections associated with the same tree stem were then combined for estimation of stem diameter at breast height (DBH). The system was validated on four 20 m radius circular plots, and manually measured trees were automatically linked to trees detected in the laser data. The DBH could be estimated with an RMSE of 19 mm (6 %) and a bias of 8 mm (3 %). The calibrated sensor trajectory and the combined use of circle fits from individual scanner rotations made it possible to obtain reliable DBH estimates even with a low-precision positioning system.
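
    The per-rotation diameter step can be illustrated with an algebraic least-squares circle fit; the Kasa fit below is one standard choice and an assumption about the implementation, not code from the paper.

        import numpy as np

        def fit_stem_circle(pts):
            """Fit a circle to the 2D laser returns of one stem section.
            pts : (n, 2) array of x, y hits on the stem surface.
            Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
            x, y = pts[:, 0], pts[:, 1]
            A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
            b = x**2 + y**2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            r = np.sqrt(c + cx**2 + cy**2)
            return (cx, cy), 2.0 * r      # stem centre and diameter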

  20. Genetic Algorithm for Initial Orbit Determination with Too Short Arc

    NASA Astrophysics Data System (ADS)

    Li, Xin-ran; Wang, Xin

    2017-01-01

    A huge quantity of too-short-arc (TSA) observational data has been obtained in sky surveys of space objects. However, reasonable results for the TSAs can hardly be obtained with the classical methods of initial orbit determination (IOD). In this paper, the IOD is reduced to a two-stage hierarchical optimization problem containing three variables at each stage. Using a genetic algorithm, a new IOD method for TSAs is established through the selection of the optimization variables and of the corresponding genetic operators for the specific problem. Numerical experiments based on real measurements show that the method can provide valid initial values for the follow-up work.
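
    A minimal real-coded genetic algorithm of the kind applied here might look as follows; the cost function, bounds, and operator choices (elitist truncation selection, arithmetic crossover, Gaussian mutation) are illustrative assumptions rather than the authors' exact operators.

        import numpy as np

        def ga_minimise(cost, bounds, pop=60, gens=200, pm=0.1, seed=0):
            """Minimise cost(v) over a box; one stage of the two-stage IOD
            search would use a three-variable v scored against the TSA."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            X = rng.uniform(lo, hi, size=(pop, lo.size))
            for _ in range(gens):
                f = np.array([cost(v) for v in X])
                parents = X[np.argsort(f)][: pop // 2]   # elitist truncation
                # Arithmetic crossover between randomly paired parents.
                pairs = rng.integers(0, len(parents), size=(pop - len(parents), 2))
                w = rng.random((len(pairs), 1))
                children = w * parents[pairs[:, 0]] + (1 - w) * parents[pairs[:, 1]]
                # Gaussian mutation, clipped back into the search box.
                mask = rng.random(children.shape) < pm
                children += mask * rng.normal(0.0, 0.05 * (hi - lo), children.shape)
                X = np.vstack([parents, np.clip(children, lo, hi)])
            f = np.array([cost(v) for v in X])
            return X[np.argmin(f)]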

  1. Evaluating science return in space exploration initiative architectures

    NASA Technical Reports Server (NTRS)

    Budden, Nancy Ann; Spudis, Paul D.

    1993-01-01

    Science is an important aspect of the Space Exploration Initiative, a program to explore the Moon and Mars with people and machines. Different SEI mission architectures are evaluated on the basis of three variables: access (to the planet's surface), capability (including number of crew, equipment, and supporting infrastructure), and time (being the total number of man-hours available for scientific activities). This technique allows us to estimate the scientific return to be expected from different architectures and from different implementations of the same architecture. Our methodology allows us to maximize the scientific return from the initiative by illuminating the different emphases and returns that result from the alternative architectural decisions.

  2. Investing in breastfeeding - the world breastfeeding costing initiative.

    PubMed

    Holla-Bhar, Radha; Iellamo, Alessandro; Gupta, Arun; Smith, Julie P; Dadhich, Jai Prakash

    2015-01-01

    Despite scientific evidence substantiating the importance of breastfeeding in child survival and development and its economic benefits, assessments show gaps in many countries' implementation of the 2003 WHO and UNICEF Global Strategy for Infant and Young Child Feeding (Global Strategy). Optimal breastfeeding is a particular example: initiation of breastfeeding within the first hour of birth, exclusive breastfeeding for the first six months; and continued breastfeeding for two years or more, together with safe, adequate, appropriate, responsive complementary feeding starting in the sixth month. While the understanding of "optimal" may vary among countries, there is a need for governments to facilitate an enabling environment for women to achieve optimal breastfeeding. Lack of financial resources for key programs is a major impediment, making economic perspectives important for implementation. Globally, while achieving optimal breastfeeding could prevent more than 800,000 under five deaths annually, in 2013, US$58 billion was spent on commercial baby food including milk formula. Support for improved breastfeeding is inadequately prioritized by policy and practice internationally. The World Breastfeeding Costing Initiative (WBCi) launched in 2013, attempts to determine the financial investment that is necessary to implement the Global Strategy, and to introduce a tool to estimate the costs for individual countries. The article presents detailed cost estimates for implementing the Global Strategy, and outlines the WBCi Financial Planning Tool. Estimates use demographic data from UNICEF's State of the World's Children 2013. The WBCi takes a programmatic approach to scaling up interventions, including policy and planning, health and nutrition care systems, community services and mother support, media promotion, maternity protection, WHO International Code of Marketing of Breastmilk Substitutes implementation, monitoring and research, for optimal breastfeeding practices

  3. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  4. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    PubMed Central

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-01-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378

  5. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.

  6. Estimation of degree of polymerization of poly-acrylonitrile-grafted carbon nanotubes using Guinier plot of small angle x-ray scattering

    NASA Astrophysics Data System (ADS)

    Cho, Hyunjung; Jin, Kyeong Sik; Lee, Jaegeun; Lee, Kun-Hong

    2018-07-01

    Small angle x-ray scattering (SAXS) was used to estimate the degree of polymerization of polymer-grafted carbon nanotubes (CNTs) synthesized using a ‘grafting from’ method. This analysis characterizes the grafted polymer chains without cleaving them from CNTs, and provides reliable data that can complement conventional methods such as thermogravimetric analysis or transmittance electron microscopy. Acrylonitrile was polymerized from the surface of the CNTs by using redox initiation to produce poly-acrylonitrile-grafted CNTs (PAN-CNTs). Polymerization time and the initiation rate were varied to control the degree of polymerization. The radius of gyration (Rg) of the PAN-CNTs was determined using the Guinier plot obtained from SAXS solution analysis. The results showed consistent values according to the polymerization condition, up to a maximum Rg = 125.70 Å, whereas that of pristine CNTs was 99.23 Å. The dispersibility of PAN-CNTs in N,N-dimethylformamide was tested using ultraviolet–visible-near infrared spectroscopy and was confirmed to increase as the degree of polymerization increased. This analysis will be helpful to estimate the degree of polymerization of any polymer-grafted CNTs synthesized using the ‘grafting from’ method and to fabricate polymer/CNT composite materials.
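
    The Guinier analysis reduces to a straight-line fit, since ln I(q) = ln I0 - (Rg^2/3) q^2 at low q. A minimal sketch, assuming the conventional compact-particle validity range q*Rg < ~1.3 (for elongated scatterers such as CNTs the usable range is narrower, which the fitting window would need to reflect):

        import numpy as np

        def guinier_rg(q, intensity, qmax_rg=1.3):
            """Radius of gyration from the slope of ln I versus q^2."""
            slope, _ = np.polyfit(q**2, np.log(intensity), 1)
            rg = np.sqrt(-3.0 * slope)                 # first-pass estimate
            keep = q * rg < qmax_rg                    # restrict to Guinier range
            slope, _ = np.polyfit(q[keep]**2, np.log(intensity[keep]), 1)
            return np.sqrt(-3.0 * slope)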

  7. Estimation of degree of polymerization of poly-acrylonitrile-grafted carbon nanotubes using Guinier plot of small angle x-ray scattering.

    PubMed

    Cho, Hyunjung; Jin, Kyeong Sik; Lee, Jaegeun; Lee, Kun-Hong

    2018-07-06

    Small angle x-ray scattering (SAXS) was used to estimate the degree of polymerization of polymer-grafted carbon nanotubes (CNTs) synthesized using a 'grafting from' method. This analysis characterizes the grafted polymer chains without cleaving them from CNTs, and provides reliable data that can complement conventional methods such as thermogravimetric analysis or transmittance electron microscopy. Acrylonitrile was polymerized from the surface of the CNTs by using redox initiation to produce poly-acrylonitrile-grafted CNTs (PAN-CNTs). Polymerization time and the initiation rate were varied to control the degree of polymerization. The radius of gyration (Rg) of the PAN-CNTs was determined using the Guinier plot obtained from SAXS solution analysis. The results showed consistent values according to the polymerization condition, up to a maximum Rg = 125.70 Å, whereas that of pristine CNTs was 99.23 Å. The dispersibility of PAN-CNTs in N,N-dimethylformamide was tested using ultraviolet-visible-near infrared spectroscopy and was confirmed to increase as the degree of polymerization increased. This analysis will be helpful to estimate the degree of polymerization of any polymer-grafted CNTs synthesized using the 'grafting from' method and to fabricate polymer/CNT composite materials.

  8. Initial attitude determination for the hipparcos satellite

    NASA Astrophysics Data System (ADS)

    Van der Ha, Jozef C.

    The present paper describes the strategy and algorithms used during the initial on-ground three-axis attitude determination of ESA's astrometry satellite HIPPARCOS. The estimation is performed using calculated crossing times of identified stars over the Star Mapper's vertical and inclined slit systems, as well as outputs from a set of rate-integrating gyros. Valid star transits in either of the two fields of view are expected to occur on average about every 30 s, whereas the gyros are sampled at about 1 Hz. The state vector to be estimated consists of the three angles, three rates and three gyro drift rate components. Simulations have shown that convergence of the estimator is established within about 10 min and that the accuracies achieved are on the order of a few arcsec for the angles and a few milliarcsec per second for the rates. These stringent accuracies are in fact required for initialisation of the subsequent autonomous on-board real-time attitude determination.

  9. Decentralized state estimation for a large-scale spatially interconnected system.

    PubMed

    Liu, Huabo; Yu, Haisheng

    2018-03-01

    A decentralized state estimator is derived for the spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequality (LMI) is constructed for the computations of improved subsystem parameter matrices. Several computationally effective approaches are derived which efficiently utilize the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive in the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Comparison of GPS receiver DCB estimation methods using a GPS network

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Kyu; Park, Jong-Uk; Min Roh, Kyoung; Lee, Sang-Jeong

    2013-07-01

    Two approaches for receiver differential code biases (DCB) estimation using the GPS data obtained from the Korean GPS network (KGN) in South Korea are suggested: the relative and single (absolute) methods. The relative method uses a GPS network, while the single method determines DCBs from a single station only. Their performance was assessed by comparing the receiver DCB values obtained from the relative method with those estimated by the single method. The daily averaged receiver DCBs obtained from the two different approaches showed good agreement for 7 days. The root mean square (RMS) value of those differences is 0.83 nanoseconds (ns). The standard deviation of the receiver DCBs estimated by the relative method was smaller than that of the single method. From these results, it is clear that the relative method can obtain more stable receiver DCBs compared with the single method over a short-term period. Additionally, the comparison between the receiver DCBs obtained by the Korea Astronomy and Space Science Institute (KASI) and those of the IGS Global Ionosphere Maps (GIM) showed a good agreement at 0.3 ns. As the accuracy of DCB values significantly affects the accuracy of ionospheric total electron content (TEC), more studies are needed to ensure the reliability and stability of the estimated receiver DCBs.

  11. Short communication: Development of an equation for estimating methane emissions of dairy cows from milk Fourier transform mid-infrared spectra by using reference data obtained exclusively from respiration chambers.

    PubMed

    Vanlierde, A; Soyeurt, H; Gengler, N; Colinet, F G; Froidmont, E; Kreuzer, M; Grandl, F; Bell, M; Lund, P; Olijhoek, D W; Eugène, M; Martin, C; Kuhla, B; Dehareng, F

    2018-05-09

    Evaluation and mitigation of enteric methane (CH4) emissions from ruminant livestock, in particular from dairy cows, have acquired global importance for sustainable, climate-smart cattle production. Based on CH4 reference measurements obtained with the SF6 tracer technique to determine ruminal CH4 production, a current equation permits evaluation of individual daily CH4 emissions of dairy cows based on milk Fourier transform mid-infrared (FT-MIR) spectra. However, the respiration chamber (RC) technique is considered to be more accurate than SF6 to measure CH4 production from cattle. This study aimed to develop an equation that allows estimating CH4 emissions of lactating cows recorded in an RC from corresponding milk FT-MIR spectra and to challenge its robustness and relevance through validation processes and its application on a milk spectral database. This would permit confirming the conclusions drawn with the existing equation based on SF6 reference measurements regarding the potential to estimate daily CH4 emissions of dairy cows from milk FT-MIR spectra. A total of 584 RC reference CH4 measurements (mean ± standard deviation of 400 ± 72 g of CH4/d) and corresponding standardized milk mid-infrared spectra were obtained from 148 individual lactating cows between 7 and 321 d in milk in 5 European countries (Germany, Switzerland, Denmark, France, and Northern Ireland). The developed equation based on RC measurements showed calibration and cross-validation coefficients of determination of 0.65 and 0.57, respectively, which is lower than those obtained earlier by the equation based on 532 SF6 measurements (0.74 and 0.70, respectively). This means that the RC-based model is unable to explain the variability observed in the corresponding reference data as well as the SF6-based model. The standard errors of calibration and cross-validation were lower for the RC model (43 and 47 g/d vs. 66 and 70 g/d for the SF6 version, respectively), indicating
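
    The abstract does not name the regression technique; calibration equations from FT-MIR spectra are commonly built with partial least squares, so a structural sketch might look as below, with placeholder arrays standing in for the 584 spectra and respiration-chamber reference values (random data will not, of course, yield a meaningful calibration).

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        X = rng.normal(size=(584, 300))       # placeholder milk FT-MIR spectra
        y = 400 + 72 * rng.normal(size=584)   # placeholder CH4 references (g/d)

        pls = PLSRegression(n_components=15)  # components chosen by CV in practice
        pls.fit(X, y)
        ch4_pred = pls.predict(X).ravel()     # predicted g CH4/d per spectrum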

  12. Concept Mapping for Planning and Evaluation of a Community-Based Initiative

    ERIC Educational Resources Information Center

    Chiu, Korinne

    2012-01-01

    Community-based initiatives address community issues by providing a multi-agency approach to prevention and intervention services (Connell et al.,1995). When incorporating multiple agencies, it can be challenging to obtain multiple perspectives and gaining consensus on the priorities and direction for these initiatives. This study employed a…

  13. Estimating Renewable Energy Economic Potential in the United States: Methodology and Initial Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Austin; Beiter, Philipp; Heimiller, Donna

    The report describes a geospatial analysis method to estimate the economic potential of several renewable resources available for electricity generation in the United States. Economic potential, one measure of renewable generation potential, is defined in this report as the subset of the available resource technical potential where the cost required to generate the electricity (which determines the minimum revenue requirements for development of the resource) is below the revenue available in terms of displaced energy and displaced capacity.

  14. Effect of Initial Microstructure on Impact Toughness of 1200 MPa-Class High Strength Steel with Ultrafine Elongated Grain Structure

    NASA Astrophysics Data System (ADS)

    Jafari, Meysam; Garrison, Warren M.; Tsuzaki, Kaneaki

    2014-02-01

    A medium-carbon low-alloy steel was prepared with initial structures of either martensite or bainite. For both initial structures, warm caliber-rolling was conducted at 773 K (500 °C) to obtain ultrafine elongated grain (UFEG) structures with strong <110>//rolling direction (RD) fiber deformation textures. The UFEG structures consisted of spheroidal cementite particles distributed uniformly in a ferrite matrix with transverse grain sizes of about 331 and 311 nm in the samples with initial martensite and bainite structures, respectively. For both initial structures, the UFEG materials had similar tensile properties, upper shelf energies (145 J) and ductile-to-brittle transition temperatures of 98 K (-175 °C). Obtaining the martensitic structure requires more rapid cooling than is needed to obtain the bainitic structure, and this more rapid cooling promotes cracking. Since the UFEG structures obtained from the initial martensitic and bainitic structures have almost identical properties, while obtaining the bainitic structure does not require the rapid cooling that promotes cracking, the use of a bainitic structure for obtaining UFEG structures should be examined further.

  15. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including those of the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step performed either manually or automatically. However, manual segmentation is extremely time-consuming, subjective, and highly non-reproducible, while automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods that bypass segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the intermediate segmentation step and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible and can be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing with segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
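    As a minimal sketch of the direct, segmentation-free idea (the abstract does not name the learner or the features, so both are assumptions here), a regressor can map image-derived features straight to joint LV/RV volumes:

```python
# Sketch of "direct" volume estimation: a regression model maps per-image
# features to ventricular volumes without any segmentation step. Features
# and volumes below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 64))          # hypothetical image features
volumes = rng.uniform(40, 200, size=(200, 2))  # hypothetical (LV, RV) in mL

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features[:150], volumes[:150])       # jointly regress both ventricles
pred = model.predict(features[150:])           # direct bi-ventricular estimates
print(pred[:3])
```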

  16. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating the point spread function (PSF) from low-resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in an LR image, which leads to a poor estimate of the PSF of the lens that took the LR image. For precise PSF estimation, this paper proposes first estimating a 1-D PSF kernel with straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel with straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
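    The edge-and-line stage of the described pipeline can be sketched with standard OpenCV calls; the thresholds and file name below are illustrative assumptions, and the subsequent 1-D kernel fitting and RANSAC assembly of the 2-D PSF are only indicated in comments:

```python
# Canny edges followed by a Hough transform to extract straight lines, whose
# perpendicular profiles would feed the 1-D PSF kernel estimate.
import cv2
import numpy as np

img = cv2.imread("low_res.png", cv2.IMREAD_GRAYSCALE)  # hypothetical LR image
edges = cv2.Canny(img, 50, 150)                        # edge map
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)     # (rho, theta) per line

# Profiles sampled perpendicular to these lines give edge-spread functions
# from which a 1-D PSF kernel is estimated; least squares plus RANSAC would
# then assemble the 2-D PSF (not shown).
if lines is not None:
    for rho, theta in lines[:5, 0]:
        print(f"line: rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
```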

  17. Isolator fragmentation and explosive initiation tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, Peter; Rae, Philip John; Foley, Timothy J.

    2016-09-19

    Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst-case from an inadvertent initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX 9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.

  18. [Cardiovascular risk: initial estimation in the study cohort "CDC of the Canary Islands in Venezuela"].

    PubMed

    Viso, Miguel; Rodríguez, Zulma; Loreto, Neydys; Fernández, Yolima; Callegari, Carlos; Nicita, Graciela; González, Julio; Cabrera de León, Antonio; Reigosa, Aldo

    2011-12-01

    In Venezuela as in the Canary Islands (Spain), cardiovascular disease is a major cause of morbidity and mortality. The purpose of this research is to estimate the cardiovascular risk in the Canary Islands migrants living in Venezuela and participating in the study cohort "CDC of the Canary Islands in Venezuela". 452 individuals, aged 18 to 93 years (54.9% women), were enrolled between June 2008 and August 2009. A data survey was performed and their weight, height, abdomen and hip circumferences, and blood pressure were measured. After a 12-hour fasting period, a blood sample was obtained for glucose and lipid profile determinations. 40.5% of the subjects were over 65 years of age and 8% corresponded to the younger group (18-30 years). In men, the average age was 57.69 +/- 18.17 years and the body mass index 29.39 +/- 5.71 kg/m2, whereas for women these were 56.50 +/- 16.91 years and 28.20 +/- 5.57 kg/m2, respectively. The prevalence of metabolic syndrome was 49.1%, overweight and obesity together 75.2%, abdominal obesity 85.4%, diabetes 17.4%, impaired fasting glucose (IFG) 12.2%, elevated blood pressure 52.9%, low HDL-cholesterol 53.8%, and elevated serum triglycerides 31%. Among subjects without diabetes or IFG, a third showed a high triglycerides/HDL-cholesterol ratio, indicating insulin resistance. We conclude that the Canarian-Venezuelan community suffers a high prevalence of cardiovascular risk factors (obesity, abdominal obesity, dyslipidemia, diabetes). In relation to the current population of the Canary Islands, they show a lower frequency of IFG and a higher frequency of low HDL-cholesterol. In comparison to the Venezuelan population (Zulia), they showed a lower prevalence of IFG, low HDL-cholesterol, and elevated triglycerides.

  19. Timing of Initiation of Maintenance Dialysis

    PubMed Central

    Wong, Susan P. Y.; Vig, Elizabeth K.; Taylor, Janelle S.; Burrows, Nilka R.; Liu, Chuan-Fen; Williams, Desmond E.; Hebert, Paul L.; O’Hare, Ann M.

    2016-01-01

    IMPORTANCE There is often considerable uncertainty about the optimal time to initiate maintenance dialysis in individual patients and little medical evidence to guide this decision. OBJECTIVE To gain a better understanding of the factors influencing the timing of initiation of dialysis in clinical practice. DESIGN, SETTING, AND PARTICIPANTS A qualitative analysis was conducted using the electronic medical records from the Department of Veterans Affairs (VA) of a national random sample of 1691 patients for whom the decision to initiate maintenance dialysis occurred in the VA between January 1, 2000, and December 31, 2009. Data analysis took place from June 1 to November 30, 2014. MAIN OUTCOMES AND MEASURES Central themes related to the timing of initiation of dialysis as documented in patients’ electronic medical records. RESULTS Of the 1691 patients, 1264 (74.7%) initiated dialysis as inpatients and 1228 (72.6%) initiated dialysis with a hemodialysis catheter. Cohort members met with a nephrologist during an outpatient clinic visit a median of 3 times (interquartile range, 0–6) in the year prior to initiation of dialysis. The mean (SD) estimated glomerular filtration rate at the time of initiation for cohort members was 10.4 (5.7) mL/min/1.73m2. The timing of initiation of dialysis reflected the complex interplay of at least 3 interrelated and dynamic processes. The first was physician practices, which ranged from practices intended to prepare patients for dialysis to those intended to forestall the need for dialysis by managing the signs and symptoms of uremia with medical interventions. The second process was sources of momentum. Initiation of dialysis was often precipitated by clinical events involving acute illness or medical procedures. In these settings, the imperative to treat often seemed to override patient choice. The third process was patient-physician dynamics. Interactions between patients and physicians were sometimes adversarial, and physician

  20. Clinical validation of the General Ability Index--Estimate (GAI-E): estimating premorbid GAI.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Iverson, Grant L; Chelune, Gordon J; Scott, James G; Adams, Russell L

    2006-09-01

    The clinical utility of the General Ability Index--Estimate (GAI-E; Lange, Schoenberg, Chelune, Scott, & Adams, 2005) for estimating premorbid GAI scores was investigated using the WAIS-III standardization clinical trials sample (The Psychological Corporation, 1997). The GAI-E algorithms combine Vocabulary, Information, Matrix Reasoning, and Picture Completion subtest raw scores with demographic variables to predict GAI. Ten GAI-E algorithms were developed combining demographic variables with single subtest scaled scores and with two subtests. Estimated GAI are presented for participants diagnosed with dementia (n = 50), traumatic brain injury (n = 20), Huntington's disease (n = 15), Korsakoff's disease (n = 12), chronic alcohol abuse (n = 32), temporal lobectomy (n = 17), and schizophrenia (n = 44). In addition, a small sample of participants without dementia and diagnosed with depression (n = 32) was used as a clinical comparison group. The GAI-E algorithms provided estimates of GAI that closely approximated scores expected for a healthy adult population. The greatest differences between estimated GAI and obtained GAI were observed for the single subtest GAI-E algorithms using the Vocabulary, Information, and Matrix Reasoning subtests. Based on these data, recommendations for the use of the GAI-E algorithms are presented.
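    The published GAI-E algorithms and coefficients are not reproduced here; purely as an illustration of the approach (combining a subtest score with demographic variables in a linear model to predict GAI), a hypothetical version might look like:

```python
# Illustrative sketch only, NOT the published GAI-E algorithm or weights:
# a linear model combining one subtest score with demographics to predict GAI.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# columns: Vocabulary raw score, years of education, age (all hypothetical)
X = np.column_stack([
    rng.integers(10, 70, 500),
    rng.integers(8, 20, 500),
    rng.integers(18, 90, 500),
])
gai = 100 + 0.6 * (X[:, 0] - 40) + 1.5 * (X[:, 1] - 12) + rng.normal(0, 10, 500)

model = LinearRegression().fit(X, gai)   # estimate algorithm weights
print(model.predict([[45, 16, 35]]))     # predicted (premorbid) GAI
```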

  1. State Estimation for Landing Maneuver on High Performance Aircraft

    NASA Astrophysics Data System (ADS)

    Suresh, P. S.; Sura, Niranjan K.; Shankar, K.

    2018-01-01

    State estimation methods are popular means for validating the aerodynamic database on aircraft flight maneuver performance characteristics. In this work, a state estimation method for the landing maneuver is explored, for the first time, using an upper diagonal adaptive extended Kalman filter (UD-AEKF) with fuzzy-based adaptive tuning of the process noise matrix. The mathematical model for the symmetrical landing maneuver consists of non-linear flight mechanics equations representing aircraft longitudinal dynamics. The UD-AEKF algorithm is implemented in the MATLAB environment, and the states with bias are taken to be the initial conditions just prior to the flare. The measurement data are obtained from a non-linear 6 DOF pilot-in-loop simulation using FORTRAN. These simulated measurement data are additively mixed with process and measurement noises, which are used as an input for the UD-AEKF. Then, the governing states that dictate the landing loads at the instant of touchdown are compared. The method is verified using flight data wherein the vertical acceleration at the aircraft center of gravity (CG) is compared. Two possible outcomes of relying purely on the measured aircraft data are highlighted. It is observed that, with the implementation of the adaptive fuzzy-logic-based extended Kalman filter tuned to adapt to aircraft landing dynamics, the methodology improves the data quality of the states that are sourced from noisy measurements.
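    A highly simplified sketch of one adaptive-EKF step in this spirit follows; the fuzzy rule base and the UD factorization of the original are replaced by a crude innovation-based heuristic, and all models and matrices are assumed supplied by the caller:

```python
# One predict/update cycle of an EKF whose process-noise matrix Q is inflated
# when the measurement innovation is large (a crude stand-in for fuzzy tuning).
import numpy as np

def adaptive_ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle with a crude adaptive inflation of Q."""
    x_pred = f(x)                                # state prediction
    P_pred = F @ P @ F.T + Q                     # covariance prediction
    nu = z - h(x_pred)                           # innovation
    S = H @ P_pred @ H.T + R                     # innovation covariance
    nis = float(nu @ np.linalg.solve(S, nu))     # normalized innovation squared
    Q_next = Q * (2.0 if nis > 9.0 else 1.0)     # heuristic "fuzzy" rule
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, Q_next

# one step on a scalar random-walk example (all values illustrative)
x, P = np.array([0.0]), np.eye(1)
x, P, Q = adaptive_ekf_step(x, P, np.array([0.4]),
                            f=lambda s: s, F=np.eye(1),
                            h=lambda s: s, H=np.eye(1),
                            Q=np.eye(1) * 0.01, R=np.eye(1) * 0.1)
print(x, Q)
```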

  2. How and Where Young Adults Obtain Marijuana. The NSDUH Report. Issue 20

    ERIC Educational Resources Information Center

    Substance Abuse and Mental Health Services Administration, 2006

    2006-01-01

    The National Survey on Drug Use and Health (NSDUH) asks persons aged 12 or older about their use of marijuana or hashish in the past year, including their frequency of use. This report focuses on how and where past year marijuana users aged 18 to 25 obtained their most recently used marijuana. Findings include estimates from the combined 2002,…

  3. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties that minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right half-plane are described. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions, which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
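    A hedged sketch of such a segment-averaged estimator follows; the convention B(f1, f2) = E[X(f1) Y(f2) X*(f1+f2)], the normalization, and the absence of windowing are simplifying assumptions, not the paper's exact procedure:

```python
# Direct segment-averaged cross-bispectrum estimate: subdivide the series,
# FFT each segment, and average the triple products to reduce variance.
import numpy as np

def cross_bispectrum(x, y, nseg=64):
    """Estimate B(f1, f2) = E[X(f1) Y(f2) conj(X(f1+f2))] by averaging."""
    segs = len(x) // nseg
    B = np.zeros((nseg, nseg), dtype=complex)
    f = np.arange(nseg)
    for k in range(segs):
        X = np.fft.fft(x[k * nseg:(k + 1) * nseg])
        Y = np.fft.fft(y[k * nseg:(k + 1) * nseg])
        # conjugate term at the sum frequency, indices wrapped modulo nseg
        B += X[f][:, None] * Y[f][None, :] * \
             np.conj(X[(f[:, None] + f[None, :]) % nseg])
    return B / segs

rng = np.random.default_rng(3)
x = rng.exponential(size=4096) - 1.0   # skewed series has nonzero bispectrum
print(np.abs(cross_bispectrum(x, x)).max())
```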

  4. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two benchmark problems. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
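    The first, averaging-based estimator can be sketched as follows: ensemble members' a posteriori probability estimates are averaged and the plug-in quantity E[1 - max_k p_k(x)] is reported. Data and models below are placeholders, not those of the article:

```python
# Average the ensemble members' posterior estimates, then report the plug-in
# Bayes-error estimate E[1 - max_k p_k(x)] on held-out data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=2000, n_informative=5, random_state=0)
ensemble = BaggingClassifier(LogisticRegression(max_iter=1000),
                             n_estimators=25, random_state=0)
ensemble.fit(X[:1000], y[:1000])

# average a posteriori probabilities over ensemble members
probs = np.mean([m.predict_proba(X[1000:]) for m in ensemble.estimators_],
                axis=0)
bayes_error_est = np.mean(1.0 - probs.max(axis=1))
print(f"plug-in Bayes error estimate: {bayes_error_est:.3f}")
```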

  5. Building upon the Great Waters Initiative: Scoping study for potential polyaromatic hydrocarbon deposition into San Diego Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koehler, J.; Sylte, W.W.

    1997-12-31

    The deposition of atmospheric polyaromatic hydrocarbons (PAHs) into San Diego Bay was evaluated at an initial study level. This study was part of an overall initial estimate of PAH waste loading to San Diego Bay from all environmental pathways. The study of air pollutant deposition to water bodies has gained increased attention both as a component of Total Maximum Daily Load (TMDL) determinations required under the Clean Water Act and pursuant to federal funding authorized by the 1990 Clean Air Act Amendments to study the atmospheric deposition of hazardous air pollutants to the Great Waters, which includes coastal waters. To date, studies under the Clean Air Act have included the Great Lakes, Chesapeake Bay, Lake Champlain, and Delaware Bay. Given the limited resources of this initial study for San Diego Bay, the focus was on maximizing the use of existing data and information. The approach developed included the statistical evaluation of measured atmospheric PAH concentrations in the San Diego area, the extrapolation of EPA study results of atmospheric PAH concentrations above Lake Michigan to supplement the San Diego data, the estimation of dry and wet deposition with published calculation methods considering local wind and rainfall data, and the comparison of resulting PAH deposition estimates for San Diego Bay with estimated PAH emissions from ship and commercial boat activity in the San Diego area. The resulting PAH deposition and ship emission estimates were within the same order of magnitude. Since a significant contributor to the atmospheric deposition of PAHs to the Bay is expected to be from shipping traffic, this result provides a check on the order of magnitude of the PAH deposition estimate. Also, when compared against initial estimates of PAH loading to San Diego Bay from other environmental pathways, the atmospheric deposition pathway appears to be a significant contributor.

  6. Estimating Renewable Energy Economic Potential in the United States. Methodology and Initial Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Austin; Beiter, Philipp; Heimiller, Donna

    This report describes a geospatial analysis method to estimate the economic potential of several renewable resources available for electricity generation in the United States. Economic potential, one measure of renewable generation potential, may be defined in several ways. For example, one definition might be expected revenues (based on local market prices) minus generation costs, considered over the expected lifetime of the generation asset. Another definition might be generation costs relative to a benchmark (e.g., a natural gas combined cycle plant) using assumptions of fuel prices, capital cost, and plant efficiency. Economic potential in this report is defined as the subset of the available resource technical potential where the cost required to generate the electricity (which determines the minimum revenue requirements for development of the resource) is below the revenue available in terms of displaced energy and displaced capacity. The assessment is conducted at a high geospatial resolution (more than 150,000 technology-specific sites in the continental United States) to capture the significant variation in local resource, costs, and revenue potential. This metric can be a useful screening factor for understanding the economic viability of renewable generation technologies at a specific location. In contrast to many common estimates of renewable energy potential, economic potential does not consider market dynamics, customer demand, or most policy drivers that may incent renewable energy generation.
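    The screening definition above reduces to a simple per-site comparison; as a toy sketch with invented numbers (not NREL data):

```python
# A site's resource counts as "economic" when its levelized generation cost
# falls below the value of displaced energy plus displaced capacity.
import numpy as np

rng = np.random.default_rng(4)
n_sites = 150_000
lcoe = rng.uniform(20, 120, n_sites)            # $/MWh generation cost
energy_value = rng.uniform(25, 60, n_sites)     # $/MWh displaced energy value
capacity_value = rng.uniform(0, 30, n_sites)    # $/MWh-equivalent capacity
technical_mwh = rng.uniform(1e3, 1e6, n_sites)  # technical potential per site

economic = lcoe < (energy_value + capacity_value)
print(f"economic potential: {technical_mwh[economic].sum():.3e} MWh "
      f"({economic.mean():.1%} of sites)")
```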

  7. Unbiased Estimates of Variance Components with Bootstrap Procedures

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…

  8. Phase composition and microstructure of WC-Co alloys obtained by selective laser melting

    NASA Astrophysics Data System (ADS)

    Khmyrov, Roman S.; Shevchukov, Alexandr P.; Gusarov, Andrey V.; Tarasova, Tatyana V.

    2018-03-01

    Phase composition and microstructure of initial WC, BK8 (powder alloy 92 wt.% WC-8 wt.% Co), and Co powders, of ball-milled powders with four different compositions ((1) 25 wt.% WC-75 wt.% Co, (2) 30 wt.% BK8-70 wt.% Co, (3) 50 wt.% WC-50 wt.% Co, (4) 94 wt.% WC-6 wt.% Co), and of bulk alloys obtained by selective laser melting (SLM) from as-milled powders, in the as-melted state and after heat treatment, were investigated by scanning electron microscopy and X-ray diffraction analysis. Initial and ball-milled powders consist of WC, hexagonal α-Co, and face-centered cubic β-Co. SLM leads to the formation of the major new phases W3Co3C and W4Co2C and a face-centered cubic β-Co-based solid solution. During heat treatment, the face-centered cubic β-Co-based solid solution partially decomposes, forming W2C and a hexagonal α-Co solid solution. The microstructure of the obtained bulk samples, in general, corresponds to the observed phase composition.

  9. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic quality parameter estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, were proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimates obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
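    As a minimal sketch of the SVR step (the paper's actual image features and reference measurements are not reproduced; everything below is a placeholder):

```python
# Regress intramuscular fat percentage on features extracted from a region
# of interest, evaluating with cross-validation against reference values.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
roi_features = rng.normal(size=(120, 30))   # hypothetical texture features
imf_percent = rng.uniform(1.0, 8.0, 120)    # hypothetical reference IMF (%)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
scores = cross_val_score(model, roi_features, imf_percent,
                         scoring="neg_mean_absolute_error", cv=5)
print(f"CV MAE: {-scores.mean():.2f} % IMF")
```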

  10. Estimation of Solar Radiation on Building Roofs in Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Agugiaro, G.; Remondino, F.; Stevanato, G.; De Filippi, R.; Furlanello, C.

    2011-04-01

    The aim of this study is to estimate solar radiation on building roofs in complex mountain landscapes. A multi-scale solar radiation estimation methodology is proposed that combines 3D data ranging from the regional scale to the architectural one. Both terrain and nearby-building shadowing effects are considered. The approach is modular, and several alternative roof models, obtained by surveying and modelling techniques at varying levels of detail, can be embedded in a DTM, e.g. that of an Alpine valley surrounded by mountains. The solar radiation maps obtained from raster models at different resolutions are compared and evaluated in order to obtain information regarding the benefits and disadvantages tied to each roof modelling approach. The solar radiation estimation is performed within the open-source GRASS GIS environment using r.sun and its ancillary modules.
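    A hedged sketch of the GRASS GIS step follows, using the Python scripting API; parameter names follow the GRASS 7 r.sun manual, the map names are hypothetical, and a running GRASS session with a DTM loaded is assumed:

```python
# Compute a daily global radiation map with r.sun from a DTM and its derived
# slope/aspect rasters (map names are placeholders).
import grass.script as gs

gs.run_command("r.slope.aspect", elevation="dtm", slope="slope", aspect="aspect")
gs.run_command(
    "r.sun",
    elevation="dtm",
    slope="slope",
    aspect="aspect",
    glob_rad="global_rad_d172",   # output: global irradiation [Wh/m^2/day]
    day=172,                      # day of year (near the summer solstice)
)
```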

  11. SINGLE-INTERVAL GAS PERMEABILITY ESTIMATION

    EPA Science Inventory

    Single-interval, steady-state gas permeability testing requires estimation of pressure at a screened interval, which in turn requires measurement of friction factors as a function of mass flow rate. Friction factors can be obtained by injecting air through a length of pipe...

  12. Initial eccentricity and constituent quark number scaling of elliptic flow in ideal and viscous dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaudhuri, A. K.

    2010-04-15

    In the Israel-Stewart theory of dissipative hydrodynamics, the scaling properties of elliptic flow in Au+Au collisions are studied. The initial energy density of the fluid was fixed to reproduce STAR data on phi-meson multiplicity in 0-5% Au+Au collisions such that, irrespective of fluid viscosity, entropy at the freeze-out is similar in ideal or in viscous evolution. The initial eccentricity or constituent quark number scaling is only approximate in ideal or minimally viscous (eta/s = 1/4pi) fluid. Eccentricity scaling becomes nearly exact in more viscous fluid (eta/s >= 0.12). However, in more viscous fluid, constituent quark number scaled elliptic flow for mesons and baryons splits into separate scaling functions. Simulated flows also do not exhibit "universal scaling"; that is, elliptic flow scaled by the constituent quark number and charged-particle v2 is not a single function of transverse kinetic energy scaled by the quark number. From a study of the violation of universal scaling, we obtain an estimate of quark-gluon plasma viscosity, eta/s = 0.12 +- 0.03. The error is statistical only; the systematic error in eta/s could be large.
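    For reference, the constituent-quark-number (NCQ) scaling being tested is commonly written as follows (standard notation, not specific to this paper):

```latex
% NCQ scaling: hadron elliptic flow collapses onto a single curve when v_2
% and the transverse kinetic energy KE_T = m_T - m are both scaled by the
% number of constituent quarks n_q (n_q = 2 for mesons, 3 for baryons).
\[
  \frac{v_2^{h}(KE_T)}{n_q} \approx f\!\left(\frac{KE_T}{n_q}\right),
  \qquad KE_T = \sqrt{p_T^2 + m^2} - m .
\]
```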

  13. Method for obtaining structure and interactions from oriented lipid bilayers

    PubMed Central

    Lyatskaya, Yulia; Liu, Yufeng; Tristram-Nagle, Stephanie; Katsaras, John; Nagle, John F.

    2009-01-01

    Precise calculations are made of the scattering intensity I(q) from an oriented stack of lipid bilayers using a realistic model of fluctuations. The quantities of interest include the bilayer bending modulus Kc, the interbilayer interaction modulus B, and bilayer structure through the form factor F(qz). It is shown how Kc and B may be obtained from data at large qz where fluctuations dominate. Good estimates of F(qz) can be made over wide ranges of qz by using I(q) in q regions away from the peaks and for qr≠0 where details of the scattering domains play little role. Rough estimates of domain sizes can also be made from smaller qz data. Results are presented for data taken on fully hydrated, oriented DOPC bilayers in the Lα phase. These results illustrate the advantages of oriented samples compared to powder samples. PMID:11304287

  14. Estimated use of water in Lincoln County, Wyoming, 1993

    USGS Publications Warehouse

    Ogle, K.M.; Eddy-Miller, C. A.; Busing, C.J.

    1996-01-01

    Total water use in Lincoln County, Wyoming in 1993 was estimated to be 405,000 Mgal (million gallons). Water use estimates were divided into nine categories: public supply, self-supplied domestic, commercial, irrigation, livestock, industrial, mining, thermoelectric power, and hydroelectric power. Public supply water use, estimated to be 2,160 Mgal, was obtained primarily from springs and wells. Shallow ground water wells were the primary source of self-supplied domestic water, estimated to be 1.7 Mgal, and 53 percent of those wells were drilled to a depth of 100 feet or less. Commercial water use, estimated to be 117 Mgal, was obtained from public-supply systems. Surface water supplied an estimated 153,000 Mgal of the total estimated water use of 158,000 Mgal for irrigation in 1993. Sprinkler and flood irrigation technology were used about equally in the northern part of Lincoln County, and flood irrigation was the primary technology used in the southern part. Livestock, industrial, and mining were not major water users in Lincoln County in 1993. Livestock water use totaled an estimated 203 Mgal. Industrial water use was estimated to be 120 Mgal from self-supplied water sources and 27 Mgal from public-supplied water sources. Mining water use was an estimated 153 Mgal. Thermoelectric and hydroelectric power generation used surface water sources. Thermoelectric power water use was an estimated 5,900 Mgal. An estimated 238,000 Mgal of water was used to generate hydroelectric power at Fontenelle Reservoir on the Green River.

  15. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, along with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  16. Cancer Related-Knowledge - Small Area Estimates

    Cancer.gov

    These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey with auxiliary variables obtained from relevant sources, and borrow strength from other areas with similar characteristics.

  17. Three-dimensional reconstruction for coherent diffraction patterns obtained by XFEL.

    PubMed

    Nakano, Miki; Miyashita, Osamu; Jonic, Slavica; Song, Changyong; Nam, Daewoong; Joti, Yasumasa; Tama, Florence

    2017-07-01

    The three-dimensional (3D) structural analysis of single particles using an X-ray free-electron laser (XFEL) is a new structural biology technique that enables observations of molecules that are difficult to crystallize, such as flexible biomolecular complexes and living tissue in the state close to physiological conditions. In order to restore the 3D structure from the diffraction patterns obtained by the XFEL, computational algorithms are necessary as the orientation of the incident beam with respect to the sample needs to be estimated. A program package for XFEL single-particle analysis based on the Xmipp software package, that is commonly used for image processing in 3D cryo-electron microscopy, has been developed. The reconstruction program has been tested using diffraction patterns of an aerosol nanoparticle obtained by tomographic coherent X-ray diffraction microscopy.

  18. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
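    For orientation, the classical two-sample change-in-ratio estimator that such generalized models extend can be written as follows (standard textbook form; the notation is assumed, not quoted from the paper):

```latex
% Two-sample change-in-ratio: with subclass-x proportions \hat p_1, \hat p_2
% observed before and after a known removal (R_x of subclass x out of R
% total), the pre-removal population size is
\[
  \hat N_1 \;=\; \frac{R_x - R\,\hat p_2}{\hat p_1 - \hat p_2},
\]
% valid when encounter probabilities are equal across subclasses within each
% survey -- precisely the kind of assumption the paper relaxes.
```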

  19. Effect of initial conditions and of intra-event rainfall intensity variability on shallow landslide triggering return period

    NASA Astrophysics Data System (ADS)

    Peres, David Johnny; Cancelliere, Antonino

    2016-04-01

    A Monte Carlo simulation framework was applied in order to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both the variability of initial conditions and that of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may in practice lead to an overestimation of the return period of up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide triggering hazard.

  20. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. (Bulletin of the Seismological Society of America 103, 2013), derived using events with moment magnitude (M) ≥ 5.0, 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for the average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates based on PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using
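    The inversion of a GMPE for magnitude can be sketched generically; the functional form and coefficients below are invented placeholders, not the study's regressions:

```python
# Given observed PGV at known source distances, invert a generic GMPE of the
# form log10(PGV) = c0 + c1*M - c2*log10(R) for M and average over stations.
import numpy as np

c0, c1, c2 = -2.0, 0.8, 1.3                  # hypothetical GMPE coefficients

def magnitude_from_pgv(pgv_cm_s, dist_km):
    """Per-station magnitude from log10(PGV) = c0 + c1*M - c2*log10(R)."""
    return (np.log10(pgv_cm_s) - c0 + c2 * np.log10(dist_km)) / c1

pgv = np.array([12.0, 30.0, 8.5])            # observed PGV (cm/s), invented
dist = np.array([120.0, 80.0, 150.0])        # hypocentral distances (km)
print(f"M estimate: {magnitude_from_pgv(pgv, dist).mean():.1f}")
```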

  1. Fully decentralized estimation and control for a modular wheeled mobile robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mutambara, A.G.O.; Durrant-Whyte, H.F.

    2000-06-01

    In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model, using plane motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control are thus obtained locally using reduced-order models with reduced communication of information between nodes. When communication is carried out after every measurement (full-rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design, as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.
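    The additive structure that makes such decentralized fusion possible is the information-form update; a minimal linear sketch (the extended filter would evaluate Jacobians at the current estimate) is:

```python
# Each node adds its local contributions (H^T R^-1 H, H^T R^-1 z) to the
# information matrix/vector, so fusion between nodes is purely additive.
import numpy as np

def information_update(Y, y, H, R, z):
    """Add one sensor's contribution in information form."""
    Rinv = np.linalg.inv(R)
    return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

# two nodes observing a 2-state system; contributions simply add
Y = np.eye(2) * 1e-2              # prior information matrix
y = np.zeros(2)                   # prior information vector
for H, z in [(np.array([[1.0, 0.0]]), np.array([0.9])),
             (np.array([[0.0, 1.0]]), np.array([-0.2]))]:
    Y, y = information_update(Y, y, H, np.array([[0.05]]), z)

x_hat = np.linalg.solve(Y, y)     # state estimate recovered from information form
print(x_hat)
```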

  2. GFR at Initiation of Dialysis and Mortality in CKD: A Meta-analysis

    PubMed Central

    Susantitaphong, Paweena; Altamimi, Sarah; Ashkar, Motaz; Balk, Ethan M.; Stel, Vianda S.; Wright, Seth; Jaber, Bertrand L.

    2012-01-01

    Background: The proportion of patients with advanced chronic kidney disease (CKD) initiating dialysis at higher glomerular filtration rates (GFRs) has increased over the past decade. Recent data suggest that higher GFR may be associated with increased mortality. Study Design: A meta-analysis of cohort studies and trials. Setting & Population: Patients with advanced CKD. Selection Criteria for Studies: We performed a systematic literature search in MEDLINE, Cochrane Central Register of Controlled Trials, ClinicalTrials.gov, American Society of Nephrology abstracts, and bibliographies of retrieved articles to identify studies reporting on GFR at dialysis initiation and mortality. Predictor: Estimated or calculated GFR at dialysis initiation. Outcome: Pooled adjusted hazard ratio (HR) of continuous GFR for all-cause mortality. Results: Sixteen cohort studies and one randomized controlled trial were identified (n=1,081,116). By meta-analysis, restricted to the 15 cohorts (n=1,079,917), higher GFR at dialysis initiation was associated with a higher pooled adjusted HR for all-cause mortality (1.04; 95% CI, 1.03–1.05; P<0.001). However, there was significant heterogeneity (I2=97%; P<0.001). The association persisted among the 9 cohorts that adjusted analytically for nutritional covariates (HR, 1.03; 95% CI, 1.02–1.04; P<0.001; residual I2=97%). The highest mortality risk was observed in hemodialysis cohorts (HR, 1.05; 95% CI, 1.02–1.08; P<0.001), whereas there was no association between GFR and mortality in peritoneal dialysis cohorts (HR, 1.04; 95% CI, 0.99–1.08; P=0.11; residual I2=98%). Finally, higher GFR was associated with a lower mortality risk in cohorts that calculated GFR (HR, 0.80; 95% CI, 0.71–0.91; P=0.003), contrasting with a higher mortality risk in cohorts that estimated GFR (HR, 1.04; 95% CI, 1.03–1.05; P<0.001; residual I2=97%). Limitations: Paucity of randomized controlled trials; different methods for determining GFR; and substantial heterogeneity. Conclusions:

  3. Near surface water content estimation using GPR data: investigations within California vineyards

    NASA Astrophysics Data System (ADS)

    Hubbard, S.; Grote, K.; Lunt, I.; Rubin, Y.

    2003-04-01

    Detailed estimates of water content are necessary for a variety of hydrogeological investigations. In viticulture applications, this information is particularly useful for assisting the design of both vineyard layout and efficient irrigation/agrochemical application. However, it is difficult to obtain sufficient information about the spatial variation of water content within the root zone using conventional point or wellbore measurements. We have investigated the applicability of ground penetrating radar (GPR) methods for estimating near-surface water content at two California vineyard study sites: the Robert Mondavi Vineyard in Napa County and the Dehlinger Vineyard in Sonoma County. Our research at these sites involves assessing the feasibility of obtaining accurate, non-invasive, and dense estimates of water content, and of changes in water content over space and time, using both groundwave and reflected GPR events. We will present the spatial and temporal estimates of water content obtained from the GPR data at both sites. We will compare our estimates with conventional measurements of water content (obtained using gravimetric, TDR, and neutron probe techniques) as well as with soil texture and plant vigor measurements. Through these comparisons, we will illustrate the potential of GPR for providing reliable and spatially dense water content estimates and the linkages between water content, soil properties, and ecosystem responses at the two study sites.
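    The groundwave route from GPR data to water content can be sketched as follows; the antenna separation and travel time are invented, and the Topp et al. (1980) petrophysical relation is one common choice rather than necessarily the one used in this work:

```python
# Travel time over a known antenna separation gives groundwave velocity,
# velocity gives apparent dielectric permittivity, and an empirical relation
# converts permittivity to volumetric water content.
c = 0.2998           # speed of light in m/ns

def water_content_from_groundwave(travel_time_ns, separation_m):
    v = separation_m / travel_time_ns          # groundwave velocity (m/ns)
    kappa = (c / v) ** 2                       # apparent dielectric permittivity
    # Topp et al. (1980) empirical relation
    return (-5.3e-2 + 2.92e-2 * kappa
            - 5.5e-4 * kappa ** 2 + 4.3e-6 * kappa ** 3)

print(water_content_from_groundwave(travel_time_ns=12.0, separation_m=1.0))
```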

  4. Cardiopulmonary Toxicity of Size-Fractionated Particulate Matter Obtained at Different Distances from a Highway

    EPA Science Inventory

    This study was initiated to determine the effect of size fractionated particulate matter (PM) obtained at different distances from a highway on acute cardiopulmonary toxicity in mice. PM was collected for 2 weeks using a three-stage (ultrafine: <0.1µm; fine: 0.1-2.5µm; and coarse...

  5. A straightforward frequency-estimation technique for GPS carrier-phase time transfer.

    PubMed

    Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen

    2006-09-01

    Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10^-15 at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10^16. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 x 10^-16.
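    The essence of the technique (fit a frequency to each batch, then average, so batch-boundary discontinuities never need to be estimated) can be sketched with synthetic data; all values are invented:

```python
# Per-batch frequency (slope of phase vs. time) is unaffected by arbitrary
# phase offsets between batches, so the offsets need never be removed.
import numpy as np

rng = np.random.default_rng(5)
batches = []
for day in range(5):                          # five 1-day batches (invented)
    t = np.arange(0, 86400, 300.0)            # seconds, 300 s spacing
    offset = rng.normal() * 1e-9              # arbitrary batch phase offset (s)
    phase = offset + 2.5e-15 * t + rng.normal(0, 5e-12, t.size)
    batches.append((t, phase))

freqs = [np.polyfit(t, phase, 1)[0] for t, phase in batches]  # slope per batch
print(f"mean fractional frequency: {np.mean(freqs):.2e}")     # ~2.5e-15
```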

  6. Using satellite image data to estimate soil moisture

    NASA Astrophysics Data System (ADS)

    Chuang, Chi-Hung; Yu, Hwa-Lung

    2017-04-01

    Soil moisture is an important parameter in various fields of study, such as hydrology, phenology, and agriculture. In hydrology, soil moisture is a significant parameter in determining how much rainfall will infiltrate into the permeable layer and become a groundwater resource. Although soil moisture plays a critical role in many environmental studies, it is usually measured with ground instruments such as electromagnetic soil moisture sensors. Ground instrumentation can obtain this information directly, but the instruments need maintenance and manpower to operate. If information is needed over a wide region, ground instrumentation is probably not suitable. To measure soil moisture over wide regions, other methods are needed. Satellite remote sensing techniques, which image the Earth's surface, can overcome the spatial restrictions of instrument measurements. In this study, we used MODIS data to retrieve daily soil moisture pattern estimates, i.e., the crop water stress index (CWSI), over the year 2015. The estimates are compared with the observations at the soil moisture stations of the Taiwan Bureau of Soil and Water Conservation. Results show that satellite remote sensing data can be helpful for soil moisture estimation. Further analysis will be required to obtain the optimal parameters for soil moisture estimation in Taiwan.
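    The CWSI itself is a simple normalized temperature index; a minimal sketch (reference temperatures are placeholders, and MODIS land-surface temperature would supply the canopy value) is:

```python
# CWSI locates the observed canopy temperature between a well-watered ("wet")
# and a non-transpiring ("dry") reference: 0 = unstressed, 1 = fully stressed.
def cwsi(t_canopy, t_wet, t_dry):
    return (t_canopy - t_wet) / (t_dry - t_wet)

print(cwsi(t_canopy=303.0, t_wet=298.0, t_dry=310.0))  # Kelvin, hypothetical
```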

  7. WHE-PAGER Project: A new initiative in estimating global building inventory and its seismic vulnerability

    USGS Publications Warehouse

    Porter, K.A.; Jaiswal, K.S.; Wald, D.J.; Greene, M.; Comartin, Craig

    2008-01-01

    The U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) Project and the Earthquake Engineering Research Institute's World Housing Encyclopedia (WHE) are creating a global database of building stocks and their earthquake vulnerability. The WHE already represents a growing, community-developed public database of global housing and its detailed structural characteristics. It currently contains more than 135 reports on particular housing types in 40 countries. The WHE-PAGER effort extends the WHE in several ways: (1) by addressing non-residential construction; (2) by quantifying the prevalence of each building type in both rural and urban areas; (3) by addressing day and night occupancy patterns; (4) by adding quantitative vulnerability estimates from judgment or statistical observation; and (5) by analytically deriving alternative vulnerability estimates using in part laboratory testing.

  8. Estimates of the initial vortex separation distance, bo, of commercial aircraft from pulsed lidar data

    DOT National Transportation Integrated Search

    2013-01-07

    An aircraft in flight generates multiple wake vortices, the largest of which are a result of : the lift on the wings. These vortices rapidly roll up into a counter-rotating vortex pair : behind the aircraft. The initial separation between the centroi...
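    For reference, under the standard elliptic-loading assumption the initial vortex separation relates to the wingspan as follows (textbook result, not quoted from this report):

```latex
% For an elliptically loaded wing of span B, the initial vortex separation is
\[
  b_0 \;=\; \frac{\pi}{4}\,B \;\approx\; 0.785\,B ,
\]
% the standard value against which lidar-derived estimates of b_0 are compared.
```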

  9. Area estimation of crops by digital analysis of Landsat data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Hixson, M. M.; Davis, B. J.

    1978-01-01

    The study for which the results are presented had these objectives: (1) to use Landsat data and computer-implemented pattern recognition to classify the major crops from regions encompassing different climates, soils, and crops; (2) to estimate crop areas for counties and states by using crop identifications obtained from the Landsat data; and (3) to evaluate the accuracy, precision, and timeliness of crop area estimates obtained from Landsat data. The paper describes the method of developing the training statistics and evaluating the classification accuracy. Landsat MSS data were adequate to accurately identify wheat in Kansas; corn and soybean estimates for Indiana were less accurate. Systematic sampling of entire counties, made possible by computer classification methods, resulted in very precise area estimates at county, district, and state levels.

  10. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-07-01

    We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples with r-band cuts between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm give mildly better results at deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via galaxy density in a field by using the photometric redshifts obtained with a non-representative training set.
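    The CDF-plus-Monte-Carlo step can be sketched by inverse-transform sampling one redshift per galaxy from its estimated PDF; the Gaussian PDFs below are synthetic stand-ins:

```python
# Draw a single photometric redshift per galaxy by inverse-transform sampling
# from that galaxy's redshift PDF, so the drawn sample reproduces the full
# estimated redshift distribution.
import numpy as np

rng = np.random.default_rng(6)
z_grid = np.linspace(0.0, 2.0, 401)
# hypothetical per-galaxy PDFs: Gaussians with varying centers/widths
centers, widths = rng.uniform(0.2, 1.5, 1000), rng.uniform(0.03, 0.15, 1000)
pdfs = np.exp(-0.5 * ((z_grid - centers[:, None]) / widths[:, None]) ** 2)
pdfs /= pdfs.sum(axis=1, keepdims=True)

cdfs = np.cumsum(pdfs, axis=1)                             # per-galaxy CDFs
u = rng.uniform(size=1000)
z_samples = z_grid[np.argmax(cdfs >= u[:, None], axis=1)]  # inverse-CDF draws
print(z_samples[:5])
```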

  11. Online Wavelet Complementary velocity Estimator.

    PubMed

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines the two velocity resolutions, acquired from numerical differentiation of the position sensor and integration of the acceleration sensor, by considering a fixed moving-horizon window as input to the wavelet filter. Because it uses wavelet filters, it can be implemented in a parallel procedure. With this method, the velocity is estimated without the high noise of numerical differentiation or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems by Direct Velocity Feedback (DVF) methods. This method allows us to build velocity sensors with fewer mechanically moving parts, which makes them suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters with respect to stability and delay, and benchmarked them by long-time integration of the velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
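    A crude sketch of the complementary idea follows: keep the low-frequency (approximation) wavelet coefficients of the differentiated position and the high-frequency (detail) coefficients of the integrated acceleration, then recombine. The moving-horizon batching and the paper's filter choices are simplified away:

```python
# Fuse two velocity estimates in the wavelet domain: differentiated position
# (noisy at high frequency) supplies the approximation coefficients, while
# integrated acceleration (drifts at low frequency) supplies the details.
import numpy as np
import pywt

def wavelet_complementary_velocity(pos, acc, dt, wavelet="db4", level=3):
    v_diff = np.gradient(pos, dt)             # noisy at high frequency
    v_int = np.cumsum(acc) * dt               # drifts at low frequency
    cd = pywt.wavedec(v_diff, wavelet, level=level)
    ci = pywt.wavedec(v_int, wavelet, level=level)
    fused = [cd[0]] + ci[1:]                  # low-pass from diff, details from int
    return pywt.waverec(fused, wavelet)[: len(pos)]

t = np.arange(0, 10, 0.001)
true_v = np.cos(2 * np.pi * 0.5 * t)
pos = np.cumsum(true_v) * 0.001 + np.random.default_rng(7).normal(0, 1e-4, t.size)
acc = np.gradient(true_v, 0.001) + 0.05       # biased accelerometer
v_est = wavelet_complementary_velocity(pos, acc, 0.001)
print(np.abs(v_est - true_v).mean())
```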

  12. Genital Herpes - Initial Visits to Physicians' Offices, United States, 1966-2012

    MedlinePlus

    Figure 48. Genital Herpes — Initial Visits to Physicians' Offices, United States, 1966-2012. NOTE: The relative standard errors for genital herpes estimates of more than 100,000 range from ...

  13. A semisupervised support vector regression method to estimate biophysical parameters from remotely sensed images

    NASA Astrophysics Data System (ADS)

    Castelletti, Davide; Demir, Begüm; Bruzzone, Lorenzo

    2014-10-01

    This paper presents a novel semisupervised learning (SSL) technique defined in the context of ɛ-insensitive support vector regression (SVR) to estimate biophysical parameters from remotely sensed images. The proposed SSL method aims to mitigate the problems of small-sized biased training sets without collecting any additional samples with reference measures. This is achieved on the basis of two consecutive steps. The first step is devoted to injecting additional prior information into the learning phase of the SVR in order to adapt the importance of each training sample according to the distribution of the unlabeled samples. To this end, a weight is initially associated with each training sample based on a novel strategy that assigns higher weights to samples located in high-density regions of the feature space, while giving reduced weights to those that fall into low-density regions. Then, in order to exploit different weights for training samples in the learning phase of the SVR, we introduce a weighted SVR (WSVR) algorithm. The second step is devoted to jointly exploiting labeled and informative unlabeled samples for further improving the definition of the WSVR learning function. To this end, the most informative unlabeled samples, whose target values are expected to be accurate, are initially selected according to a novel strategy that relies on the distribution of the unlabeled samples in the feature space and on the WSVR function estimated in the first step. Then, we introduce a restructured WSVR algorithm that jointly uses labeled and unlabeled samples in the learning phase of the WSVR algorithm and tunes their importance via different values of the regularization parameters. Experimental results obtained for the estimation of single-tree stem volume show the effectiveness of the proposed SSL method.
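    The first step (density-based weighting passed to a weighted SVR) can be sketched with scikit-learn's sample_weight mechanism; the data, kernel density bandwidth, and SVR hyperparameters are all assumptions, and the second (semisupervised) step is omitted:

```python
# Weight labeled samples by the unlabeled-data density at their location,
# then train an epsilon-insensitive SVR with those weights.
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(8)
X = rng.normal(size=(80, 5))                    # small labeled set (invented)
y = X[:, 0] * 2.0 + rng.normal(0, 0.1, 80)      # e.g., stem volume target
X_unlabeled = rng.normal(size=(2000, 5))        # plentiful unlabeled samples

kde = KernelDensity(bandwidth=0.7).fit(X_unlabeled)
weights = np.exp(kde.score_samples(X))          # density at labeled samples
weights /= weights.mean()

wsvr = SVR(kernel="rbf", C=10.0, epsilon=0.05)
wsvr.fit(X, y, sample_weight=weights)           # weighted SVR learning phase
print(wsvr.predict(X_unlabeled[:3]))
```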

  14. Estimating atmospheric parameters and reducing noise for multispectral imaging

    DOEpatents

    Conger, James Lynn

    2014-02-25

    A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band, and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image generated by the first phase, and removes noise from the surface multispectral image by smoothing out changes in the average deviations of temperatures.

  15. Comparison of in vivo vs. ex situ obtained material properties of sheep common carotid artery.

    PubMed

    Smoljkić, Marija; Verbrugghe, Peter; Larsson, Matilda; Widman, Erik; Fehervary, Heleen; D'hooge, Jan; Vander Sloten, Jos; Famaey, Nele

    2018-05-01

    Patient-specific biomechanical modelling can improve preoperative surgical planning. This requires patient-specific geometry as well as patient-specific material properties as input. The latter are, however, still quite challenging to estimate in vivo. This study focuses on the estimation of the mechanical properties of the arterial wall. Firstly, in vivo pressure, diameter and thickness of the arterial wall were acquired for sheep common carotid arteries. Next, the animals were sacrificed and the tissue was stored for mechanical testing. Planar biaxial tests were performed to obtain experimental stress-stretch curves. Finally, parameters for the hyperelastic Mooney-Rivlin and Gasser-Ogden-Holzapfel (GOH) material models were estimated based on the in vivo obtained pressure-diameter data as well as on the ex situ experimental stress-stretch curves. Both material models were able to capture the in vivo behaviour of the tissue. However, in the ex situ case only the GOH model provided satisfactory results. When comparing the different fitting approaches, in vivo vs. ex situ, each of them showed its own advantages and disadvantages. The in vivo approach estimates the properties of the tissue in its physiological state, while the ex situ approach allows different loadings to be applied to properly capture the anisotropy of the tissue. Both of them could be further enhanced by improving the estimation of the stress-free state, i.e. by adding residual circumferential stresses in vivo and by accounting for the flattening effect of the tested samples ex vivo. Copyright © 2018. Published by Elsevier Ltd.
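    For reference, the GOH strain-energy function fitted in such studies is commonly written as follows (standard form with two symmetric fiber families; the notation is assumed, not quoted from the paper):

```latex
% Gasser-Ogden-Holzapfel (GOH) strain energy with fiber families i = 4, 6;
% \mu, k_1, k_2 and the dispersion parameter \kappa are the fitted constants:
\[
  \Psi \;=\; \frac{\mu}{2}\left(\bar I_1 - 3\right)
  + \frac{k_1}{2 k_2} \sum_{i=4,6}
    \left\{ \exp\!\left[ k_2 \left( \kappa \bar I_1
      + (1 - 3\kappa)\,\bar I_i - 1 \right)^{2} \right] - 1 \right\}.
\]
```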

  16. ALGORITHM BASED ON ARTIFICIAL BEE COLONY FOR UNFOLDING OF NEUTRON SPECTRA OBTAINED WITH BONNER SPHERES.

    PubMed

    Silva, Everton R; Freitas, Bruno M; Santos, Denison S; Maurício, Cláudia L P

    2018-04-13

    Occupational neutron fields usually have energies from the thermal range to some MeV, and characterization of the spectra is essential for estimating radioprotection quantities. Thus, the spectrum must be unfolded based on a limited number of measurements. This study implemented an algorithm based on the behavior of bee colonies, named Artificial Bee Colony (ABC), in which the intelligent behavior of bees searching for food is reproduced to perform the unfolding of neutron spectra. The experimental measurements used Bonner spheres and a 6LiI(Eu) detector, with irradiations using a thermal neutron flux and three reference fields: 241Am-Be, 252Cf and 252Cf(D2O). The ABC obtained good estimates of the expected spectrum even without previous information, and its results were closer to the expected spectra than those obtained by the SPUNIT algorithm.
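    A toy sketch of ABC-based unfolding follows; the response matrix and readings are random placeholders, and the employed/onlooker/scout phases are reduced to their simplest form:

```python
# Candidate spectra ("food sources") evolve to minimize the misfit between
# Bonner-sphere readings M and the folded spectrum R @ phi.
import numpy as np

rng = np.random.default_rng(9)
n_spheres, n_bins = 7, 30
R = rng.uniform(0.0, 1.0, (n_spheres, n_bins))   # placeholder response matrix
phi_true = rng.uniform(0.0, 1.0, n_bins)         # "true" spectrum (synthetic)
M = R @ phi_true                                 # synthetic sphere readings

def misfit(phi):
    """Squared misfit between folded candidate spectrum and readings."""
    return np.sum((R @ phi - M) ** 2)

n_sources, limit = 20, 30                        # food sources, abandonment limit
sources = rng.uniform(0.0, 1.0, (n_sources, n_bins))
fit = np.array([misfit(s) for s in sources])
stale = np.zeros(n_sources, dtype=int)

for _ in range(1000):
    probs = 1.0 / (1.0 + fit)
    probs /= probs.sum()
    # employed bees visit every source; onlookers favor good sources
    visits = list(range(n_sources)) + list(rng.choice(n_sources, n_sources, p=probs))
    for i in visits:
        j, k = rng.integers(n_bins), rng.integers(n_sources)
        trial = sources[i].copy()
        step = rng.uniform(-1.0, 1.0) * (trial[j] - sources[k, j])
        trial[j] = max(trial[j] + step, 0.0)     # keep spectrum non-negative
        trial_fit = misfit(trial)
        if trial_fit < fit[i]:
            sources[i], fit[i], stale[i] = trial, trial_fit, 0
        else:
            stale[i] += 1
    # scouts replace sources that stopped improving
    for i in np.where(stale > limit)[0]:
        sources[i] = rng.uniform(0.0, 1.0, n_bins)
        fit[i], stale[i] = misfit(sources[i]), 0

print(f"best misfit: {fit.min():.3e}")
```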

  17. The effect of initial cell concentration on xylose fermentation by Pichia stipitis

    Treesearch

    Frank K. Agbogbo; Guillermo Coward-Kelly; Mads Torry-Smith; Kevin Wenger; Thomas W. Jeffries

    2007-01-01

    Xylose was fermented using Pichia stipitis CBS 6054 at different initial cell concentrations. A high initial cell concentration increased the rate of xylose utilization, the rate of ethanol formation, and the ethanol yield. The highest ethanol concentration of 41.0 g/L and a yield of 0.38 g/g were obtained using an initial cell concentration of 6.5 g/L. Even though more xylitol was...

  18. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
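
    The core construction can be sketched as follows: keep the sample eigenvectors, clip the sample eigenvalues into an interval [tau, kappa_max * tau] so the condition number cannot exceed kappa_max, and choose tau by the Gaussian likelihood. The grid search below is a simplified stand-in for the paper's exact solution path:

```python
import numpy as np

def cond_reg_cov(S, kappa_max, n_grid=500):
    """Condition-number-regularized covariance: sample eigenvectors are kept,
    eigenvalues are clipped into [tau, kappa_max * tau], and tau is chosen by
    maximizing the Gaussian likelihood (grid search, not the closed form)."""
    lam, Q = np.linalg.eigh(S)
    lam = np.clip(lam, 1e-12, None)
    best_obj, best_d = np.inf, None
    for tau in np.geomspace(lam.min(), lam.max(), n_grid):
        d = np.clip(lam, tau, kappa_max * tau)
        obj = np.sum(np.log(d) + lam / d)   # -(2/n) log-likelihood, up to constants
        if obj < best_obj:
            best_obj, best_d = obj, d
    return (Q * best_d) @ Q.T

# "Large p, small n": p = 50 variables, n = 25 observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(25, 50))
S = np.cov(X, rowvar=False)                 # singular, badly conditioned
Sigma = cond_reg_cov(S, kappa_max=10.0)
print(np.linalg.cond(Sigma))                # bounded by ~kappa_max
```

    The choice of kappa_max plays the role of the regularization level that the paper determines adaptively.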

  19. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-11-02

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is important for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in the initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value, which steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected because of the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
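
    The abstract does not give the exact form of the Weibull ratio; assuming it is a ratio of Weibull survival functions at consecutive times, the step-forward mechanic might look like the following (all parameter values invented):

```python
import numpy as np

def weibull_sf(t, shape, scale):
    # Two-parameter Weibull survival function.
    return np.exp(-((t / scale) ** shape))

def spre_forecast(y0, t0, t_future, shape, scale):
    """Step an outcome forward in time: y(t) = y(t_prev) * S(t)/S(t_prev),
    with shape/scale assumed estimated at the change point (illustrative)."""
    y, t_prev, out = y0, t0, []
    for t in t_future:
        y = y * weibull_sf(t, shape, scale) / weibull_sf(t_prev, shape, scale)
        t_prev = t
        out.append(y)
    return np.array(out)

# e.g. weekly weight (kg) forecast after a change point at week 6
print(spre_forecast(95.0, 6, range(7, 27), shape=1.2, scale=80.0))
```

    Because the ratio telescopes, each forecast depends only on the survival function at the current time relative to the change point.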

  20. Portfolios for determining initial licensure competency.

    PubMed

    Chambers, David W

    2004-02-01

    Because attempts to improve initial licensure examinations have not been grounded in measurement theory, partial and inadequate remedies have led to a cycle of refutations, defenses and political polarization. The author reviewed the psychometric literature, focusing on high-stakes professional decisions. Editorials in the dental literature and position papers of involved organizations often use words from this literature without incorporating its fundamental concepts. The reliability of one-shot initial licensure examinations is estimated to be approximately r = .40, a value well under the standard for such tests in other professions. Validity has not been investigated rigorously, but the one-shot format and proposals to remove live patients would certainly reduce validity. The use of portfolios (a small number of evaluations in several realistic task domains) is a viable means of achieving psychometric standards for initial licensure decisions. Boards are charged with making valid and reliable licensure decisions, not with conducting examinations. At a minimum, they must define the competencies of beginning practitioners and establish the psychometric criteria for their decisions (neither of which is done currently). Gathering data can then be delegated to whoever is best qualified to meet these standards.

  1. Satellite Angular Rate Estimation From Vector Measurements

    NASA Technical Reports Server (NTRS)

    Azor, Ruth; Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    1996-01-01

    This paper presents an algorithm for estimating the angular rate vector of a satellite from the time derivatives of vector measurements expressed in reference and body coordinates. The computed derivatives are fed into a special Kalman filter which yields an estimate of the spacecraft angular velocity. The filter, named the Extended Interlaced Kalman Filter (EIKF), is an extension of the Kalman filter which, although linear, estimates the state of a nonlinear dynamic system. It consists of two or three parallel Kalman filters whose individual estimates are fed to one another and are treated as known inputs by the other parallel filter(s). The nonlinear dynamics stem from the nonlinear differential equation that describes the rotation of a three-dimensional body. Initial results, using simulated data and real Rossi X-ray Timing Explorer (RXTE) data, indicate that the algorithm is efficient and robust.
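
    The sketch below is not the EIKF itself, but the deterministic relation such filters exploit: for a reference vector that is constant in the inertial frame, its body-frame derivative satisfies b_dot = b x omega, so stacking two or more measured vectors gives a least-squares estimate of the angular rate:

```python
import numpy as np

def skew(v):
    # Cross-product (skew-symmetric) matrix: skew(b) @ w == b x w.
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_omega(b_list, bdot_list):
    """Least-squares angular rate from body-frame vectors and their time
    derivatives, using b_dot = b x omega for vectors fixed in the reference
    frame. Needs at least two non-collinear vectors."""
    A = np.vstack([skew(b) for b in b_list])
    y = np.concatenate(bdot_list)
    omega, *_ = np.linalg.lstsq(A, y, rcond=None)
    return omega

# Check against a known rate.
omega_true = np.array([0.01, -0.02, 0.005])            # rad/s
b1, b2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
bdots = [np.cross(b, omega_true) for b in (b1, b2)]
print(estimate_omega([b1, b2], bdots))                 # ~ omega_true
```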

  2. Investigation of the SCS-CN initial abstraction ratio using a Monte Carlo simulation for the derived flood frequency curves

    NASA Astrophysics Data System (ADS)

    Caporali, E.; Chiarello, V.; Galeati, G.

    2014-12-01

    Peak discharge estimates for a given return period are of primary importance in engineering practice for risk assessment and hydraulic structure design. Different statistical methods are compared here for the assessment of the flood frequency curve: an indirect technique based on extreme rainfall event analysis, and the Peak Over Threshold (POT) model and the Annual Maxima approach as direct techniques using river discharge data. In the framework of the indirect method, a Monte Carlo simulation approach is adopted to derive a frequency distribution of peak runoff, using a probabilistic formulation of the SCS-CN method as the stochastic rainfall-runoff model. The Monte Carlo simulation generates a sample of runoff events from stochastic combinations of rainfall depth, storm duration, and initial loss inputs. The distribution of rainfall storm events is assumed to follow the generalized Pareto (GP) law, whose parameters are estimated from the GEV parameters of the annual maxima data. The evaluation of the initial abstraction ratio is investigated, since it is one of the most questionable assumptions in the SCS-CN model and plays a key role in river basins characterized by high-permeability soils, mainly governed by the infiltration-excess mechanism. In order to take into account the uncertainty of the model parameters, a modified approach that is able to revise and re-evaluate the original value of the initial abstraction ratio is implemented. In the POT model the choice of the threshold is an essential issue, based mainly on a compromise between bias and variance. The Generalized Extreme Value (GEV) distribution fitted to the annual maxima discharges is therefore compared with the Pareto-distributed peaks to check the suitability of the frequency-of-occurrence representation. The methodology is applied to a large dam in the Serchio river basin, located in the Tuscany Region. The application has shown how the Monte Carlo simulation technique can be a useful
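
    A toy version of the derived-distribution idea can be sketched directly: sample storm depths from a generalized Pareto law, pass them through the SCS-CN relation with an uncertain initial abstraction ratio, and read design quantiles off the simulated runoff sample. All parameter values below are invented:

```python
import numpy as np
from scipy.stats import genpareto

def scs_cn_runoff(P, CN, lam):
    """SCS-CN event runoff (mm): Q = (P - Ia)^2 / (P - Ia + S), with
    S = 25400/CN - 254 (mm) and initial abstraction Ia = lam * S."""
    S = 25400.0 / CN - 254.0
    Pe = np.maximum(P - lam * S, 0.0)
    return np.where(Pe > 0.0, Pe ** 2 / (Pe + S), 0.0)

rng = np.random.default_rng(7)
n = 100_000
P = genpareto.rvs(c=0.1, loc=20.0, scale=30.0, size=n, random_state=rng)  # storm depth, mm
lam = rng.uniform(0.05, 0.20, size=n)      # uncertain initial abstraction ratio
Q = scs_cn_runoff(P, CN=75.0, lam=lam)
print(np.quantile(Q, 0.99))                # a toy design quantile of event runoff
```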

  3. Students' Personal Initiative towards Their Speaking Performance

    ERIC Educational Resources Information Center

    Liando, Nihta V. F.; Lumettu, Raesita

    2017-01-01

    This research aims at finding out students' personal initiative towards their achievement in speaking English. This research was conducted in an English department at a university in North Sulawesi Indonesia. The data were obtained from the sixth semester students in English Language and Literature study program of academic year 2015/2016…

  4. An initial model for estimating soybean development stages from spectral data

    NASA Technical Reports Server (NTRS)

    Henderson, K. E.; Badhwar, G. D.

    1982-01-01

    A model utilizing a direct relationship between remotely sensed spectral data and soybean development stage has been proposed. The model is based upon transforming the spectral data in Landsat bands to greenness values over time and relating the area under this curve to soybean development stage. Soybean development stages were estimated from data acquired in 1978 from research plots at the Purdue University Agronomy Farm, as well as Landsat data acquired over sample areas of the U.S. Corn Belt in 1978 and 1979. Analysis of spectral data from research plots revealed that the model works well under reasonable variation in planting date, row spacing, and soil background. The R-squared of calculated vs. observed development stage exceeded 0.91 for all treatment variables. Using Landsat data, the calculated vs. observed development stage gave an R-squared of 0.89 in 1978 and 0.87 in 1979. No difference in the model's performance could be detected between early and late planted fields, small and large fields, or high and low yielding fields.

  5. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    NASA Astrophysics Data System (ADS)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance (and therefore the random error) associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) the analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum, and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum; (2) ogive and spectrum estimates with reduced bias are calculated; (3) the bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies; (4) the ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies; (6) power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive; (7) the smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
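
    A minimal numpy sketch of the two basic objects: a periodogram averaged over blocks of contiguous frequencies, and the ogive formed as the cumulative integral of the spectrum from the highest frequency downward (a common convention in this literature):

```python
import numpy as np

def periodogram(x, fs):
    # One-sided periodogram density estimate; the f = 0 (mean) bin is dropped.
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    S = (np.abs(X) ** 2) / (fs * n)
    S[1:-1] *= 2.0                       # double all bins except DC and Nyquist
    return f[1:], S[1:]

def smooth_spectrum(f, S, width=8):
    """Average over blocks of `width` contiguous frequencies; the variance of
    each smoothed estimate drops roughly by a factor 1/width."""
    m = (len(f) // width) * width
    fb = f[:m].reshape(-1, width).mean(axis=1)
    Sb = S[:m].reshape(-1, width).mean(axis=1)
    return fb, Sb

fs = 20.0                                # Hz, e.g. sonic-anemometer data
t = np.arange(2 ** 14) / fs
x = np.sin(2 * np.pi * 0.5 * t) + np.random.default_rng(3).normal(0.0, 1.0, t.size)
f, S = periodogram(x, fs)
fb, Sb = smooth_spectrum(f, S)
ogive = np.cumsum((Sb * np.gradient(fb))[::-1])[::-1]   # integral from f to f_max
```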

  6. Population estimates of Nearctic shorebirds

    USGS Publications Warehouse

    Morrison, R.I.G.; Gill, Robert E.; Harrington, B.A.; Skagen, S.K.; Page, G.W.; Gratto-Trevor, C. L.; Haig, S.M.

    2000-01-01

    Estimates are presented for the population sizes of 53 species of Nearctic shorebirds occurring regularly in North America, plus four species that breed occasionally. Shorebird population sizes were derived from data obtained by a variety of methods from breeding, migration and wintering areas, and formal assessments of accuracy of counts or estimates are rarely available. Accurate estimates exist only for a few species that have been the subject of detailed investigation, and the likely accuracy of most estimates is considered poor or low. Population estimates range from a few tens to several millions. Overall, population estimates most commonly fell in the range of hundreds of thousands, particularly the low hundreds of thousands; estimated population sizes for large shorebird species currently all fall below 500,000. Population size was inversely related to size (mass) of the species, with a statistically significant negative regression between log (population size) and log (mass). Two outlying groups were evident on the regression graph: one, with populations lower than predicted, included species considered either to be "at risk" or particularly hard to count, and a second, with populations higher than predicted, included two species that are hunted. Population estimates are an integral part of conservation plans being developed for shorebirds in the United States and Canada, and may be used to identify areas of key international and regional importance.

  7. A History of Ashes: An 80 Year Comparative Portrait of Smoking Initiation in American Indians and Non-Hispanic Whites—the Strong Heart Study

    PubMed Central

    Orr, Raymond; Calhoun, Darren; Noonan, Carolyn; Whitener, Ron; Henderson, Jeff; Goldberg, Jack; Henderson, Patrica Nez

    2013-01-01

    The consequences of starting smoking by age 18 are significant. Early smoking initiation is associated with higher tobacco dependence, increased difficulty in smoking cessation and more negative health outcomes. The purpose of this study is to examine how closely smoking initiation in a well-defined population of American Indians (AI) resembles that in Non-Hispanic whites (NHW) born over an 80-year period. We obtained data on age of smoking initiation among 7,073 AIs who were members of 13 tribes in Arizona, Oklahoma and North and South Dakota from the 1988 Strong Heart Study (SHS) and the 2001 Strong Heart Family Study (SHFS), and 19,747 NHW participants in the 2003 National Health Interview Survey. The participants were born as early as 1904 and as late as 1985. We classified participants according to birth cohort by decade, sex, and, for AIs, location. We estimated the cumulative incidence of smoking initiation by age 18 in each sex and birth cohort group in both AIs and NHWs and used Cox regression to estimate hazard ratios for the association of birth cohort, sex and region with the age at smoking initiation. We found that the cumulative incidence of smoking initiation by age 18 was higher in males than females in all SHS regions and in NHWs (p < 0.001). Our results show significant regional variation in age of initiation in the SHS (p < 0.001). Our data showed that not all AIs in this sample showed similar trends toward earlier smoking initiation. For instance, Oklahoma SHS male participants born in the 1980s initiated smoking before age 18 less often than those born before 1920, by a ratio of 0.7. The results showed significant variation in age of initiation across sex, birth cohort, and location. Our preliminary analyses suggest that AI smoking trends are not uniform across region or gender but are likely shaped by local context. If tobacco prevention and control programs depend in part on addressing the origin of AI smoking it may

  8. Growth and yield predictions for upland oak stands: 10 years after initial thinning

    Treesearch

    Martin E. Dale

    1972-01-01

    The purpose of this paper is to furnish part of the needed information, that is, quantitative estimates of growth and yield 10 years after initial thinning of upland oak stands. All estimates are computed from a system of equations. These predictions are presented here in tabular form for convenient visual inspection of growth and yield trends. The tables show growth...

  9. Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.

    PubMed

    Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang

    2018-02-24

    This paper proposes a nonlinear correlation-based wavelet scale selection technique to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during a force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that scale combination VI is suitable for estimating force from the extensors and combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experiment results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force estimates derived by the proposed method are better than those obtained by the former methods.

  10. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts followed a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype followed a Poisson distribution well. Furthermore, to examine the validity of the Poisson parameters obtained from the experimental bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using the computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
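
    The core check can be sketched as a likelihood-ratio (G) test of binned counts against a Poisson pmf with its MLE parameter; the toy data below are invented and far smaller than the study's samples:

```python
import numpy as np
from scipy import stats

counts = np.array([0, 1, 2, 1, 0, 2, 1, 1, 3, 0, 1, 2, 1, 0, 1, 2, 2, 1, 0, 1])
lam = counts.mean()                      # MLE of the Poisson parameter

# Observed and expected frequencies per count category (0, 1, 2, >=3).
ks = np.arange(3)
obs = np.array([(counts == k).sum() for k in ks] + [(counts >= 3).sum()])
pmf = stats.poisson.pmf(ks, lam)
exp = len(counts) * np.append(pmf, 1.0 - pmf.sum())

# G-statistic (likelihood ratio); df = categories - 1 - 1 estimated parameter.
# With expected counts this small the chi-square approximation is rough; the
# real experiment would use far more replicates.
mask = obs > 0
G = 2.0 * np.sum(obs[mask] * np.log(obs[mask] / exp[mask]))
p = stats.chi2.sf(G, df=len(obs) - 2)
print(f"G = {G:.2f}, p = {p:.3f}")       # large p: consistent with Poisson
```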

  11. Precipitation estimation in mountainous terrain using multivariate geostatistics. Part II: isohyetal maps

    USGS Publications Warehouse

    Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.

    1992-01-01

    Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances

  12. Quantifying human decomposition in an indoor setting and implications for postmortem interval estimation.

    PubMed

    Ceciliason, Ann-Sofie; Andersson, M Gunnar; Lindström, Anders; Sandler, Håkan

    2018-02-01

    This study's objective is to assess the accuracy and precision of postmortem interval (PMI) estimation for decomposing human remains discovered in indoor settings. Data were collected prospectively from 140 forensic cases with a known date of death, scored according to the Total Body Score (TBS) scale at the post-mortem examination. In our model setting, it is estimated that, in cases with or without the presence of blowfly larvae, approximately 45% or 66%, respectively, of the variance in TBS can be derived from Accumulated Degree-Days (ADD). The precision in estimating ADD/PMI from TBS is, in our setting, moderate to low. However, dividing the cases into defined subgroups suggests the possibility of increasing the precision of the model. Our findings also suggest a significant seasonal difference, with concomitant influence on TBS in the complete data set, possibly initiated by the presence of insect activity mainly during summer. PMI may be underestimated in cases with desiccation. Likewise, there is a need to evaluate the effect of insect activity, to avoid overestimating the PMI. Our data sample indicates that the scoring method might need to be slightly modified to better reflect indoor decomposition, especially in cases with insect infestations and/or extensive desiccation. When applying TBS in an indoor setting, the model requires distinct inclusion criteria and a defined population. Copyright © 2017 Elsevier B.V. All rights reserved.
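
    If a linear TBS-ADD calibration is assumed (the study's actual model is more elaborate), the inversion used to read ADD, and hence PMI, from a scored case is straightforward; the calibration numbers below are hypothetical:

```python
import numpy as np

# Hypothetical calibration cases: fit TBS = a + b * ADD on known-PMI cases.
ADD_known = np.array([50.0, 120.0, 200.0, 350.0, 500.0, 800.0])
TBS_known = np.array([8.0, 12.0, 15.0, 19.0, 22.0, 26.0])
b, a = np.polyfit(ADD_known, TBS_known, 1)   # slope, intercept

def estimate_add(tbs):
    """Invert the calibration line to get ADD from a scored TBS."""
    return (tbs - a) / b

# Degree-days; dividing by the mean daily temperature gives a PMI in days.
print(estimate_add(18.0))
```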

  13. Preoperative TRAM free flap volume estimation for breast reconstruction in lean patients.

    PubMed

    Minn, Kyung Won; Hong, Ki Yong; Lee, Sang Woo

    2010-04-01

    To obtain pleasing symmetry in breast reconstruction with a transverse rectus abdominis myocutaneous (TRAM) free flap, a large amount of abdominal flap is elevated and remnant tissue is trimmed in most cases. However, elevation of an abundant abdominal flap can cause excessive tension in donor site closure and increase the possibility of hypertrophic scarring, especially in lean patients. The TRAM flap was divided into 4 zones in the routine manner; the depth and dimensions of the 4 zones were obtained using ultrasound and AutoCAD (Autodesk Inc., San Rafael, CA), respectively. The acquired numbers were then multiplied to obtain a volume estimate for each zone, and the zone volumes were added. To confirm the relation between the estimated volume and the actual volume, the authors compared intraoperative actual TRAM flap volumes with preoperative estimated volumes in 30 consecutive TRAM free flap breast reconstructions. The estimated volumes and the actual elevated flap volumes were found to be correlated by regression analysis (r = 0.9258, P < 0.01). According to this result, we could confirm the reliability of the preoperative volume estimation using our method. Afterward, the authors applied this method to 7 lean patients by estimating and revising the design, and obtained symmetric results with minimal donor site morbidity. Preoperative estimation of TRAM flap volume with ultrasound and AutoCAD (Autodesk Inc.) allows the authors to attain the precise volume desired for elevation. This method provides advantages in terms of minimal flap trimming, easier closure of donor sites, reduced scar widening, and symmetry, especially in lean patients.
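
    The volume arithmetic itself is simple: per-zone area from the CAD outline times the ultrasound-measured thickness, summed over the four zones. A sketch with invented measurements:

```python
# Hypothetical per-zone measurements: area from the AutoCAD outline (cm^2)
# and mean flap thickness from ultrasound (cm).
zones = {
    "zone I":   (110.0, 2.8),
    "zone II":  (95.0, 2.5),
    "zone III": (90.0, 2.3),
    "zone IV":  (85.0, 2.0),
}
estimated_volume = sum(area * depth for area, depth in zones.values())
print(f"estimated TRAM flap volume: {estimated_volume:.0f} cm^3")
```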

  14. The Least-Squares Estimation of Latent Trait Variables.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi

    This paper presents a new method for estimating a given latent trait variable by the least-squares approach. The beta weights are obtained recursively with the help of Fourier series and expressed as functions of the item parameters of the response curves. The values of the latent trait variable estimated by this method and by the maximum likelihood method…

  15. Estimating Forest Canopy Heights and Aboveground Biomass with Simulated ICESat-2 Data

    NASA Astrophysics Data System (ADS)

    Malambo, L.; Narine, L.; Popescu, S. C.; Neuenschwander, A. L.; Sheridan, R.

    2016-12-01

    The Ice, Cloud and Land Elevation Satellite (ICESat) 2 is scheduled for launch in 2017, and one of its overall science objectives will be to measure vegetation heights, which can be used to estimate and monitor aboveground biomass (AGB) over large spatial scales. This study develops a methodology for utilizing vegetation data collected by ICESat-2, which will fly a five-year mission from 2017, for mapping forest canopy heights and estimating aboveground forest biomass (AGB). The specific objectives are to (1) simulate ICESat-2 photon-counting lidar (PCL) data, (2) utilize simulated PCL data to estimate forest canopy heights and propose a methodology for upscaling PCL height measurements to obtain spatially contiguous coverage, and (3) estimate and map AGB using simulated PCL data. The laser pulse from ICESat-2 will be divided into three pairs of beams spaced approximately 3 km apart, with footprints measuring approximately 14 m in diameter at 70 cm along-track intervals. Using existing airborne lidar (ALS) data for Sam Houston National Forest (SHNF) and known ICESat-2 beam locations, footprints are generated along beam locations and PCL data are then simulated from discrete-return lidar points within each footprint. By applying data processing algorithms, photons are classified into top-of-canopy points and ground surface elevation points to yield tree canopy height values within each ICESat-2 footprint. AGB is then estimated using simple linear regression that relates AGB from a biomass map generated with ALS data for SHNF to simulated PCL height metrics for 100 m segments along ICESat-2 tracks. Two approaches investigated for upscaling AGB estimates to provide wall-to-wall coverage of AGB are (1) co-kriging and (2) Random Forest. Height and AGB maps, the outcomes of this study, will demonstrate how data acquired by ICESat-2 can be used to measure forest parameters and, by extension, estimate forest carbon for climate change
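
    The PCL simulation step can be sketched as thinning a dense ALS cloud to photon-counting densities: each footprint along the track keeps only a small Poisson number of randomly selected returns. The cloud, footprint geometry and rates below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ALS point cloud: along-track position x (m) and height z (m).
als_x = rng.uniform(0.0, 1000.0, 50_000)
als_z = np.clip(rng.normal(15.0, 8.0, als_x.size), 0.0, None)

def simulate_pcl(x, z, spacing=0.7, radius=7.0, mean_photons=2.0):
    """Thin a discrete-return cloud to photon-counting densities: each ~14 m
    footprint (centers every 70 cm along track) keeps a Poisson number of
    randomly chosen returns."""
    keep_x, keep_z = [], []
    for cx in np.arange(radius, x.max() - radius, spacing):
        idx = np.where(np.abs(x - cx) <= radius)[0]
        n = min(rng.poisson(mean_photons), idx.size)
        take = rng.choice(idx, size=n, replace=False)
        keep_x.append(x[take])
        keep_z.append(z[take])
    return np.concatenate(keep_x), np.concatenate(keep_z)

px, pz = simulate_pcl(als_x, als_z)
# A simple canopy-height metric per 100 m segment (98th height percentile):
edges = np.arange(0.0, 1100.0, 100.0)
h98 = [np.percentile(pz[(px >= a) & (px < b)], 98)
       for a, b in zip(edges[:-1], edges[1:])]
```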

  16. Initiator system in holographic photopolymer materials

    NASA Astrophysics Data System (ADS)

    Ortuño, M.; Fernández, E.; Fuentes, R.; Gallego, S.; Márquez, A.

    2010-05-01

    Photopolymers with a hydrophilic matrix such as poly(vinyl alcohol), PVA, are versatile holographic recording materials for hologram recording experiments. They use water as solvent and can be made in layers of various thicknesses. One of the most widely used photopolymers is composed of acrylamide as the polymerizable monomer, with PVA and water as the binder. The pair triethanolamine (TEA) and the dye yellowish eosin (YE) is widely used as an initiator system due to its high sensitivity and efficiency. TEA is the radical initiator most often used with dyes derived from fluorescein, such as YE, because they can generate a radical by a redox reaction under dye excitation by light. The dye is bleached in this process because it is decomposed in the photoinitiation reaction. Ethylenediaminetetraacetic acid (EDTA) has a molecular structure very similar to that of TEA and could therefore replace it in this kind of photopolymer. 4,4'-azo-bis-(4-cyanopentanoic acid), ACPA, is a radical initiator that is soluble in water and usually used in solution polymerizations with thermal initiation. In this work, we use EDTA and ACPA in order to check their properties as radical initiators in the photochemical reaction that takes place inside the photopolymer while a hologram is being recorded. We compare the results with those obtained with TEA and evaluate the potential of these substances.

  17. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation is thus obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear in this case. The class of beta density functions is rather rich, which explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
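
    The linearity result can be seen in the conjugate beta-binomial update: with a Beta(a, b) prior on the rate and binomially distributed jump counts, the posterior-mean (MMSE) estimate is a linear function of the observed counts. A short sketch with invented parameters:

```python
import numpy as np

def mmse_rate(a, b, jumps, trials):
    """Posterior mean of a beta-distributed rate after binomial observations;
    note that it is linear in the observed jump counts."""
    return (a + np.sum(jumps)) / (a + b + np.sum(trials))

rng = np.random.default_rng(5)
p_true = rng.beta(2.0, 8.0)
trials = np.full(50, 10)                 # 10 Bernoulli opportunities per step
jumps = rng.binomial(trials, p_true)     # observed jump counts
print(p_true, mmse_rate(2.0, 8.0, jumps, trials))
```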

  18. A model for the prediction of latent errors using data obtained during the development process

    NASA Technical Reports Server (NTRS)

    Gaffney, J. E., Jr.; Martello, S. J.

    1984-01-01

    A model, implemented in a program that runs on the IBM PC, for estimating the latent (or post-ship) error content of a body of software upon its initial release to the user is presented. The model takes the count of errors discovered in one or more of the error discovery activities during development, such as a design inspection, as the input data for a process which provides estimates of the total lifetime (injected) error content and of the latent (or post-ship) error content, i.e., the errors remaining at delivery. The model presumes that these activities cover all of the opportunities during the software development process for error discovery (and removal).
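
    One simple way such a model can be set up (not necessarily the authors' exact formulation) is to assume each discovery activity finds a known fraction of the errors still present; the observed counts then determine the injected total, and the latent content is whatever survives all phases:

```python
def latent_errors(found_per_phase, efficiencies):
    """Estimate total injected and latent errors, assuming phase i finds a
    fraction efficiencies[i] of the errors still present (illustrative model,
    not necessarily Gaffney & Martello's exact formulation)."""
    remaining_frac = 1.0
    expected_found_frac = 0.0
    for e in efficiencies:
        expected_found_frac += remaining_frac * e
        remaining_frac *= 1.0 - e
    injected = sum(found_per_phase) / expected_found_frac
    return injected, injected * remaining_frac

# e.g. design inspection, code inspection, unit test with assumed efficiencies
inj, latent = latent_errors([120, 90, 45], [0.35, 0.40, 0.45])
print(f"injected ~ {inj:.0f}, latent at ship ~ {latent:.0f}")
```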

  19. Verification of unfold error estimates in the unfold operator code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
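
    The verification loop is easy to reproduce for any unfold: perturb the data with the assumed 5% Gaussian imprecision, rerun the unfold, and compare the empirical spread with the built-in estimate. Below, a ridge-regularized least-squares unfold stands in for the UFO algorithm, and the responses and spectrum are invented:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy problem: overlapping Gaussian response functions R and a smooth source.
E = np.linspace(1.0, 30.0, 40)                       # keV energy grid
centers = np.linspace(3.0, 27.0, 8)
R = np.exp(-0.5 * ((E[None, :] - centers[:, None]) / 4.0) ** 2)
source = E ** 2 * np.exp(-E / 10.0)                  # stand-in for a blackbody
data0 = R @ source

def unfold(data, alpha=1e-3):
    # Ridge-regularized least squares as a stand-in for the UFO algorithm.
    A = R.T @ R + alpha * np.eye(R.shape[1])
    return np.linalg.solve(A, R.T @ data)

# 100 data sets with 5% (1 sigma) Gaussian imprecision, as in the test problem.
samples = np.array([unfold(data0 * (1 + 0.05 * rng.normal(size=data0.size)))
                    for _ in range(100)])
mc_sigma = samples.std(axis=0)                       # Monte Carlo unfold error
```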

  20. Uncertainties and Systematic Effects on the estimate of stellar masses in high z galaxies

    NASA Astrophysics Data System (ADS)

    Salimbeni, S.; Fontana, A.; Giallongo, E.; Grazian, A.; Menci, N.; Pentericci, L.; Santini, P.

    2009-05-01

    We discuss the uncertainties and systematic effects that exist in estimates of the stellar masses of high redshift galaxies obtained from broad band photometry, and how they affect the deduced galaxy stellar mass function. We use for this purpose the latest version of the GOODS-MUSIC catalog. In particular, we discuss the impact of different synthetic models, of the assumed initial mass function and of the selection band. Using Charlot & Bruzual 2007 and Maraston 2005 models we find masses lower than those obtained from Bruzual & Charlot 2003 models. In addition, we find a slight trend as a function of the mass itself when comparing these two mass determinations with that from Bruzual & Charlot 2003 models. As a consequence, the derived galaxy stellar mass functions show diverse shapes, and their slope depends on the assumed models. Despite these differences, the same overall scenario is observed in all these cases. The masses obtained with the assumption of the Chabrier initial mass function are on average 0.24 dex lower than those from the Salpeter assumption, at all redshifts, causing a shift of the galaxy stellar mass function by the same amount. Finally, using a 4.5 μm-selected sample instead of a Ks-selected one, we add a new population of highly absorbed, dusty galaxies at z ≈ 2-3 of relatively low masses, yielding stronger constraints on the slope of the galaxy stellar mass function at lower masses.
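
    The quoted 0.24 dex offset is simply a constant multiplicative rescaling of the masses:

```python
# A 0.24 dex offset in log10(mass) corresponds to a multiplicative factor:
factor = 10 ** 0.24
print(f"M(Salpeter) / M(Chabrier) ~ {factor:.2f}")   # ~1.74
```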