Reevaluation of mid-Pliocene North Atlantic sea surface temperatures
Robinson, Marci M.; Dowsett, Harry J.; Dwyer, Gary S.; Lawrence, Kira T.
2008-01-01
Multiproxy temperature estimation requires careful attention to biological, chemical, physical, temporal, and calibration differences of each proxy and paleothermometry method. We evaluated mid-Pliocene sea surface temperature (SST) estimates from multiple proxies at Deep Sea Drilling Project Holes 552A, 609B, 607, and 606, transecting the North Atlantic Drift. SST estimates derived from faunal assemblages, foraminifer Mg/Ca, and alkenone unsaturation indices showed strong agreement at Holes 552A, 607, and 606 once differences in calibration, depth, and seasonality were addressed. Abundant extinct species and/or an unrecognized productivity signal in the faunal assemblage at Hole 609B resulted in exaggerated faunal-based SST estimates but did not affect alkenone-derived or Mg/Ca–derived estimates. Multiproxy mid-Pliocene North Atlantic SST estimates corroborate previous studies documenting high-latitude mid-Pliocene warmth and refine previous faunal-based estimates affected by environmental factors other than temperature. Multiproxy investigations will aid SST estimation in high-latitude areas sensitive to climate change and currently underrepresented in SST reconstructions.
ELEMENT MASSES IN THE CRAB NEBULA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibley, Adam R.; Katz, Andrea M.; Satterfield, Timothy J.
Using our previously published element abundance or mass-fraction distributions in the Crab Nebula, we derived actual mass distributions and estimates for overall nebular masses of hydrogen, helium, carbon, nitrogen, oxygen and sulfur. As with the previous work, computations were carried out for photoionization models involving constant hydrogen density and also constant nuclear density. In addition, employing new flux measurements for [Ni ii] λ 7378, along with combined photoionization models and analytic computations, a nickel abundance distribution was mapped and a nebular stable nickel mass estimate was derived.
Limits on estimating the width of thin tubular structures in 3D images.
Wörz, Stefan; Rohr, Karl
2006-01-01
This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, by experimental investigations we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
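As a rough illustration of the kind of lower bound discussed above, the sketch below computes a Cramér-Rao bound for a width parameter numerically, using a toy 1D Gaussian cross-section profile and additive Gaussian noise; the profile, noise model, and all parameter values are placeholder assumptions, not the paper's 3D intensity model or its closed-form result.

```python
import numpy as np

# Toy 1D cross-section model: a tube of half-width R blurred by a Gaussian PSF,
# approximated here by a Gaussian bump whose spread grows with R.
# (Hypothetical stand-in for the paper's 3D tube intensity model.)
def profile(x, R, psf=1.0, contrast=100.0):
    return contrast * np.exp(-0.5 * x**2 / (R**2 + psf**2))

def crb_width(R, noise_sd, psf=1.0, contrast=100.0, dx=0.25, half_window=20.0):
    """Cramér-Rao lower bound on the variance of a width estimate,
    computed from a numerical derivative of the model w.r.t. R."""
    x = np.arange(-half_window, half_window + dx, dx)
    eps = 1e-4
    dg_dR = (profile(x, R + eps, psf, contrast) - profile(x, R - eps, psf, contrast)) / (2 * eps)
    fisher = np.sum(dg_dR**2) / noise_sd**2   # i.i.d. additive Gaussian noise
    return 1.0 / fisher                        # variance bound; sqrt gives a std. dev.

for R in (0.5, 1.0, 2.0, 4.0):
    print(f"R = {R:4.1f} voxels -> CRB sd ≈ {np.sqrt(crb_width(R, noise_sd=5.0)):.4f}")
```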
Fincel, Mark J.; James, Daniel A.; Chipps, Steven R.; Davis, Blake A.
2014-01-01
Diet studies have traditionally been used to determine prey use and food web dynamics, while stable isotope analysis provides for a time-integrated approach to evaluate food web dynamics and characterize energy flow in aquatic systems. Direct comparison of the two techniques is rare and difficult to conduct in large, species rich systems. We compared changes in walleye Sander vitreus trophic position (TP) derived from paired diet content and stable isotope analysis. Individual diet-derived TP estimates were dissimilar to stable isotope-derived TP estimates. However, cumulative diet-derived TP estimates integrated from May 2001 to May 2002 corresponded to May 2002 isotope-derived estimates of TP. Average walleye TP estimates from the spring season appear representative of feeding throughout the entire previous year.
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
This fourth monthly progress report again contains corrections and additions to the previously submitted reports. The additions include a simplified SRB model that is directly incorporated into the estimation algorithm and provides the required partial derivatives. The resulting partial derivatives are analytical rather than numerical as would be the case using the SOBER routines. The filter and smoother routine developments have continued. These routines are being checked out.
ERIC Educational Resources Information Center
Bifulco, Robert
2012-01-01
The ability of nonexperimental estimators to match impact estimates derived from random assignment is examined using data from the evaluation of two interdistrict magnet schools. As in previous within-study comparisons, nonexperimental estimates differ from estimates based on random assignment when nonexperimental estimators are implemented…
Re-assessment of the mass balance of the Abbot and Getz sectors of West Antarctica
NASA Astrophysics Data System (ADS)
Chuter, S.; Bamber, J. L.
2016-12-01
Large discrepancies exist in mass balance estimates for the Getz and Abbot drainage basins, primarily due to previous poor knowledge of ice thickness at the grounding line, poor coverage by previous altimetry missions and signal leakage issues for GRACE. Large errors arise when using ice thickness measurements derived from ERS-1 and/or ICESat altimetry data due to poor track spacing, 'loss of lock' issues near the grounding line and the complex morphology of these shelves, requiring fine resolution to derive robust and accurate elevations close to the grounding line. However, the advent of CryoSat-2 with its unique orbit and SARIn mode of operation has overcome these issues and enabled the determination of ice shelf thickness at a much higher accuracy than possible from previous satellites, particularly within the grounding zone. Here we present a contemporary estimate of ice sheet mass balance for both the Getz and Abbot drainage basins. This is achieved through the use of contemporary velocity data derived from Landsat feature tracking and the use of CryoSat-2 derived ice thickness measurements. Additionally, we use this new ice thickness dataset to reassess mass balance estimates from 2008/2009, where there were large disparities between results from radar altimetry and Input-Output methodologies over the Abbot region in particular. These contemporary results are compared with other present-day estimates from gravimetry and altimetry elevation changes.
U.S. Coast Guard Pollution Abatement Program : Cutter Estimated Exhaust Emissions.
DOT National Transportation Integrated Search
1975-09-01
The gaseous and particulate emissions of the Coast Guard cutter fleet are estimated by using measured emission factors and derived operational duty cycles. These data are compared to previous estimates by using emission factors found in the literature...
NASA Astrophysics Data System (ADS)
Chen, Shimon; Bekhor, Shlomo; Yuval; Broday, David M.
2016-10-01
Most air quality models use traffic-related variables as an input. Previous studies estimated nearby vehicular activity through sporadic traffic counts or via traffic assignment models. Both methods have previously produced poor or no data for nights, weekends and holidays. Emerging technologies allow the estimation of traffic through passive monitoring of location-aware devices. Examples of such devices are GPS transceivers installed in vehicles. In this work, we studied traffic volumes that were derived from such data. Additionally, we used these data for estimating ambient nitrogen dioxide concentrations, using a non-linear optimisation model that includes basic dispersion properties. The GPS-derived data show great potential for use as a proxy for pollutant emissions from motor-vehicles.
Practical aspects of modeling aircraft dynamics from flight data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1984-01-01
The purpose of parameter estimation, a subset of system identification, is to estimate the coefficients (such as stability and control derivatives) of the aircraft differential equations of motion from sampled measured dynamic responses. In the past, the primary reason for estimating stability and control derivatives from flight tests was to make comparisons with wind tunnel estimates. As aircraft became more complex, and as flight envelopes were expanded to include flight regimes that were not well understood, new requirements for the derivative estimates evolved. For many years, the flight determined derivatives were used in simulations to aid in flight planning and in pilot training. The simulations were particularly important in research flight test programs in which an envelope expansion into new flight regimes was required. Parameter estimation techniques for estimating stability and control derivatives from flight data became more sophisticated to support the flight test programs. As knowledge of these new flight regimes increased, more complex aircraft were flown. Much of this increased complexity was in sophisticated flight control systems. The design and refinement of the control system required higher fidelity simulations than were previously required.
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The fifth monthly progress report includes corrections and additions to the previously submitted reports. The addition of the SRB propellant thickness as a state variable is included with the associated partial derivatives. During this reporting period, preliminary results of the estimation program checkout were presented to NASA technical personnel.
Search algorithm complexity modeling with application to image alignment and matching
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2014-05-01
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
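A minimal sketch of the penetration-rate idea for a prioritized search, assuming hypothesis bins are visited in decreasing order of a (hypothetical) matching probability; it is only the expected-fraction-searched bookkeeping, not the paper's general formulation.

```python
import numpy as np

def penetration_rate(bin_probs):
    """Expected fraction of the hypothesis bins visited by a prioritized search
    that examines bins in decreasing order of (hypothetical) matching probability,
    stopping at the bin that contains the true hypothesis."""
    p = np.asarray(bin_probs, dtype=float)
    p = p / p.sum()
    order = np.argsort(-p)                     # most probable bin first
    ranks = np.empty(len(p))
    ranks[order] = np.arange(1, len(p) + 1)    # position of each bin in the search
    return float(np.sum(p * ranks) / len(p))   # E[rank] / number of bins

print(penetration_rate(np.ones(100)))                    # uninformative prior -> ~0.505
print(penetration_rate(np.exp(-0.1 * np.arange(100))))   # peaked prior -> far smaller
```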
Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels
NASA Astrophysics Data System (ADS)
Fusco, Tilde; Petrella, Angelo; Tanda, Mario
2009-12-01
The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed form solutions a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.
Manning, Andrew H.; Solomon, D. Kip
2005-01-01
The subsurface transfer of water from a mountain block to an adjacent basin (mountain block recharge (MBR)) is a commonly invoked mechanism of recharge to intermountain basins. However, MBR estimates are highly uncertain. We present an approach to characterize bulk fluid circulation in a mountain block and thus MBR that utilizes environmental tracers from the basin aquifer. Noble gas recharge temperatures, groundwater ages, and temperature data combined with heat and fluid flow modeling are used to identify clearly improbable flow regimes in the southeastern Salt Lake Valley, Utah, and adjacent Wasatch Mountains. The range of possible MBR rates is reduced by 70%. Derived MBR rates (5.5–12.6 × 10⁴ m³ d⁻¹) are on the same order of magnitude as previous large estimates, indicating that significant MBR to intermountain basins is plausible. However, derived rates are 50–100% of the lowest previous estimate, meaning total recharge is probably less than previously thought.
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a one-iteration procedure, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point of the genetic algorithm speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel and uses the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel using the robust filter, again refining it efficiently with the genetic algorithm based on the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.
Estimation of Effect Size from a Series of Experiments Involving Paired Comparisons.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
1993-01-01
A distribution theory is derived for a G. V. Glass-type (1976) estimator of effect size from studies involving paired comparisons. The possibility of combining effect sizes from studies involving a mixture of related and unrelated samples is also explored. Resulting estimates are illustrated using data from previous psychiatric research. (SLD)
NASA Astrophysics Data System (ADS)
Tew, W. L.
2008-02-01
The sensitivities of melting temperatures to isotopic variations in monatomic and diatomic atmospheric gases are estimated using both theoretical and semi-empirical methods. The current state of knowledge of the vapor-pressure isotope effects (VPIE) and triple-point isotope effects (TPIE) is briefly summarized for the noble gases (except He) and for selected diatomic molecules including oxygen. An approximate expression is derived to estimate the relative shift in the melting temperature with isotopic substitution. In general, the magnitude of the effects diminishes with increasing molecular mass and increasing temperature. Knowledge of the VPIE, molar volumes, and heat of fusion is sufficient to estimate the temperature shift or isotopic sensitivity coefficient via the derived expression. The usefulness of this approach is demonstrated in the estimation of isotopic sensitivities and uncertainties for the triple points of xenon and molecular oxygen, for which few documented estimates were previously available. The calculated sensitivities from this study are considerably higher than previous estimates for Xe, and lower than other estimates in the case of oxygen. In both cases, the predicted sensitivities are small and the resulting variations in triple-point temperatures due to mass fractionation effects are less than 20 μK.
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2016-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
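The sketch below illustrates the redescending M-estimation ingredient named above (Tukey bisquare weights, fitted by iteratively reweighted least squares) on a univariate toy problem with simulated sensor failures; the multivariate Mahalanobis-based concentration step and the 1530-sensor modal filtering are not reproduced, and the tuning constant c = 4.685 is the conventional default rather than a value taken from the study.

```python
import numpy as np

def tukey_bisquare_location(x, c=4.685, tol=1e-8, max_iter=100):
    """Robust location estimate via iteratively reweighted least squares with
    Tukey bisquare (biweight) weights; points with |residual| > c*scale get
    zero weight, which is what makes the estimator redescending."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - mu))   # MAD-based robust scale
    scale = scale if scale > 0 else 1.0
    for _ in range(max_iter):
        u = (x - mu) / (c * scale)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, 1300)
failed = rng.normal(50.0, 5.0, 230)            # simulated worst-case sensor failures
print(tukey_bisquare_location(np.concatenate([clean, failed])))  # stays near 0
```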
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
NASA Technical Reports Server (NTRS)
Young, Andrew T.
1988-01-01
Atmospheric extinction in wideband photometry is examined both analytically and through numerical simulations. If the derivatives that appear in the Stromgren-King theory are estimated carefully, it appears that wideband measurements can be transformed to outside the atmosphere with errors no greater than a millimagnitude. A numerical analysis approach is used to estimate derivatives of both the stellar and atmospheric extinction spectra, avoiding previous assumptions that the extinction follows a power law. However, it is essential to satisfy the requirements of the sampling theorem to keep aliasing errors small. Typically, this means that band separations cannot exceed half of the full width at half-peak response. Further work is needed to examine higher order effects, which may well be significant.
Mach Probe Measurements in a Large-Scale Helicon Plasma
NASA Astrophysics Data System (ADS)
Hatch, M. W.; Kelly, R. F.; Fisher, D. M.; Gilmore, M.; Dwyer, R. H.
2017-10-01
A new six-tipped Mach probe, which utilizes a fused-quartz insulator, has been developed and initially tested in the HelCat dual-source plasma device at the University of New Mexico. The new design allows for relatively long duration measurements of parallel and perpendicular flows that suffer less from thermal changes in conductivity and surface build-up seen in previous alumina-insulated designs. Mach probe measurements will be presented in comparison with ongoing laser induced fluorescence (LIF) measurements, previous Mach probe measurements, ExB flow estimates derived from Langmuir probes, and fast-frame CCD camera images, in an effort to better understand previously observed anomalous ion flow in HelCat. Additionally, Mach probe-LIF comparisons will provide an experimentally obtained Mach probe calibration constant, K, to validate sheath-derived estimates for the weakly magnetized case. Supported by U.S. National Science Foundation Award 1500423.
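For context, a minimal sketch of the standard Mach-probe reduction that the LIF comparison is meant to calibrate: the parallel Mach number from the ratio of ion saturation currents on opposing tips, M = K ln(I_up/I_down). The value K = 0.45 below is only a commonly quoted magnetized-limit placeholder, not the experimentally obtained constant sought in the abstract.

```python
import numpy as np

def mach_number(I_upstream, I_downstream, K=0.45):
    """Parallel flow Mach number from the ratio of upstream/downstream ion
    saturation currents on opposing Mach-probe tips: M = K * ln(I_up / I_down).
    K is the sheath-model calibration constant (0.45 is a commonly quoted
    magnetized-limit value, used here only as a placeholder assumption)."""
    return K * np.log(np.asarray(I_upstream) / np.asarray(I_downstream))

# Hypothetical saturation currents (mA) from two opposing tips
print(mach_number(12.0, 8.0))   # ~0.18: modest parallel flow toward the upstream tip
```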
Color-magnitude diagrams for six metal-rich, low-latitude globular clusters
NASA Technical Reports Server (NTRS)
Armandroff, Taft E.
1988-01-01
Colors and magnitudes for stars on CCD frames for six metal-rich, low-latitude, previously unstudied globular clusters and one well-studied, metal-rich cluster (47 Tuc) have been derived, and color-magnitude diagrams have been constructed. The photometry for stars in 47 Tuc is in good agreement with previous studies, while the V magnitudes of the horizontal-branch stars in the six program clusters do not agree with estimates based on secondary methods. The distances to these clusters are different from prior estimates. Reddening values are derived for each program cluster. The horizontal branches of the program clusters all appear to lie entirely redwards of the red edge of the instability strip, as is normal for their metallicities.
NASA Astrophysics Data System (ADS)
Yebra, Marta; van Dijk, Albert
2015-04-01
Water use efficiency (WUE, the amount of transpiration or evapotranspiration per unit gross (GPP) or net CO2 uptake) is key in all areas of plant production and forest management applications. Therefore, mutually consistent estimates of GPP and transpiration are needed to analyse WUE without introducing any artefacts that might arise by combining independently derived GPP and ET estimates. GPP and transpiration are physiologically linked at ecosystem level by the canopy conductance (Gc). Estimates of Gc can be obtained by scaling stomatal conductance (Kelliher et al. 1995) or inferred from ecosystem level measurements of gas exchange (Baldocchi 2008). To derive large-scale or indeed global estimates of Gc, satellite remote sensing based methods are needed. In a previous study, we used water vapour flux estimates derived from eddy covariance flux tower measurements at 16 Fluxnet sites world-wide to develop a method to estimate Gc using MODIS reflectance observations (Yebra et al. 2013). We combined those estimates with the Penman-Monteith combination equation to derive transpiration (T). The resulting T estimates compared favourably with flux tower estimates (R²=0.82, RMSE=29.8 W m⁻²). Moreover, the method allowed a single parameterisation for all land cover types, which avoids artefacts resulting from land cover classification. In subsequent research (Yebra et al., in preparation) we used the same satellite-derived Gc values within a process-based but simple canopy GPP model to constrain GPP predictions. The developed model uses a 'big-leaf' description of the plant canopy to estimate the mean GPP flux as the lesser of a conductance-limited and radiation-limited GPP rate. The conductance-limited rate was derived assuming that transport of CO2 from the bulk air to the intercellular leaf space is limited by molecular diffusion through the stomata. The radiation-limited rate was estimated assuming that it is proportional to the absorbed photosynthetically active radiation (PAR), calculated as the product of the fraction of absorbed PAR (fPAR) and PAR flux. The proposed algorithm performs well when evaluated against flux tower GPP (R²=0.79, RMSE=1.93 µmol m⁻² s⁻¹). Here we use GPP and T estimates previously derived at the same 16 Fluxnet sites to analyse WUE. Satellite-derived WUE explained variation in (long-term average) WUE among plant functional types but evergreen needleleaf had higher WUE than predicted. The benefit of our approach is that it uses mutually consistent estimates of GPP and T to derive canopy-level WUE without any land cover classification artefacts. References: Baldocchi, D. (2008). Turner Review No. 15: 'Breathing' of the terrestrial biosphere: lessons learned from a global network of carbon dioxide flux measurement systems. Australian Journal of Botany, 56, 26. Kelliher, F.M., Leuning, R., Raupach, M.R., & Schulze, E.D. (1995). Maximum conductances for evaporation from global vegetation types. Agricultural and Forest Meteorology, 73, 1-16. Yebra, M., Van Dijk, A., Leuning, R., Huete, A., & Guerschman, J.P. (2013). Evaluation of optical remote sensing to estimate actual evapotranspiration and canopy conductance. Remote Sensing of Environment, 129, 250-261.
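A schematic sketch, under assumed placeholder constants and simplified unit conversions, of how mutually consistent T and GPP follow from a single canopy conductance: Penman-Monteith transpiration driven by Gc, a big-leaf GPP taken as the lesser of a conductance-limited and a radiation-limited rate, and WUE formed from their ratio. The parameter values are illustrative, not those of the cited papers.

```python
import numpy as np

# Placeholder constants (illustrative values, not those of the cited papers)
CP_AIR   = 1010.0      # J kg-1 K-1, specific heat of air
RHO_AIR  = 1.2         # kg m-3
GAMMA    = 66.0        # Pa K-1, psychrometric constant
LAMBDA_V = 2.45e6      # J kg-1, latent heat of vaporization

def penman_monteith_T(Gc, Ga, Rn, vpd, delta):
    """Transpiration (latent heat flux, W m-2) from canopy conductance Gc and
    aerodynamic conductance Ga (m s-1), available energy Rn (W m-2), vapour
    pressure deficit vpd (Pa) and saturation-curve slope delta (Pa K-1)."""
    return (delta * Rn + RHO_AIR * CP_AIR * vpd * Ga) / (delta + GAMMA * (1.0 + Ga / Gc))

def big_leaf_gpp(Gc_mol, ca_minus_ci, fpar, par, eps=0.02):
    """GPP (umol CO2 m-2 s-1) as the lesser of a conductance-limited rate
    (CO2 diffusion through stomata; Gc/1.6 converts H2O to CO2 conductance)
    and a radiation-limited rate proportional to absorbed PAR."""
    gpp_conductance = (Gc_mol / 1.6) * ca_minus_ci    # mol m-2 s-1 * umol/mol
    gpp_radiation   = eps * fpar * par
    return min(gpp_conductance, gpp_radiation)

Gc, Ga = 0.008, 0.05                 # m s-1 (hypothetical remote-sensing-derived Gc)
T   = penman_monteith_T(Gc, Ga, Rn=400.0, vpd=1200.0, delta=145.0)       # W m-2
gpp = big_leaf_gpp(Gc * 40.0, ca_minus_ci=100.0, fpar=0.7, par=1500.0)   # crude m/s -> mol conversion
wue = gpp / (T / LAMBDA_V * 1000.0)  # umol CO2 per g H2O transpired
print(f"T ≈ {T:.0f} W m-2, GPP ≈ {gpp:.1f} umol m-2 s-1, WUE ≈ {wue:.1f} umol CO2 / g H2O")
```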
F. Mauro; Vicente Monleon; H. Temesgen
2015-01-01
Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...
Eastern Asian Emissions of Anthropogenic Halocarbons Deduced from Aircraft Concentration Data
NASA Technical Reports Server (NTRS)
Palmer, Paul I.; Jacob, Daniel J.; Mickle, Loretta, J.; Blake, Donald R.; Sachse, Glen W.; Fuelberg, Henry E.; Kiley, Christopher M.
2003-01-01
The Montreal Protocol restricts production of ozone-depleting halocarbons worldwide. Enforcement of the protocol has relied mainly on annual government statistics of production and consumption of these compounds (bottom-up approach). We show here that aircraft observations of halocarbon:CO enhancement ratios on regional to continental scales can be used to infer halocarbon emissions, providing independent verification of the bottom-up approach. We apply this top-down approach to aircraft observations of Asian outflow from the TRACE-P mission over the western Pacific (March-April 2001) and derive emissions from eastern Asia (China, Japan, and Korea). We derive an eastern Asian carbon tetrachloride (CCl4) source of 21.5 Gg yr⁻¹, several-fold larger than previous estimates and amounting to ~30% of the global budget for this gas. Our emission estimate for CFC-11 from eastern Asia is 50% higher than inventories derived from manufacturing records. Our emission estimates for methyl chloroform (CH3CCl3) and CFC-12 are in agreement with existing inventories. For halon 1211 we find only a strong local source originating from the Shanghai area. Our emission estimates for the above gases result in an approximately 40% increase in the ozone depletion potential (ODP) of Asian emissions relative to previous estimates, corresponding to an approximately 10% global increase in ODP.
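The scaling step behind such a top-down estimate can be sketched in a few lines: an observed halocarbon:CO molar enhancement ratio multiplied by an independent CO emission inventory and a molecular-weight ratio. The CO emission and enhancement-ratio values below are assumed placeholders chosen only to illustrate the arithmetic, not the paper's inputs.

```python
# Top-down halocarbon emission estimate from an observed halocarbon:CO
# enhancement ratio (a sketch of the scaling step only; the CO emission value
# and the enhancement ratio below are assumptions, not the paper's numbers).
M_CO, M_CCL4 = 28.0, 153.8          # g mol-1
E_CO = 115e9                        # kg CO yr-1 emitted by the source region (assumed)
dX_dCO = 0.034e-3                   # mol halocarbon per mol CO enhancement (assumed)

E_CCL4 = dX_dCO * E_CO * (M_CCL4 / M_CO)   # kg yr-1
print(f"CCl4 source ≈ {E_CCL4 / 1e6:.1f} Gg yr-1")
```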
NASA Astrophysics Data System (ADS)
Kazeykina, Anna; Muñoz, Claudio
2018-04-01
We continue our study on the Cauchy problem for the two-dimensional Novikov-Veselov (NV) equation, integrable via the inverse scattering transform for the two-dimensional Schrödinger operator at a fixed energy parameter. This work is concerned with the more involved case of a positive energy parameter. For the solution of the linearized equation we derive smoothing and Strichartz estimates by combining new estimates for two different frequency regimes, extending our previous results for the negative energy case [18]. The low frequency regime, which our previous result was not able to treat, is studied in detail. At non-low frequencies we also derive improved smoothing estimates with gain of almost one derivative. Then we combine the linear estimates with a Fourier decomposition method and X^{s,b} spaces to obtain local well-posedness of NV at positive energy in H^s, s > 1/2. Our result implies, in particular, that at least for s > 1/2, NV does not change its behavior from semilinear to quasilinear as energy changes sign, in contrast to the closely related Kadomtsev-Petviashvili equations. As a complement to our LWP results, we also provide some new explicit solutions of NV at zero energy, generalizations of the lumps solutions, which exhibit new and nonstandard long time behavior. In particular, these solutions blow up in infinite time in L².
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
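For reference, the classical two-sample change-in-ratio estimator that this model generalizes (the special case with equal encounter probabilities across subclasses) can be written in a few lines; the survey numbers below are hypothetical.

```python
def cir_abundance(p1, p2, removals_x, removals_total):
    """Classical two-sample change-in-ratio estimator of pre-removal population
    size (the equal-encounter-probability special case that the paper generalizes):
    N1_hat = (R_x - p2 * R) / (p1 - p2), where p1 and p2 are the subclass-x
    proportions observed before and after a known removal."""
    if abs(p1 - p2) < 1e-12:
        raise ValueError("subclass proportions must change for CIR to be identifiable")
    return (removals_x - p2 * removals_total) / (p1 - p2)

# Hypothetical deer survey: 40% antlered before harvest, 25% after,
# with 300 antlered animals removed out of 400 total.
N1 = cir_abundance(p1=0.40, p2=0.25, removals_x=300, removals_total=400)
print(f"Estimated pre-harvest population ≈ {N1:.0f}")
```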
Reassessment of the mass balance of the Abbot and Getz sectors of West Antarctica
NASA Astrophysics Data System (ADS)
Chuter, Stephen; Martín-Español, Alba; Wouters, Bert; Bamber, Jonathan
2017-04-01
Large discrepancies exist in mass balance estimates for the Getz and Abbot drainage basins, primarily due to previous poor knowledge of ice thickness at the grounding line, poor coverage by previous altimetry missions and signal leakage issues for GRACE. This is particularly the case for the Abbot region, where previously there have been contrasting positive ice sheet basin elevation rates from altimetry and negative mass budget estimates. Large errors arise when using ice thickness measurements derived from ERS-1 and/or ICESat altimetry data due to poor track spacing, 'loss of lock' issues near the grounding line and the complex morphology of these shelves, requiring fine resolution to derive robust and accurate elevations close to the grounding line. This was exemplified with the manual adjustments of up to 100 m required at the grounding line during the creation of Bedmap2. However, the advent of CryoSat-2 with its unique orbit and SARIn mode of operation has overcome these issues and enabled the determination of ice shelf thickness at a much higher accuracy than possible from previous satellites, particularly within the grounding zone. We present a reassessment of mass balance estimates for the 2007-2009 epoch using improved CryoSat-2 ice thicknesses. We find that CryoSat-2 ice thickness estimates are systematically thinner by 30% and 16.5% for the Abbot and Getz sectors respectively. Our new mass balance estimate of 8 ± 6 Gt yr⁻¹ for the Abbot region resolves the previous discrepancy with altimetry. Over the Getz region, the new mass balance estimate of 7.56 ± 16.6 Gt yr⁻¹ is in better agreement with other geodetic techniques. We also find there has been an increase in grounding line velocity of up to 20% since the 2007-2009 epoch, coupled with mean ice sheet thinning rates of −0.67 ± 0.13 m yr⁻¹ derived from CryoSat-2 in fast flow regions. This is in addition to mean snowfall trends of −0.33 m yr⁻¹ w.e. since 2006. This suggests the onset of a dynamic instability in the region and the possibility of grounding line retreat, driven by both surface processes and ice dynamics.
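A minimal sketch of the input-output (mass budget) bookkeeping referred to above: grounding-line discharge summed over flux gates from thickness and velocity, subtracted from surface mass balance. The gate geometry, speeds, and SMB below are placeholders, not the Abbot or Getz values.

```python
import numpy as np

RHO_ICE = 917.0  # kg m-3

def grounding_line_discharge(widths_m, thicknesses_m, speeds_m_per_yr):
    """Ice discharge (Gt yr-1) summed over grounding-line flux gates, each
    described by a width, a (e.g. CryoSat-2 derived) thickness and a
    (e.g. Landsat feature-tracking) speed normal to the gate."""
    flux_kg = np.sum(np.asarray(widths_m) * np.asarray(thicknesses_m)
                     * np.asarray(speeds_m_per_yr)) * RHO_ICE
    return flux_kg / 1e12

def mass_balance(smb_gt_per_yr, discharge_gt_per_yr):
    """Input-output mass balance: accumulation (SMB) minus discharge."""
    return smb_gt_per_yr - discharge_gt_per_yr

# Hypothetical gates (values are placeholders, not the basin's real geometry)
D = grounding_line_discharge(widths_m=[20e3, 35e3, 15e3],
                             thicknesses_m=[450.0, 600.0, 380.0],
                             speeds_m_per_yr=[350.0, 500.0, 250.0])
print(f"Discharge ≈ {D:.1f} Gt yr-1, mass balance ≈ {mass_balance(15.0, D):+.1f} Gt yr-1")
```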
California Drought Recovery Assessment Using GRACE Satellite Gravimetry Information
NASA Astrophysics Data System (ADS)
Love, C. A.; Aghakouchak, A.; Madadgar, S.; Tourian, M. J.
2015-12-01
California has been experiencing its most extreme drought in recent history due to a combination of record high temperatures and exceptionally low precipitation. An estimate for when the drought can be expected to end is needed for risk mitigation and water management. A crucial component of drought recovery assessments is the estimation of terrestrial water storage (TWS) deficit. Previous studies on drought recovery have been limited to surface water hydrology (precipitation and/or runoff) for estimating changes in TWS, neglecting the contribution of groundwater deficits to the recovery time of the system. Groundwater requires more time to recover than surface water storage; therefore, the inclusion of groundwater storage in drought recovery assessments is essential for understanding the long-term vulnerability of a region. Here we assess the probability, for varying timescales, of California's current TWS deficit returning to its long-term historical mean. Our method consists of deriving the region's fluctuations in TWS from changes in the gravity field observed by NASA's Gravity Recovery and Climate Experiment (GRACE) satellites. We estimate the probability that meteorological inputs, precipitation minus evaporation and runoff, over different timespans will balance the current GRACE-derived TWS deficit (e.g. in 3, 6, 12 months). This method improves upon previous techniques as the GRACE-derived water deficit comprises all hydrologic sources, including surface water, groundwater, and snow cover. With this empirical probability assessment we expect to improve current estimates of California's drought recovery time, thereby improving risk mitigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2009-09-15
Measurement of strain, curvature, and twist of a deformed object plays an important role in deformation analysis. Strain depends on the first order displacement derivative, whereas curvature and twist are determined by second order displacement derivatives. This paper proposes a pseudo-Wigner-Ville distribution based method for measurement of strain, curvature, and twist in digital holographic interferometry, where the object deformation or displacement is encoded as interference phase. In the proposed method, the phase derivative is estimated by peak detection of the pseudo-Wigner-Ville distribution evaluated along each row/column of the reconstructed interference field. A complex exponential signal with unit amplitude and the phase derivative estimate as the argument is then generated and the pseudo-Wigner-Ville distribution along each row/column of this signal is evaluated. The curvature is estimated by using a peak-tracking strategy for the new distribution. For estimation of twist, the pseudo-Wigner-Ville distribution is evaluated along each column/row (i.e., in the alternate direction with respect to the previous one) for the generated complex exponential signal and the corresponding peak detection gives the twist estimate.
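A simplified sketch of the first stage of this approach, assuming a 1D row of a unit-amplitude interference field: a lag-windowed pseudo-Wigner-Ville distribution is evaluated at each sample and its spectral peak gives the local phase derivative (the factor of one half accounts for the frequency doubling of the Wigner-Ville kernel). The window lengths and the simulated phase are arbitrary illustrative choices.

```python
import numpy as np

def pwvd_phase_derivative(x, half_window=32, nfft=512):
    """Estimate the phase derivative (fringe frequency) of a complex
    interference signal x[n] by peak detection of a pseudo-Wigner-Ville
    distribution computed sample-by-sample with a finite lag window."""
    n_samples = len(x)
    k = np.arange(-half_window, half_window + 1)
    taper = np.hanning(len(k))                       # lag window of the *pseudo* WVD
    omega = np.zeros(n_samples)
    for n in range(half_window, n_samples - half_window):
        r = x[n + k] * np.conj(x[n - k]) * taper     # instantaneous autocorrelation
        spectrum = np.abs(np.fft.fft(r, nfft))
        freqs = 2.0 * np.pi * np.fft.fftfreq(nfft)   # digital frequency of each bin
        omega[n] = 0.5 * freqs[np.argmax(spectrum)]  # factor 1/2: WVD doubles frequency
    return omega

# Simulated fringe row with a quadratic phase (constant strain gradient)
n = np.arange(1024)
phase = 2e-4 * (n - 512.0) ** 2
row = np.exp(1j * phase)
est = pwvd_phase_derivative(row)
true = np.gradient(phase)
print(np.max(np.abs(est[200:800] - true[200:800])))   # small compared with the fringe rate
```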
Antoniou, A; Pharoah, P; Narod, S; Risch, H; Eyfjord, J; Hopper, J; Olsson, H; Johannsson, O; Borg, A; Pasini, B; Radice, P; Manoukian, S; Eccles, D; Tang, N; Olah, E; Anton-Culver, H; Warner, E; Lubinski, J; Gronwald, J; Gorski, B; Tulinius, H; Thorlacius, S; Eerola, H; Nevanlinna, H; Syrjakoski, K; Kallioniemi, O; Thompson, D; Evans, C; Peto, J; Lalloo, F; Evans, D; Easton, D
2005-01-01
A recent report estimated the breast cancer risks in carriers of the three Ashkenazi founder mutations to be higher than previously published estimates derived from population based studies. In an attempt to confirm this, the breast and ovarian cancer risks associated with the three Ashkenazi founder mutations were estimated using families included in a previous meta-analysis of population based studies. The estimated breast cancer risks for each of the founder BRCA1 and BRCA2 mutations were similar to the corresponding estimates based on all BRCA1 or BRCA2 mutations in the meta-analysis. These estimates appear to be consistent with the observed prevalence of the mutations in the Ashkenazi Jewish population. PMID:15994883
Mogasale, Vittal; Maskery, Brian; Ochiai, R Leon; Lee, Jung Seok; Mogasale, Vijayalaxmi V; Ramani, Enusa; Kim, Young Eun; Park, Jin Kyung; Wierzba, Thomas F
2014-10-01
Lack of access to safe water is an important risk factor for typhoid fever, yet risk-level heterogeneity is unaccounted for in previous global burden estimates. Since WHO has recommended risk-based use of typhoid polysaccharide vaccine, we revisited the burden of typhoid fever in low-income and middle-income countries (LMICs) after adjusting for water-related risk. We estimated the typhoid disease burden from studies done in LMICs based on blood-culture-confirmed incidence rates applied to the 2010 population, after correcting for operational issues related to surveillance, limitations of diagnostic tests, and water-related risk. We derived incidence estimates, correction factors, and mortality estimates from systematic literature reviews. We did scenario analyses for risk factors, diagnostic sensitivity, and case fatality rates, accounting for the uncertainty in these estimates, and we compared them with previous disease burden estimates. The estimated number of typhoid fever cases in LMICs in 2010 after adjusting for water-related risk was 11·9 million (95% CI 9·9-14·7) cases with 129 000 (75 000-208 000) deaths. By comparison, the estimated risk-unadjusted burden was 20·6 million (17·5-24·2) cases and 223 000 (131 000-344 000) deaths. Scenario analyses indicated that the risk-factor adjustment and the updated diagnostic test correction factor derived from systematic literature reviews were the drivers of differences between the current estimate and past estimates. The risk-adjusted typhoid fever burden estimate was more conservative than previous estimates. However, by distinguishing the risk differences, it will allow assessment of the effect at the population level and will facilitate cost-effectiveness calculations for risk-based vaccination strategies for a future typhoid conjugate vaccine.
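The adjustment arithmetic can be illustrated schematically: crude blood-culture-confirmed incidence scaled by correction factors for diagnostic sensitivity and surveillance, applied only to the population fraction with water-related risk, then multiplied by a case fatality rate. Every number below is a placeholder, not an input taken from the study.

```python
# Illustrative burden arithmetic: crude incidence scaled by correction factors
# for blood-culture sensitivity and surveillance gaps, applied only to the
# population share with water-related risk. All numbers below are placeholders.
population            = 5.75e9       # LMIC population, circa 2010 (assumed)
share_at_risk         = 0.55         # fraction without access to safe water (assumed)
crude_incidence       = 0.9e-3       # blood-culture-confirmed cases per person-year (assumed)
cf_blood_culture      = 1.0 / 0.59   # diagnostic sensitivity correction (assumed)
cf_surveillance       = 2.2          # facility/health-care-seeking correction (assumed)
case_fatality_rate    = 0.011        # assumed

cases  = population * share_at_risk * crude_incidence * cf_blood_culture * cf_surveillance
deaths = cases * case_fatality_rate
print(f"Risk-adjusted cases ≈ {cases/1e6:.1f} million, deaths ≈ {deaths/1e3:.0f} thousand")
```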
Precise Absolute Astrometry from the VLBA Imaging and Polarimetry Survey at 5 GHz
NASA Technical Reports Server (NTRS)
Petrov, L.; Taylor, G. B.
2011-01-01
We present accurate positions for 857 sources derived from the astrometric analysis of 16 eleven-hour experiments from the Very Long Baseline Array imaging and polarimetry survey at 5 GHz (VIPS). Among the observed sources, positions of 430 objects were not previously determined at milliarcsecond-level accuracy. For 95% of the sources the uncertainty of their positions ranges from 0.3 to 0.9 mas, with a median value of 0.5 mas. This estimate of accuracy is substantiated by the comparison of positions of 386 sources that were previously observed in astrometric programs simultaneously at 2.3/8.6 GHz. Surprisingly, the ionosphere contribution to group delay was adequately modeled with the use of the total electron content maps derived from GPS observations and only marginally affected estimates of source coordinates.
Model-assisted forest yield estimation with light detection and ranging
Jacob L. Strunk; Stephen E. Reutebuch; Hans-Erik Andersen; Peter J. Gould; Robert J. McGaughey
2012-01-01
Previous studies have demonstrated that light detection and ranging (LiDAR)-derived variables can be used to model forest yield variables, such as biomass, volume, and number of stems. However, the next step is underrepresented in the literature: estimation of forest yield with appropriate confidence intervals. It is of great importance that the procedures required for...
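A minimal sketch of that next step, assuming simple random sampling: a model-assisted (difference/GREG-type) estimator that corrects wall-to-wall LiDAR-model predictions with field-plot residuals and attaches a design-based confidence interval. The data below are simulated and the estimator is a generic textbook form, not the procedure of any particular study cited.

```python
import numpy as np

def model_assisted_total(y_sample, yhat_sample, yhat_population, n_population):
    """Model-assisted (difference/GREG-type) estimator of a forest-yield total
    under simple random sampling: wall-to-wall model predictions corrected by
    the mean field-plot residual, with a normal-theory 95% confidence interval."""
    n = len(y_sample)
    resid = np.asarray(y_sample) - np.asarray(yhat_sample)
    total = np.sum(yhat_population) + n_population * resid.mean()
    var = n_population**2 * (1 - n / n_population) * resid.var(ddof=1) / n
    half_width = 1.96 * np.sqrt(var)
    return total, (total - half_width, total + half_width)

# Hypothetical data: LiDAR-model biomass predictions for every population cell,
# and field measurements on a simple random sample of those cells.
rng = np.random.default_rng(2)
N = 5000
yhat_pop = rng.gamma(4.0, 30.0, N)                      # Mg/ha predicted from LiDAR metrics
idx = rng.choice(N, 150, replace=False)
y_field = yhat_pop[idx] + rng.normal(5.0, 20.0, 150)    # field plots, with model error/bias
total, ci = model_assisted_total(y_field, yhat_pop[idx], yhat_pop, N)
print(f"Estimated total ≈ {total:,.0f} Mg, 95% CI {ci[0]:,.0f}-{ci[1]:,.0f} Mg")
```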
Observational Constraints on the Water Vapor Feedback Using GPS Radio Occultations
NASA Astrophysics Data System (ADS)
Vergados, P.; Mannucci, A. J.; Ao, C. O.; Fetzer, E. J.
2016-12-01
The air refractive index at L-band frequencies depends on the air's density and water vapor content. Exploiting these relationships, we derive a theoretical model to infer the specific humidity response to surface temperature variations, dq/dTs, given knowledge of how the air refractive index and temperature vary with surface temperature. We validate this model using 1.2-1.6 GHz Global Positioning System Radio Occultation (GPS RO) observations from 2007 to 2010 at 250 hPa, where the water vapor feedback on surface warming is strongest. Current research indicates that GPS RO data sets can capture the amount of water vapor in very dry and very moist air more efficiently than other observing platforms, possibly suggesting larger water vapor feedback than previously known. Inter-comparing the dq/dTs among different data sets will provide us with additional constraints on the water vapor feedback. The dq/dTs estimation from GPS RO observations shows excellent agreement with previously published results and the responses estimated using Atmospheric Infrared Sounder (AIRS) and NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data sets. In particular, the GPS RO-derived dq/dTs is larger by 6% than that estimated using the AIRS data set. This agrees with past evidence that AIRS may be dry-biased in the upper troposphere. Compared to the MERRA estimations, the GPS RO-derived dq/dTs is 10% smaller, also agreeing with previous results that show that MERRA may have a wet bias in the upper troposphere. Because of their high sensitivity to fractional changes in water vapor, and their inherent long-term accuracy, current and future GPS RO observations show great promise in monitoring climate feedbacks and their trends.
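A small sketch of the refractivity-to-humidity step, assuming the commonly quoted two-term Smith-Weintraub coefficients (which may differ slightly from those used in the study) and hypothetical upper-tropospheric values.

```python
def specific_humidity_from_refractivity(N, P_hpa, T_k):
    """Invert the two-term Smith-Weintraub refractivity relation
    N = 77.6*P/T + 3.73e5*e/T^2 for water-vapour pressure e (hPa), then convert
    to specific humidity q (kg/kg). Coefficients are the commonly quoted values."""
    e = (N - 77.6 * P_hpa / T_k) * T_k**2 / 3.73e5
    q = 0.622 * e / (P_hpa - 0.378 * e)
    return e, q

# Hypothetical upper-tropospheric retrieval near 250 hPa
e, q = specific_humidity_from_refractivity(N=86.6, P_hpa=250.0, T_k=225.0)
print(f"e ≈ {e:.3f} hPa, q ≈ {q*1e3:.3f} g/kg")
```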
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability in these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of top of the atmosphere radiative imbalance and global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression based methods using model outputs and shown to produce consistent forcing estimates giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve those temperature goals, such as those set in the Paris agreement.
New, national bottom-up estimate for tree-based biological ...
Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha⁻¹ yr⁻¹) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating
Fundamental Properties of Co-moving Stars Observed by Gaia
NASA Astrophysics Data System (ADS)
Bochanski, John J.; Faherty, Jacqueline K.; Gagné, Jonathan; Nelson, Olivia; Coker, Kristina; Smithka, Iliya; Desir, Deion; Vasquez, Chelsea
2018-04-01
We have estimated fundamental parameters for a sample of co-moving stars observed by Gaia and identified by Oh et al. We matched the Gaia observations to the 2MASS and Wide-Field Infrared Survey Explorer catalogs and fit MIST isochrones to the data, deriving estimates of the mass, radius, [Fe/H], age, distance, and extinction to 9754 stars in the original sample of 10606 stars. We verify these estimates by comparing our new results to previous analyses of nearby stars, examining fiducial cluster properties, and estimating the power-law slope of the local present-day mass function. A comparison to previous studies suggests that our mass estimates are robust, while metallicity and age estimates are increasingly uncertain. We use our calculated masses to examine the properties of binaries in the sample and show that separation of the pairs dominates the observed binding energies and expected lifetimes.
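As a sketch of the mass-function step, the continuous power-law maximum-likelihood estimator of the slope (with its standard error) is shown below on synthetic masses; it is a generic estimator, not necessarily the fitting procedure used by the authors.

```python
import numpy as np

def power_law_slope_mle(masses, m_min):
    """Maximum-likelihood slope of a power-law mass function dN/dm ∝ m^(-alpha)
    above a lower cutoff m_min (continuous-case estimator, with its standard
    error); a simplified stand-in for the paper's mass-function fit."""
    m = np.asarray(masses, dtype=float)
    m = m[m >= m_min]
    n = len(m)
    alpha = 1.0 + n / np.sum(np.log(m / m_min))
    return alpha, (alpha - 1.0) / np.sqrt(n)

# Check on synthetic masses drawn from a Salpeter-like slope alpha = 2.35
rng = np.random.default_rng(3)
u = rng.random(5000)
m_min, alpha_true = 0.5, 2.35
masses = m_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF sampling
print(power_law_slope_mle(masses, m_min))                    # ≈ (2.35, 0.02)
```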
A Microwave Technique for Mapping Ice Temperature in the Arctic Seasonal Sea Ice Zone
NASA Technical Reports Server (NTRS)
St.Germain, Karen M.; Cavalieri, Donald J.
1997-01-01
A technique for deriving ice temperature in the Arctic seasonal sea ice zone from passive microwave radiances has been developed. The algorithm operates on brightness temperatures derived from the Special Sensor Microwave/Imager (SSM/I) and uses ice concentration and type from a previously developed thin ice algorithm to estimate the surface emissivity. Comparisons of the microwave derived temperatures with estimates derived from infrared imagery of the Bering Strait yield a correlation coefficient of 0.93 and an RMS difference of 2.1 K when coastal and cloud contaminated pixels are removed. SSM/I temperatures were also compared with a time series of air temperature observations from Gambell on St. Lawrence Island and from Point Barrow, AK weather stations. These comparisons indicate that the relationship between the air temperature and the ice temperature depends on ice type.
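A zeroth-order sketch of the inversion, assuming a linear ice/open-water mixing of brightness temperature with fixed placeholder emissivities and no atmospheric correction; the actual algorithm derives the surface emissivity from the ice concentration and type retrievals.

```python
def ice_temperature(tb, concentration, eps_ice=0.95, t_water=271.2, eps_water=0.60):
    """Invert a simple zeroth-order radiative model for a mixed ice/water pixel,
    T_B ≈ C*eps_ice*T_ice + (1-C)*eps_water*T_water (atmosphere neglected),
    for the physical ice temperature. Emissivities here are placeholder values;
    the SSM/I algorithm derives them from ice concentration and type."""
    return (tb - (1.0 - concentration) * eps_water * t_water) / (concentration * eps_ice)

# Hypothetical vertically polarized brightness temperature (K) for a 90%-ice pixel
print(f"{ice_temperature(tb=243.0, concentration=0.9):.1f} K")
```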
Forecasting outbreaks of the Douglas-fir tussock moth from lower crown cocoon samples.
Richard R. Mason; Donald W. Scott; H. Gene Paul
1993-01-01
A predictive technique using a simple linear regression was developed to forecast the midcrown density of small tussock moth larvae from estimates of cocoon density in the previous generation. The regression estimator was derived from field samples of cocoons and larvae taken from a wide range of nonoutbreak tussock moth populations. The accuracy of the predictions was...
Regional ground-water evapotranspiration and ground-water budgets, Great Basin, Nevada
Nichols, William D.
2000-01-01
PART A: Ground-water evapotranspiration data from five sites in Nevada and seven sites in Owens Valley, California, were used to develop equations for estimating ground-water evapotranspiration as a function of phreatophyte plant cover or as a function of the depth to ground water. Equations are given for estimating mean daily seasonal and annual ground-water evapotranspiration. The equations that estimate ground-water evapotranspiration as a function of plant cover can be used to estimate regional-scale ground-water evapotranspiration using vegetation indices derived from satellite data for areas where the depth to ground water is poorly known. Equations that estimate ground-water evapotranspiration as a function of the depth to ground water can be used where the depth to ground water is known, but for which information on plant cover is lacking. PART B: Previous ground-water studies estimated groundwater evapotranspiration by phreatophytes and bare soil in Nevada on the basis of results of field studies published in 1912 and 1932. More recent studies of evapotranspiration by rangeland phreatophytes, using micrometeorological methods as discussed in Chapter A of this report, provide new data on which to base estimates of ground-water evapotranspiration. An approach correlating ground-water evapotranspiration with plant cover is used in conjunction with a modified soil-adjusted vegetation index derived from Landsat data to develop a method for estimating the magnitude and distribution of ground-water evapotranspiration at a regional scale. Large areas of phreatophytes near Duckwater and Lockes in Railroad Valley are believed to subsist on ground water discharged from nearby regional springs. Ground-water evapotranspiration by the Duckwater phreatophytes of about 11,500 acre-feet estimated by the method described in this report compares well with measured discharge of about 13,500 acre-feet from the springs near Duckwater. Measured discharge from springs near Lockes was about 2,400 acre-feet; estimated ground-water evapotranspiration using the proposed method was about 2,450 acre-feet. PART C: Previous estimates of ground-water budgets in Nevada were based on methods and data that now are more than 60 years old. Newer methods, data, and technologies were used in the present study to estimate ground-water recharge from precipitation and ground-water discharge by evapotranspiration by phreatophytes for 16 contiguous valleys in eastern Nevada. Annual ground-water recharge to these valleys was estimated to be about 855,000 acre-feet and annual ground-water evapotranspiration was estimated to be about 790,000 acrefeet; both are a little more than two times greater than previous estimates. The imbalance of recharge over evapotranspiration represents recharge that either (1) leaves the area as interbasin flow or (2) is derived from precipitation that falls on terrain within the topographic boundary of the study area but contributes to discharge from hydrologic systems that lie outside these topographic limits. A vegetation index derived from Landsat-satellite data was used to estimate phreatophyte plant cover on the floors of the 16 valleys. The estimated phreatophyte plant cover then was used to estimate annual ground-water evapotranspiration. Detailed estimates of summer, winter, and annual ground-water evapotranspiration for areas with different ranges of phreatophyte plant cover were prepared for each valley. 
The estimated ground-water discharge from 15 valleys, combined with independent estimates of interbasin ground-water flow into or from a valley, was used to calculate the percentage of recharge derived from precipitation within the topographic boundary of each valley. These percentages then were used to estimate ground-water recharge from precipitation within each valley. Ground-water budgets for all 16 valleys were based on the estimated recharge from precipitation and estimated evapotranspiration. Any imba
Performance of polygenic scores for predicting phobic anxiety.
Walter, Stefan; Glymour, M Maria; Koenen, Karestan; Liang, Liming; Tchetgen Tchetgen, Eric J; Cornelis, Marilyn; Chang, Shun-Chiao; Rimm, Eric; Kawachi, Ichiro; Kubzansky, Laura D
2013-01-01
Anxiety disorders are common, with a lifetime prevalence of 20% in the U.S., and are responsible for substantial burdens of disability, missed work days and health care utilization. To date, no causal genetic variants have been identified for anxiety, anxiety disorders, or related traits. The objective was to investigate whether a phobic anxiety symptom score was associated with three alternative polygenic risk scores: scores derived from external genome-wide association studies of anxiety, an internally estimated agnostic polygenic score, or previously identified candidate genes. The design was a longitudinal follow-up study. Using linear and logistic regression we investigated whether phobic anxiety was associated with polygenic risk scores derived from internal, leave-one-out genome-wide association studies, from 31 candidate genes, and from out-of-sample genome-wide association weights previously shown to predict depression and anxiety in another cohort. Study participants (n = 11,127) were individuals from the Nurses' Health Study and Health Professionals Follow-up Study. Anxiety symptoms were assessed via the 8-item phobic anxiety scale of the Crown Crisp Index at two time points, from which a continuous phenotype score was derived. We found no genome-wide significant associations with phobic anxiety. Phobic anxiety was also not associated with a polygenic risk score derived from the genome-wide association study beta weights using liberal p-value thresholds; with a previously published genome-wide polygenic score; or with a candidate gene risk score based on 31 genes previously hypothesized to predict anxiety. There is a substantial gap between twin-study heritability estimates of anxiety disorders, which range between 20-40%, and the heritability explained by genome-wide association results. New approaches such as improved genome imputations, application of gene expression and biological pathways information, and incorporating social or environmental modifiers of genetic risks may be necessary to identify significant genetic predictors of anxiety.
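For concreteness, a toy sketch of how a weighted polygenic risk score is assembled from GWAS effect sizes and allele dosages under a p-value inclusion threshold; the SNP data, weights, and thresholds below are invented for illustration.

```python
import numpy as np

def polygenic_score(dosages, betas, pvalues, p_threshold=1.0):
    """Weighted polygenic risk score: sum over SNPs of (GWAS effect size) x
    (allele dosage), restricted to SNPs passing a p-value threshold.
    dosages: (n_individuals, n_snps) array of 0-2 allele counts."""
    keep = np.asarray(pvalues) <= p_threshold
    return np.asarray(dosages)[:, keep] @ np.asarray(betas)[keep]

# Toy example: 5 individuals, 4 SNPs, liberal vs strict inclusion thresholds
rng = np.random.default_rng(4)
dos = rng.integers(0, 3, size=(5, 4)).astype(float)
betas = np.array([0.10, -0.05, 0.02, 0.30])
pvals = np.array([0.001, 0.20, 0.65, 0.04])
print(polygenic_score(dos, betas, pvals, p_threshold=0.5))
print(polygenic_score(dos, betas, pvals, p_threshold=0.05))
```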
Bayesian estimation of the discrete coefficient of determination.
Chen, Ting; Braga-Neto, Ulisses M
2016-12-01
The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
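The sketch below is not the closed-form MMSE estimator derived in the paper; it is a Monte Carlo stand-in that approximates the posterior-mean CoD under a symmetric Dirichlet prior on the joint pmf of a binary predictor and target, given an observed count table.

```python
import numpy as np

rng = np.random.default_rng(5)

def cod(joint):
    """Discrete coefficient of determination for a joint pmf over (X, Y),
    joint shape = (n_x_states, 2): CoD = (e0 - e_opt) / e0."""
    py = joint.sum(axis=0)
    e0 = py.min()                                        # error predicting Y without X
    e_opt = np.minimum(joint[:, 0], joint[:, 1]).sum()   # optimal-predictor error
    return (e0 - e_opt) / e0 if e0 > 0 else 0.0

def bayes_cod_estimate(counts, prior=1.0, n_draws=10_000):
    """Monte Carlo approximation of the posterior-mean (MMSE-style) CoD under a
    symmetric Dirichlet prior on the joint pmf, given a table of observed counts."""
    alpha = np.asarray(counts, dtype=float).ravel() + prior
    draws = rng.dirichlet(alpha, size=n_draws).reshape(n_draws, *np.shape(counts))
    return float(np.mean([cod(p) for p in draws]))

# Counts for a binary predictor X (rows) and binary target Y (columns)
counts = np.array([[30, 5],
                   [8, 27]])
print(bayes_cod_estimate(counts))
```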
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.
Determination of the stability and control derivatives of the NASA F/A-18 HARV using flight data
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.; Spagnuolo, Joelle M.
1993-01-01
This report documents the research conducted for the NASA-Ames Cooperative Agreement No. NCC 2-759 with West Virginia University. A complete set of the stability and control derivatives for varying angles of attack from 10 deg to 60 deg were estimated from flight data of the NASA F/A-18 HARV. The data were analyzed with the use of the pEst software which implements the output-error method of parameter estimation. Discussions of the aircraft equations of motion, parameter estimation process, design of flight test maneuvers, and formulation of the mathematical model are presented. The added effects of the thrust vectoring and single surface excitation systems are also addressed. The results of the longitudinal and lateral directional derivative estimates at varying angles of attack are presented and compared to results from previous analyses. The results indicate a significant improvement due to the independent control surface deflections induced by the single surface excitation system, and at the same time, a need for additional flight data especially at higher angles of attack.
Latitudinal distributions of particulate carbon export across the North Western Atlantic Ocean
NASA Astrophysics Data System (ADS)
Puigcorbé, Viena; Roca-Martí, Montserrat; Masqué, Pere; Benitez-Nelson, Claudia; Rutgers van der Loeff, Michiel; Bracher, Astrid; Moreau, Sebastien
2017-11-01
234Th-derived carbon export fluxes were measured in the Atlantic Ocean under the GEOTRACES framework to evaluate basin-scale export variability. Here, we present the results from the northern half of the GA02 transect, spanning from the equator to 64°N. As a result of limited site-specific C/234Th ratio measurements, we further combined our data with previous work to develop a basin-wide C/234Th ratio depth curve. While the magnitude of organic carbon fluxes varied depending on the C/234Th ratio used, latitudinal trends were similar, with sizeable and variable organic carbon export fluxes occurring at high latitudes and low to negligible fluxes occurring in oligotrophic waters. Our results agree with previous studies, except at the boundaries between domains, where fluxes were relatively enhanced. Three different models were used to obtain satellite-derived net primary production (NPP). In general, NPP estimates had similar trends along the transect, but there were significant differences in the absolute magnitude depending on the model used. Nevertheless, organic carbon export efficiencies were generally < 25%, with the exception of a few stations located in the transition area between the riverine and the oligotrophic domains and between the oligotrophic and the temperate domains. Satellite-derived organic carbon export models from Dunne et al. (2005) (D05), Laws et al. (2011) (L11) and Henson et al. (2011) (H11) were also compared to our 234Th-derived carbon export fluxes. D05 and L11 provided estimates closest to values obtained with the 234Th approach (within a 3-fold difference), but with no clear trends. The H11 model, on the other hand, consistently provided lower export estimates. The large increase in export data in the Atlantic Ocean derived from the GEOTRACES Program, combined with satellite observations and modeling efforts, continues to improve the estimates of carbon export in this ocean basin and therefore reduce uncertainty in the global carbon budget. However, our results also suggest that tuning export models and including biological parameters at a regional scale is necessary for improving satellite-modeling efforts and providing export estimates that are more representative of in situ observations.
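The basic arithmetic behind the 234Th approach (export of 234Th computed from its deficit relative to 238U, then multiplied by a C/234Th ratio of sinking particles) can be sketched as follows; the function names, unit conventions, and profile values are assumptions for illustration, not data from the transect.

```python
import numpy as np

LAMBDA_TH234 = np.log(2) / 24.1          # 234Th decay constant [1/day], half-life 24.1 d

def th234_export(u238, th234, dz):
    """Steady-state 234Th export flux [dpm m^-2 d^-1] from a depth profile.

    u238, th234 : activities [dpm/L] at midpoints of layers of thickness dz [m].
    Flux = lambda * depth-integrated (238U - 234Th) deficit (1 dpm/L = 1000 dpm/m^3).
    """
    deficit = np.sum((np.asarray(u238) - np.asarray(th234)) * np.asarray(dz)) * 1000.0
    return LAMBDA_TH234 * deficit

def poc_export(th_flux, c_to_th):
    """Organic carbon export = 234Th flux * C/234Th ratio of sinking particles.
    Returns mmol C m^-2 d^-1 when c_to_th is given in micromol C per dpm."""
    return th_flux * c_to_th / 1000.0

# illustrative profile (not data from the study)
u238  = [2.4, 2.4, 2.4, 2.4]       # dpm/L
th234 = [1.8, 2.0, 2.2, 2.4]       # dpm/L, deficit shrinking with depth
dz    = [25, 25, 25, 25]           # m
flux_th = th234_export(u238, th234, dz)
print(flux_th, poc_export(flux_th, c_to_th=5.0))   # assumed C/234Th = 5 umol/dpm
```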
Taylor, Zeike A; Kirk, Thomas B; Miller, Karol
2007-10-01
The theoretical framework developed in a companion paper (Part I) is used to derive estimates of mechanical response of two meniscal cartilage specimens. The previously developed framework consisted of a constitutive model capable of incorporating confocal image-derived tissue microstructural data. In the present paper (Part II) fibre and matrix constitutive parameters are first estimated from mechanical testing of a batch of specimens similar to, but independent from, those under consideration. Image analysis techniques which allow estimation of tissue microstructural parameters from confocal images are presented. The constitutive model and image-derived structural parameters are then used to predict the reaction force history of the two meniscal specimens subjected to partially confined compression. The predictions are made on the basis of the specimens' individual structural condition as assessed by confocal microscopy and involve no tuning of material parameters. Although the model does not reproduce all features of the experimental curves, as an unfitted estimate of mechanical response the prediction is quite accurate. In light of the obtained results it is judged that more general non-invasive estimation of tissue mechanical properties is possible using the developed framework.
Estimation of diastolic intraventricular pressure gradients by Doppler M-mode echocardiography
NASA Technical Reports Server (NTRS)
Greenberg, N. L.; Vandervoort, P. M.; Firstenberg, M. S.; Garcia, M. J.; Thomas, J. D.
2001-01-01
Previous studies have shown that small intraventricular pressure gradients (IVPG) are important for efficient filling of the left ventricle (LV) and as a sensitive marker for ischemia. Unfortunately, there has previously been no way of measuring these noninvasively, severely limiting their research and clinical utility. Color Doppler M-mode (CMM) echocardiography provides a spatiotemporal velocity distribution along the inflow tract throughout diastole, which we hypothesized would allow direct estimation of IVPG by using the Euler equation. Digital CMM images, obtained simultaneously with intracardiac pressure waveforms in six dogs, were processed by numerical differentiation for the Euler equation, then integrated to estimate IVPG and the total (left atrial to left ventricular apex) pressure drop. CMM-derived estimates agreed well with invasive measurements (IVPG: y = 0.87x + 0.22, r = 0.96, P < 0.001, standard error of the estimate = 0.35 mmHg). Quantitative processing of CMM data allows accurate estimation of IVPG and tracking of changes induced by beta-adrenergic stimulation. This novel approach provides unique information on LV filling dynamics in an entirely noninvasive way that has previously not been available for assessment of diastolic filling and function.
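A rough numerical sketch of the processing described here, applying the 1-D Euler equation to a color M-mode velocity field and integrating along the scanline, is given below with a synthetic filling wave; the array layout, blood density constant, and pulse parameters are assumptions, not the study's actual processing pipeline.

```python
import numpy as np

RHO = 1060.0   # assumed blood density [kg/m^3]

def ivpg_from_cmm(v, ds, dt):
    """Pressure drop along the scanline from a color M-mode velocity field.

    v : 2-D array, v[i, j] = velocity [m/s] at position s_i along the scanline and time t_j.
    Applies the 1-D Euler equation dp/ds = -rho * (dv/dt + v * dv/ds), then integrates
    along s (trapezoid rule) and converts Pa to mmHg for each time step.
    """
    dv_dt = np.gradient(v, dt, axis=1)
    dv_ds = np.gradient(v, ds, axis=0)
    dp_ds = -RHO * (dv_dt + v * dv_ds)                       # Pa/m at every (s, t)
    dp_total = np.sum(0.5 * (dp_ds[1:, :] + dp_ds[:-1, :]) * ds, axis=0)   # Pa
    return dp_total / 133.322                                # mmHg

# synthetic filling wave (illustration only): a velocity pulse propagating toward the apex
s = np.linspace(0.0, 0.06, 60)[:, None]            # 6 cm scanline
t = np.linspace(0.0, 0.25, 250)[None, :]           # 250 ms of diastole
v = 0.5 * np.exp(-((s - 1.0 * t) / 0.015) ** 2)    # Gaussian pulse, ~1 m/s propagation
print(ivpg_from_cmm(v, ds=s[1, 0] - s[0, 0], dt=t[0, 1] - t[0, 0]).max())
```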
Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...
2015-09-01
The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient's tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
An optimal pole-matching observer design for estimating tyre-road friction force
NASA Astrophysics Data System (ADS)
Faraji, Mohammad; Johari Majd, Vahid; Saghafi, Behrooz; Sojoodi, Mahdi
2010-10-01
In this paper, considering the dynamical model of tyre-road contacts, we design a nonlinear observer for the on-line estimation of tyre-road friction force using the average lumped LuGre model without any simplification. The design is the extension of a previously offered observer to allow a much more realistic estimation by considering the effect of the rolling resistance and a term related to the relative velocity in the observer. Our aim is not to introduce a new friction model, but to present a more accurate nonlinear observer for the assumed model. We derive linear matrix equality conditions to obtain an observer gain with minimum pole mismatch for the desired observer error dynamic system. We prove the convergence of the observer for the non-simplified model. Finally, we compare the performance of the proposed observer with that of the previously mentioned nonlinear observer, which shows significant improvement in the accuracy of estimation.
Non-destructive evaluation of composite materials using ultrasound
NASA Technical Reports Server (NTRS)
Miller, J. G.
1984-01-01
Investigation of the nondestructive evaluation of advanced composite laminates is summarized. Indices derived from the measurement of fundamental acoustic parameters are used in order to quantitatively estimate the local material properties of the laminate. The following sections describe ongoing studies of phase-insensitive attenuation measurements and discuss several phenomena which influence the previously reported technique of polar backscatter. A simple and effective programmable gate circuit designed for use in estimating attenuation from backscatter is described.
Tank waste remediation system baseline tank waste inventory estimates for fiscal year 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shelton, L.W., Westinghouse Hanford
1996-12-06
A set of tank-by-tank waste inventories is derived from historical waste models, flowsheet records, and analytical data to support the Tank Waste Remediation System flowsheet and retrieval sequence studies. Enabling assumptions and methodologies used to develop the inventories are discussed. These provisional inventories conform to previously established baseline inventories and are meant to serve as an interim basis until standardized inventory estimates are made available.
Chao, Anne; Chiu, Chun-Huo; Colwell, Robert K; Magnago, Luiz Fernando S; Chazdon, Robin L; Gotelli, Nicholas J
2017-11-01
Estimating the species, phylogenetic, and functional diversity of a community is challenging because rare species are often undetected, even with intensive sampling. The Good-Turing frequency formula, originally developed for cryptography, estimates in an ecological context the true frequencies of rare species in a single assemblage based on an incomplete sample of individuals. Until now, this formula has never been used to estimate undetected species, phylogenetic, and functional diversity. Here, we first generalize the Good-Turing formula to incomplete sampling of two assemblages. The original formula and its two-assemblage generalization provide a novel and unified approach to notation, terminology, and estimation of undetected biological diversity. For species richness, the Good-Turing framework offers an intuitive way to derive the non-parametric estimators of the undetected species richness in a single assemblage, and of the undetected species shared between two assemblages. For phylogenetic diversity, the unified approach leads to an estimator of the undetected Faith's phylogenetic diversity (PD, the total length of undetected branches of a phylogenetic tree connecting all species), as well as a new estimator of undetected PD shared between two phylogenetic trees. For functional diversity based on species traits, the unified approach yields a new estimator of undetected Walker et al.'s functional attribute diversity (FAD, the total species-pairwise functional distance) in a single assemblage, as well as a new estimator of undetected FAD shared between two assemblages. Although some of the resulting estimators have been previously published (but derived with traditional mathematical inequalities), all taxonomic, phylogenetic, and functional diversity estimators are now derived under the same framework. All the derived estimators are theoretically lower bounds of the corresponding undetected diversities; our approach reveals the sufficient conditions under which the estimators are nearly unbiased, thus offering new insights. Simulation results are reported to numerically verify the performance of the derived estimators. We illustrate all estimators and assess their sampling uncertainty with an empirical dataset for Brazilian rain forest trees. These estimators should be widely applicable to many current problems in ecology, such as the effects of climate change on spatial and temporal beta diversity and the contribution of trait diversity to ecosystem multi-functionality. © 2017 by the Ecological Society of America.
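For the species-richness case, the Good-Turing reasoning leads to the familiar Chao1-type lower bound for undetected species in a single assemblage, which can be sketched as follows; the helper name and toy sample are illustrative, and the paper's phylogenetic and functional extensions are not reproduced here.

```python
from collections import Counter

def chao1_undetected(abundances):
    """Lower-bound estimate of undetected species richness in one assemblage.

    Uses the Good-Turing/Chao1-type formula f0_hat = f1^2 / (2 * f2), where f1 and f2
    are the numbers of species observed exactly once and exactly twice. Falls back to
    f1 * (f1 - 1) / 2 when no doubletons are observed (the usual bias-corrected form).
    """
    counts = [c for c in abundances if c > 0]
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    f0_hat = f1 * f1 / (2.0 * f2) if f2 > 0 else f1 * (f1 - 1) / 2.0
    return len(counts), f0_hat          # (observed richness, estimated undetected richness)

# toy sample of individuals labelled by species
sample = list("aaaabbbccddefg")         # a:4 b:3 c:2 d:2 e:1 f:1 g:1
abund = Counter(sample).values()
s_obs, f0 = chao1_undetected(abund)
print(s_obs, f0, s_obs + f0)            # observed, undetected estimate, Chao1 richness
```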
Estimating soil matric potential in Owens Valley, California
Sorenson, Stephen K.; Miller, Reuben F.; Welch, Michael R.; Groeneveld, David P.; Branson, Farrel A.
1989-01-01
Much of the floor of Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence is dependent partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on physiological adaptations of the plants and on the water content, matric potential characteristics of the soils. Two methods were used to estimate soil matric potential in test sites in Owens Valley. The first, the filter-paper method, uses water content of filter papers equilibrated to water content of soil samples taken with a hand auger. The previously published calibration relations used to estimate soil matric potential from the water content of the filter papers were modified on the basis of current laboratory data. The other method of estimating soil matric potential was a modeling approach based on data from this and previous investigations. These data indicate that the base-10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil. The slope and intercepts of this function vary with the texture and saturation capacity of the soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1-m depth intervals derived by using the hand auger and filter-paper method and entering these values in the soil water model. The characteristic curves then were used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings. Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the soil matric potential value derived by using the filter-paper method could be obtained 90 to 95 percent of the time in soils where water content was less than field capacity. The greatest errors occurred at depths where there was a distinct transition between soils of different textures.
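The modeling approach described (the base-10 logarithm of matric potential as a linear function of gravimetric water content) can be sketched as a simple least-squares fit; the calibration pairs and function names below are assumed for illustration and are not the Owens Valley data.

```python
import numpy as np

def fit_pf_model(theta_g, psi_cm):
    """Fit pF = a + b * theta_g, where pF = log10(|matric potential| in cm of water)
    and theta_g is gravimetric water content (g/g). Returns (a, b)."""
    pf = np.log10(np.abs(psi_cm))
    b, a = np.polyfit(theta_g, pf, 1)
    return a, b

def predict_psi(theta_g, a, b):
    """Predict matric potential from water content; returns (pF, cm of water)."""
    pf = a + b * np.asarray(theta_g)
    return pf, 10.0 ** pf

# illustrative filter-paper calibration pairs (assumed values, not the study's data)
theta = np.array([0.05, 0.10, 0.15, 0.20, 0.25])     # g/g
psi   = np.array([3.0e4, 8.0e3, 2.5e3, 8.0e2, 2.5e2])  # cm of water (suction)
a, b = fit_pf_model(theta, psi)
print(a, b, predict_psi(0.12, a, b))
```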
Miller, Matthew P.; Johnson, Henry M.; Susong, David D.; Wolock, David M.
2015-01-01
Understanding how watershed characteristics and climate influence the baseflow component of stream discharge is a topic of interest to both the scientific and water management communities. Therefore, the development of baseflow estimation methods is a topic of active research. Previous studies have demonstrated that graphical hydrograph separation (GHS) and conductivity mass balance (CMB) methods can be applied to stream discharge data to estimate daily baseflow. While CMB is generally considered to be a more objective approach than GHS, its application across broad spatial scales is limited by a lack of high frequency specific conductance (SC) data. We propose a new method that uses discrete SC data, which are widely available, to estimate baseflow at a daily time step using the CMB method. The proposed approach involves the development of regression models that relate discrete SC concentrations to stream discharge and time. Regression-derived CMB baseflow estimates were more similar to baseflow estimates obtained using a CMB approach with measured high frequency SC data than were the GHS baseflow estimates at twelve snowmelt dominated streams and rivers. There was a near perfect fit between the regression-derived and measured CMB baseflow estimates at sites where the regression models were able to accurately predict daily SC concentrations. We propose that the regression-derived approach could be applied to estimate baseflow at large numbers of sites, thereby enabling future investigations of watershed and climatic characteristics that influence the baseflow component of stream discharge across large spatial scales.
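A minimal sketch of the proposed workflow, regressing specific conductance (SC) on discharge and season from discrete samples and then applying the conductivity mass balance Qb = Q (SC - SC_ro) / (SC_bf - SC_ro) at a daily step, is shown below; the regression form, end-member values, and synthetic hydrograph are assumptions, not the paper's fitted models.

```python
import numpy as np

def fit_sc_model(q_sampled, doy_sampled, sc_sampled):
    """Regress ln(SC) on ln(Q) plus an annual sine/cosine pair (one possible model form)."""
    x = np.column_stack([
        np.ones_like(q_sampled),
        np.log(q_sampled),
        np.sin(2 * np.pi * doy_sampled / 365.25),
        np.cos(2 * np.pi * doy_sampled / 365.25),
    ])
    beta, *_ = np.linalg.lstsq(x, np.log(sc_sampled), rcond=None)
    return beta

def daily_baseflow(q_daily, doy_daily, beta, sc_bf, sc_ro):
    """Conductivity mass balance: Qb = Q * (SC - SC_ro) / (SC_bf - SC_ro),
    with daily SC predicted from the regression model."""
    x = np.column_stack([
        np.ones_like(q_daily),
        np.log(q_daily),
        np.sin(2 * np.pi * doy_daily / 365.25),
        np.cos(2 * np.pi * doy_daily / 365.25),
    ])
    sc_daily = np.exp(x @ beta)
    qb = q_daily * (sc_daily - sc_ro) / (sc_bf - sc_ro)
    return np.clip(qb, 0.0, q_daily)          # baseflow bounded by [0, Q]

# illustrative use with synthetic numbers (end-member SC values are assumptions)
rng = np.random.default_rng(1)
doy_s = rng.uniform(1, 365, 24)                       # ~monthly discrete samples
q_s = 10 + 40 * np.exp(-((doy_s - 150) / 40) ** 2)    # snowmelt-peaked hydrograph
sc_s = 400 * (q_s / 10.0) ** -0.3 * np.exp(rng.normal(0, 0.05, 24))
beta = fit_sc_model(q_s, doy_s, sc_s)
doy_d = np.arange(1, 366)
q_d = 10 + 40 * np.exp(-((doy_d - 150) / 40) ** 2)
print(daily_baseflow(q_d, doy_d, beta, sc_bf=400.0, sc_ro=50.0).mean())
```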
USDA-ARS?s Scientific Manuscript database
Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...
The confounding effect of understory vegetation contributions to satellite-derived estimates of leaf area index (LAI) was investigated on two loblolly pine (Pinus taeda) forest stands located in the southeastern United States. Previous studies have shown that understory can account for 0-40%...
Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales
ERIC Educational Resources Information Center
Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.
2009-01-01
The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…
Unstable solitary-wave solutions of the generalized Benjamin-Bona-Mahony equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, W.R.; Restrepo, J.M.; Bona, J.L.
1994-06-01
The evolution of solitary waves of the gBBM equation is investigated computationally. The experiments confirm previously derived theoretical stability estimates and, more importantly, yield insights into their behavior. For example, highly energetic unstable solitary waves when perturbed are shown to evolve into several stable solitary waves.
Diller, Thomas; Kelly, J William; Blackhurst, Dawn; Steed, Connie; Boeker, Sue; McElveen, Danielle C
2014-06-01
We previously published a formula to estimate the number of hand hygiene opportunities (HHOs) per patient-day using the World Health Organization's "Five Moments for Hand Hygiene" methodology (HOW2 Benchmark Study). HHOs can be used as a denominator for calculating hand hygiene compliance rates when product utilization data are available. This study validates the previously derived HHO estimate using 24-hour video surveillance of health care worker hand hygiene activity. The validation study utilized 24-hour video surveillance recordings of 26 patients' hospital stays to measure the actual number of HHOs per patient-day on a medicine ward in a large teaching hospital. Statistical methods were used to compare these results to those obtained by episodic observation of patient activity in the original derivation study. Total hours of data collection were 81.3 and 1,510.8, resulting in 1,740 and 4,522 HHOs in the derivation and validation studies, respectively. Comparisons of the mean and median HHOs per 24-hour period did not differ significantly. HHOs were 71.6 (95% confidence interval: 64.9-78.3) and 73.9 (95% confidence interval: 69.1-84.1), respectively. This study validates the HOW2 Benchmark Study and confirms that expected numbers of HHOs can be estimated from the unit's patient census and patient-to-nurse ratio. These data can be used as denominators in calculations of hand hygiene compliance rates from electronic monitoring using the "Five Moments for Hand Hygiene" methodology. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lesieur, Thibault; Krzakala, Florent; Zdeborová, Lenka
2017-07-01
This article is an extended version of previous work of Lesieur et al (2015 IEEE Int. Symp. on Information Theory Proc. pp 1635-9 and 2015 53rd Annual Allerton Conf. on Communication, Control and Computing (IEEE) pp 680-7) on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models—presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to result we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods.
Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.
2015-01-01
We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm deg−1 at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341
NASA Astrophysics Data System (ADS)
Wetmore, P. H.; Xie, S.; Gallant, E.; Owen, L. A.; Dixon, T. H.
2017-12-01
Fault slip rate is fundamental to accurate seismic hazard assessment. In the Mojave Desert section of the Eastern California Shear Zone (ECSZ), previous studies have suggested a discrepancy between short-term geodetic and long-term geologic slip rate estimates. Understanding the origin of this discrepancy could lead to better understanding of stress evolution, and improve earthquake hazard estimates in general. We measured offsets in alluvial fans along the Calico fault near Newberry Springs, California, and used exposure age dating based on the cosmogenic nuclide 10Be to date the offset landforms. We derive a mean slip rate of 3.6 mm/yr, representing an average over the last few hundred thousand years, significantly faster than previous estimates. Considering numerous faults in the Mojave Desert and limited geologic slip rate estimates, it is premature to claim a geologic versus geodetic "discrepancy" for the ECSZ. More slip rate data, from all faults within the ECSZ, are needed to provide a statistically meaningful assessment of the geologic rates for each of the faults comprising the ECSZ.
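The slip-rate arithmetic is simply offset divided by landform age; the numbers below are hypothetical values chosen only to reproduce a rate of about 3.6 mm/yr, not the study's measured offsets or 10Be ages.

```python
def slip_rate_mm_per_yr(offset_m, age_ka):
    """Mean fault slip rate from an offset landform: offset / exposure age."""
    return offset_m * 1000.0 / (age_ka * 1000.0)      # m -> mm, ka -> yr

# hypothetical offset and age, for illustration of the calculation only
print(slip_rate_mm_per_yr(offset_m=180.0, age_ka=50.0))   # -> 3.6 mm/yr
```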
Estimating the theoretical semivariogram from finite numbers of measurements
Zheng, Li; Silliman, Stephen E.
2000-01-01
We investigate from a theoretical basis the impacts of the number, location, and correlation among measurement points on the quality of an estimate of the semivariogram. The unbiased nature of the semivariogram estimator γ̂(r) is first established for a general random process Z(x). The variance of γ̂Z(r) is then derived as a function of the sampling parameters (the number of measurements and their locations). In applying this function to the case of estimating the semivariograms of the transmissivity and the hydraulic head field, it is shown that the estimation error depends on the number of the data pairs, the correlation among the data pairs (which, in turn, are determined by the form of the underlying semivariogram γ(r)), the relative locations of the data pairs, and the separation distance at which the semivariogram is to be estimated. Thus design of an optimal sampling program for semivariogram estimation should include consideration of each of these factors. Further, the function derived for the variance of γ̂Z(r) is useful in determining the reliability of a semivariogram developed from a previously established sampling design.
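For concreteness, the classical estimator whose variance is analyzed here, γ̂(r) = (1 / 2N(r)) Σ [Z(xi) - Z(xj)]² over pairs separated by roughly r, can be computed from scattered data as in the sketch below; the lag binning, synthetic field, and function name are illustrative assumptions.

```python
import numpy as np

def empirical_semivariogram(coords, values, lag_edges):
    """Classical estimator gamma_hat(h) = (1 / (2 N(h))) * sum [Z(x_i) - Z(x_j)]^2
    over all pairs whose separation falls in each lag bin. Returns lag-bin centers,
    gamma_hat, and the number of pairs N(h) per bin (which controls the estimation
    variance discussed in the paper)."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)            # each pair counted once
    d, sq = d[iu], sq[iu]
    gamma, npairs = [], []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        sel = (d >= lo) & (d < hi)
        npairs.append(int(sel.sum()))
        gamma.append(sq[sel].mean() / 2.0 if sel.any() else np.nan)
    centers = 0.5 * (np.asarray(lag_edges[:-1]) + np.asarray(lag_edges[1:]))
    return centers, np.array(gamma), np.array(npairs)

# small synthetic example
rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(60, 2))                # 60 measurement locations
z = np.sin(xy[:, 0] / 20.0) + rng.normal(0, 0.2, 60)  # spatially correlated field + noise
print(empirical_semivariogram(xy, z, lag_edges=np.arange(0, 60, 10)))
```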
NASA Astrophysics Data System (ADS)
Hagedorn, Benjamin
2015-04-01
Geochemical data deduced from groundwater and vein calcite were used to quantify groundwater recharge and interbasin flow rates in the Tule Desert (southeastern Nevada). 14C age gradients below the water table suggest recharge rates of 1-2 mm/yr which correspond to a sustainable yield of 5 × 10^-4 km^3/yr to 1 × 10^-3 km^3/yr. Uncertainties in the applied effective porosity value and increasing horizontal interbasin flow components at greater depths may bias these estimates low compared to those previously reported using the water budget method. The deviation of the groundwater δ18O time-series pattern for the Pleistocene-Holocene transition from that of the Devils Hole vein calcite (which is considered a proxy for local climate change) allows interbasin flow rates of northerly derived groundwater to be estimated. The constrained rates (75.0-120 m/yr) are slightly higher than those previously calculated using Darcy's Law, but translate into hydraulic conductivity values strikingly similar to those obtained from pump tests. Data further indicate that production wells located closer to the western mountainous margin will be producing mainly from locally derived mountain-system recharge whereas wells located closer to the eastern margin are more influenced by older, regionally derived carbonate groundwater.
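The link between an areal recharge rate and a volumetric sustainable yield is a unit conversion over the contributing area; the ~500 km^2 area in the sketch below is an assumed, back-calculated value that merely reproduces the quoted range, not a figure from the study.

```python
def volumetric_recharge_km3_per_yr(rate_mm_per_yr, area_km2):
    """Convert an areal recharge rate to a basin-scale volumetric rate.
    1 mm/yr over 1 km^2 = 1e-6 km^3/yr."""
    return rate_mm_per_yr * 1.0e-6 * area_km2

# an assumed contributing area of ~500 km^2 reproduces 1-2 mm/yr -> 5e-4 to 1e-3 km^3/yr
print(volumetric_recharge_km3_per_yr(1.0, 500.0),
      volumetric_recharge_km3_per_yr(2.0, 500.0))
```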
NASA Astrophysics Data System (ADS)
Qiu, Xin; Cheng, Irene; Yang, Fuquan; Horb, Erin; Zhang, Leiming; Harner, Tom
2018-03-01
Two speciated and spatially resolved emissions databases for polycyclic aromatic compounds (PACs) in the Athabasca oil sands region (AOSR) were developed. The first database was derived from volatile organic compound (VOC) emissions data provided by the Cumulative Environmental Management Association (CEMA) and the second database was derived from additional data collected within the Joint Canada-Alberta Oil Sands Monitoring (JOSM) program. CALPUFF modelling results for atmospheric polycyclic aromatic hydrocarbons (PAHs), alkylated PAHs, and dibenzothiophenes (DBTs), obtained using each of the emissions databases, are presented and compared with measurements from a passive air monitoring network. The JOSM-derived emissions resulted in better model-measurement agreement in the total PAH concentrations and for most PAH species concentrations compared to results using CEMA-derived emissions. At local sites near oil sands mines, the percent error of the model compared to observations decreased from 30 % using the CEMA-derived emissions to 17 % using the JOSM-derived emissions. The improvement at local sites was likely attributed to the inclusion of updated tailings pond emissions estimated from JOSM activities. In either the CEMA-derived or JOSM-derived emissions scenario, the model underestimated PAH concentrations by a factor of 3 at remote locations. Potential reasons for the disagreement include forest fire emissions, re-emissions of previously deposited PAHs, and long-range transport not considered in the model. Alkylated PAH and DBT concentrations were also significantly underestimated. The CALPUFF model is expected to predict higher concentrations because of the limited chemistry and deposition modelling. Thus the model underestimation of PACs is likely due to gaps in the emissions database for these compounds and uncertainties in the methodology for estimating the emissions. Future work is required that focuses on improving the PAC emissions estimation and speciation methodologies and reducing the uncertainties in VOC emissions which are subsequently used in PAC emissions estimation.
On the ab initio evaluation of Hubbard parameters. II. The κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal
NASA Astrophysics Data System (ADS)
Fortunelli, Alessandro; Painelli, Anna
1997-05-01
A previously proposed approach for the ab initio evaluation of Hubbard parameters is applied to BEDT-TTF dimers. The dimers are positioned according to four geometries taken as the first neighbors from the experimental data on the κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal. RHF-SCF, CAS-SCF and frozen-orbital calculations using the 6-31G** basis set are performed with different values of the total charge, allowing us to derive all the relevant parameters. It is found that the electronic structure of the BEDT-TTF planes is adequately described by the standard Extended Hubbard Model, with the off-diagonal electron-electron interaction terms (X and W) of negligible size. The derived parameters are in good agreement with available experimental data. Comparison with previous theoretical estimates shows that the t values compare well with those obtained from Extended Hückel Theory (whereas the minimal basis set estimates are completely unreliable). On the other hand, the Uaeff values exhibit an appreciable dependence on the chemical environment.
NASA Technical Reports Server (NTRS)
Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.
2017-01-01
We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth) and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties, and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the MODIS Collection 6 AOD data derived with the dark target and deep blue algorithms has extended the coverage of the MOC retrievals towards higher latitudes. The MOC aerosol retrievals agree better with AERONET in terms of the single scattering albedo (ssa) at 441 nm than ssa calculated from OMI and MODIS data alone, indicating that CALIOP aerosol backscatter data contains information on aerosol absorption. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Overall, the MOC-based calculations of clear-sky DARE at TOA over land are smaller (less negative) than previous model or observational estimates due to the inclusion of more absorbing aerosol retrievals over brighter surfaces, not previously available for observationally-based estimates of DARE. MOC-based DARE estimates at the surface over land and total (land and ocean) DARE estimates at TOA are in between previous model and observational results. Comparisons of seasonal aerosol properties to AeroCom Phase II results show generally good agreement; the best agreement with forcing results at TOA is found with GMI-MerraV3. We discuss sampling issues that affect the comparisons and the major challenges in extending our clear-sky DARE results to all-sky conditions. We present estimates of clear-sky and all-sky DARE and show uncertainties that stem from the assumptions in the spatial extrapolation and accuracy of aerosol and cloud properties, in the diurnal evolution of these properties, and in the radiative transfer calculations.
The global magnitude-frequency relationship for large explosive volcanic eruptions
NASA Astrophysics Data System (ADS)
Rougier, Jonathan; Sparks, R. Stephen J.; Cashman, Katharine V.; Brown, Sarah K.
2018-01-01
For volcanoes, as for other natural hazards, the frequency of large events diminishes with their magnitude, as captured by the magnitude-frequency relationship. Assessing this relationship is valuable both for the insights it provides about volcanism, and for the practical challenge of risk management. We derive a global magnitude-frequency relationship for explosive volcanic eruptions of at least 300 Mt of erupted mass (or M4.5). Our approach is essentially empirical, based on the eruptions recorded in the LaMEVE database. It differs from previous approaches mainly in our conservative treatment of magnitude-rounding and under-recording. Our estimate for the return period of 'super-eruptions' (1000 Gt, or M8) is 17 ka (95% CI: 5.2 ka, 48 ka), which is substantially shorter than previous estimates, indicating that volcanoes pose a larger risk to human civilisation than previously thought.
Progress in navigation filter estimate fusion and its application to spacecraft rendezvous
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1994-01-01
A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersections of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing, and redundancy management.
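For the special case in which the two filters' errors are uncorrelated (a simplification of the setting treated in the paper, which also handles the cross-covariance between filters), the trace-minimizing combination reduces to information-weighted fusion, sketched below with illustrative covariances.

```python
import numpy as np

def fuse_estimates(x1, p1, x2, p2):
    """Information-weighted fusion of two unbiased estimates with covariances p1, p2.

    Assumes the two estimation errors are uncorrelated (a simplification of the paper's
    setting). Under that assumption the combination
        Pf = (P1^-1 + P2^-1)^-1,   xf = Pf (P1^-1 x1 + P2^-1 x2)
    minimizes the trace of the fused covariance.
    """
    i1, i2 = np.linalg.inv(p1), np.linalg.inv(p2)
    pf = np.linalg.inv(i1 + i2)
    xf = pf @ (i1 @ x1 + i2 @ x2)
    return xf, pf

# two noisy estimates of the same 2-D state (illustrative numbers)
x1, p1 = np.array([1.0, 0.0]), np.diag([0.5, 2.0])
x2, p2 = np.array([1.2, 0.3]), np.diag([2.0, 0.5])
xf, pf = fuse_estimates(x1, p1, x2, p2)
print(xf, np.trace(pf), np.trace(p1), np.trace(p2))   # fused trace is smaller than either
```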
A biomechanical model for fibril recruitment: Evaluation in tendons and arteries.
Bevan, Tim; Merabet, Nadege; Hornsby, Jack; Watton, Paul N; Thompson, Mark S
2018-06-06
Simulations of soft tissue mechanobiological behaviour are increasingly important for clinical prediction of aneurysm, tendinopathy and other disorders. Mechanical behaviour at low stretches is governed by fibril straightening, transitioning into load-bearing at recruitment stretch, resulting in a tissue stiffening effect. Previous investigations have suggested theoretical relationships between stress-stretch measurements and the recruitment probability density function (PDF) but have not derived these rigorously nor evaluated them experimentally. Other work has proposed image-based methods for measurement of recruitment but made use of arbitrary fibril critical straightness parameters. The aim of this work was to provide a sound theoretical basis for estimating the recruitment PDF from stress-stretch measurements and to evaluate this relationship using image-based methods, clearly motivating the choice of fibril critical straightness parameter in rat tail tendon and porcine artery. Rigorous derivation showed that the recruitment PDF may be estimated from the second stretch derivative of the first Piola-Kirchhoff tissue stress. Image-based fibril recruitment identified the fibril straightness parameter that maximised Pearson correlation coefficients (PCC) with estimated PDFs. Using these critical straightness parameters, the new method for estimating the recruitment PDF showed a PCC with image-based measures of 0.915 and 0.933 for tendons and arteries, respectively. This method may be used for accurate estimation of the fibril recruitment PDF in mechanobiological simulation, where fibril-level mechanical parameters are important for predicting cell behaviour. Copyright © 2018 Elsevier Ltd. All rights reserved.
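The stated relationship, recruitment PDF proportional to the second stretch derivative of the first Piola-Kirchhoff stress, can be checked numerically as in the sketch below; the linear-elastic fibril assumption, Gaussian recruitment distribution, and parameter values are illustrative, not the paper's specimen data.

```python
import numpy as np

def recruitment_pdf(stretch, stress):
    """Estimate the fibril recruitment probability density from a stress-stretch curve.

    Following the relationship derived in the paper, the recruitment PDF is proportional
    to the second derivative of the first Piola-Kirchhoff stress with respect to stretch.
    It is computed here by repeated numerical differentiation and normalized to unit area;
    real (noisy) data would need smoothing before being differentiated twice.
    """
    d2p = np.gradient(np.gradient(stress, stretch), stretch)
    pdf = np.clip(d2p, 0.0, None)                        # negative values are numerical noise
    area = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(stretch))
    return pdf / area

# synthetic check: linear-elastic fibrils recruited at Gaussian-distributed stretches
lam = np.linspace(1.0, 1.10, 400)
mu, sig, k = 1.04, 0.01, 100.0
true_pdf = np.exp(-0.5 * ((lam - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
# tissue stress P(l) = k * integral over recruited fibrils of (l - l_r) * pdf(l_r) dl_r
dlam = lam[1] - lam[0]
stress = k * np.array([np.sum((l - lam[:i + 1]) * true_pdf[:i + 1]) * dlam
                       for i, l in enumerate(lam)])
est = recruitment_pdf(lam, stress)
print(lam[np.argmax(est)])       # recovers a peak near mu = 1.04
```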
Complex formation of vanadium(V) with resorcylalhydrazides of carboxylic acids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dudarev, V.I.; Dolgorev, V.A.; Volkov, A.N.
1986-08-01
In this work, a previous investigation of hydrazine derivatives as analytical reagents for vanadium(V) was continued. The authors studied arylalhydrazones, derivatives of resorcylalhydrazides of anisic (RHASA), anthranilic (RHANA), and benzoic (RHBA) acids. The reagents presented differ from those studied previously by the presence of a second hydroxy group in the para-position of the benzene ring (the resorcinol fragment) and substituents in the benzoin fragment. Such changes made it possible to increase the solubility of the reagents in aqueous medium and to estimate the change in the main spectrophotometric parameters of the analytical reaction. A rapid method was developed for the determination of vanadium in steels with the resorcylalhydrazide of anthranilic acid. The minimum determinable vanadium content is 0.18 micrograms/ml.
NASA Astrophysics Data System (ADS)
Rawling, Geoffrey C.; Newton, B. Talon
2016-06-01
The Sacramento Mountains and the adjacent Roswell Artesian Basin, in south-central New Mexico (USA), comprise a regional hydrologic system, wherein recharge in the mountains ultimately supplies water to the confined basin aquifer. Geologic, hydrologic, geochemical, and climatologic data were used to delineate the area of recharge in the southern Sacramento Mountains. The water-table fluctuation and chloride mass-balance methods were used to quantify recharge over a range of spatial and temporal scales. Extrapolation of the quantitative recharge estimates to the entire Sacramento Mountains region allowed comparison with previous recharge estimates for the northern Sacramento Mountains and the Roswell Artesian Basin. Recharge in the Sacramento Mountains is estimated to range from 159.86 × 10^6 to 209.42 × 10^6 m^3/year. Both the location of recharge and the range in estimates are consistent with previous work that suggests that ~75% of the recharge to the confined aquifer in the Roswell Artesian Basin has moved downgradient through the Yeso Formation from distal recharge areas in the Sacramento Mountains. A smaller recharge component is derived from infiltration of streamflow beneath the major drainages that cross the Pecos Slope, but in the southern Sacramento Mountains much of this water is ultimately derived from spring discharge. Direct recharge across the Pecos Slope between the mountains and the confined basin aquifer is much smaller than either of the other two components.
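The two estimation methods named here reduce to simple formulas: water-table fluctuation, R = Sy × Δh, and chloride mass balance, R = P × Cl_precip / Cl_groundwater. The sketch below uses assumed values only, not parameters from the Sacramento Mountains study.

```python
def recharge_wtf(specific_yield, water_table_rise_m):
    """Water-table fluctuation method: R = Sy * dH (per event or per year), in metres."""
    return specific_yield * water_table_rise_m

def recharge_cmb(precip_mm_per_yr, cl_precip_mg_per_l, cl_groundwater_mg_per_l):
    """Chloride mass balance: R = P * Cl_precip / Cl_groundwater, in mm/yr,
    assuming the chloride in recharge comes only from precipitation."""
    return precip_mm_per_yr * cl_precip_mg_per_l / cl_groundwater_mg_per_l

# illustrative (assumed) inputs
print(recharge_wtf(specific_yield=0.02, water_table_rise_m=1.5) * 1000.0, "mm")
print(recharge_cmb(precip_mm_per_yr=600.0, cl_precip_mg_per_l=0.4,
                   cl_groundwater_mg_per_l=6.0), "mm/yr")
```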
NASA Astrophysics Data System (ADS)
Henson, S.; Sanders, R.; Madsen, E.; Le Moigne, F.; Quartly, G.
2012-04-01
A major term in the global carbon cycle is the ocean's biological carbon pump which is dominated by sinking of small organic particles from the surface ocean to its interior. Here we examine global patterns in particle export efficiency (PEeff), the proportion of primary production that is exported from the surface ocean, and transfer efficiency (Teff), the fraction of exported organic matter that reaches the deep ocean. This is achieved through extrapolating from in situ estimates of particulate organic carbon export to the global scale using satellite-derived data. Global scale estimates derived from satellite data show, in keeping with earlier studies, that PEeff is high at high latitudes and low at low latitudes, but that Teff is low at high latitudes and high at low latitudes. However, in contrast to the relationship observed for deep biomineral fluxes in previous studies, we find that Teff is strongly negatively correlated with opal export flux from the upper ocean, but uncorrelated with calcium carbonate export flux. We hypothesise that the underlying factor governing the spatial patterns observed in Teff is ecosystem function, specifically the degree of recycling occurring in the upper ocean, rather than the availability of calcium carbonate for ballasting. Finally, our estimate of global integrated carbon export is only 50% of previous estimates. The lack of consensus amongst different methodologies on the strength of the biological carbon pump emphasises that our knowledge of a major planetary carbon flux remains incomplete.
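The two efficiencies examined here are simple flux ratios, PEeff = export flux / primary production and Teff = deep flux / export flux; the sketch below uses assumed fluxes purely to show the arithmetic.

```python
def particle_export_efficiency(export_flux, primary_production):
    """PEeff: fraction of primary production exported from the surface ocean."""
    return export_flux / primary_production

def transfer_efficiency(deep_flux, export_flux):
    """Teff: fraction of exported organic carbon that survives to the deep ocean."""
    return deep_flux / export_flux

# illustrative fluxes in mmol C m^-2 d^-1 (assumed values)
npp, export, deep = 60.0, 9.0, 1.8
print(particle_export_efficiency(export, npp),   # 0.15 -> PEeff of 15%
      transfer_efficiency(deep, export))         # 0.20 -> Teff of 20%
```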
A reassessment of the emergence time of European bat lyssavirus type 1.
Hughes, Gareth J
2008-12-01
The previous study of the evolutionary rates of European bat lyssavirus type 1 (EBLV-1) used a strict molecular clock to estimate substitution rates of the nucleoprotein gene and, in turn, times of the most recent common ancestor (tMRCA) of the entire genotype and the two major EBLV-1 lineages (EBLV-1A and EBLV-1B). The results of that study suggested that the evolutionary rate of EBLV-1 was one of the lowest recorded for RNA viruses and that genetic diversity of EBLV-1 arose 500-750 years ago. Here I show that the use of a relaxed molecular clock (allowing branch rates to vary within a phylogeny) indicates that these previous estimates should be revised. The relaxed clock provides a significantly better fit to all datasets. The substitution rate of EBLV-1B is compatible with that expected given previous estimates for the N gene of rabies virus, whilst rate estimations for EBLV-1A appear to be confounded by substantial rate variation within the phylogeny. The relaxed clock substitution rate for EBLV-1 (1.1 × 10^-4) is higher than had been estimated previously, and closer to that expected for the N gene. Moreover, tMRCA estimates for EBLV-1 are substantially reduced using the relaxed molecular clock (70-300 years), although the differing dynamics of EBLV-1A and EBLV-1B confound the confidence in this estimate. Current diversity of both EBLV-1A and EBLV-1B appears to have emerged within the last 100 years. Reconstruction of the population histories suggests that EBLV-1B may be emerging whilst the signal derived from the EBLV-1A phylogeny may be dampened by clade-specific dynamics.
Exact renormalization group equation for the Lifshitz critical point
NASA Astrophysics Data System (ADS)
Bervillier, C.
2004-10-01
An exact renormalization group equation (ERGE) accounting for an anisotropic scaling is derived. The critical and tricritical Lifshitz points are then studied at leading order of the derivative expansion, which is shown to involve two differential equations. The resulting estimates of the Lifshitz critical exponents compare well with the O(ε) calculations. In the case of the Lifshitz tricritical point, it is shown that a marginally relevant coupling defies the perturbative approach, since it actually makes the fixed point referred to in the previous O(ε) perturbative calculations ultimately unstable.
Aylward, Lesa L; Brunet, Robert C; Starr, Thomas B; Carrier, Gaétan; Delzell, Elizabeth; Cheng, Hong; Beall, Colleen
2005-08-01
Recent studies demonstrating a concentration dependence of elimination of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) suggest that previous estimates of exposure for occupationally exposed cohorts may have underestimated actual exposure, resulting in a potential overestimate of the carcinogenic potency of TCDD in humans based on the mortality data for these cohorts. Using a database on U.S. chemical manufacturing workers potentially exposed to TCDD compiled by the National Institute for Occupational Safety and Health (NIOSH), we evaluated the impact of using a concentration- and age-dependent elimination model (CADM) (Aylward et al., 2005) on estimates of serum lipid area under the curve (AUC) for the NIOSH cohort. These data were used previously by Steenland et al. (2001) in combination with a first-order elimination model with an 8.7-year half-life to estimate cumulative serum lipid concentration (equivalent to AUC) for these workers for use in cancer dose-response assessment. Serum lipid TCDD measurements taken in 1988 for a subset of the cohort were combined with the NIOSH job exposure matrix and work histories to estimate dose rates per unit of exposure score. We evaluated the effect of choices in regression model (regression on untransformed vs. ln-transformed data and inclusion of a nonzero regression intercept) as well as the impact of choices of elimination models and parameters on estimated AUCs for the cohort. Central estimates for dose rate parameters derived from the serum-sampled subcohort were applied with the elimination models to time-specific exposure scores for the entire cohort to generate AUC estimates for all cohort members. Use of the CADM resulted in improved model fits to the serum sampling data compared to the first-order models. Dose rates varied by a factor of 50 among different combinations of elimination model, parameter sets, and regression models. Use of a CADM results in increases of up to five-fold in AUC estimates for the more highly exposed members of the cohort compared to estimates obtained using the first-order model with 8.7-year half-life. This degree of variation in the AUC estimates for this cohort would affect substantially the cancer potency estimates derived from the mortality data from this cohort. Such variability and uncertainty in the reconstructed serum lipid AUC estimates for this cohort, depending on elimination model, parameter set, and regression model, have not been described previously and are critical components in evaluating the dose-response data from the occupationally exposed populations.
NASA Astrophysics Data System (ADS)
Goodall, H.; Gregory, L. C.; Wedmore, L.; Roberts, G.; Shanks, R. P.; McCaffrey, K. J. W.; Amey, R.; Hooper, A. J.
2017-12-01
The cosmogenic isotope chlorine-36 (36Cl) is increasingly used as a tool to investigate normal fault slip rates over the last 10-20 thousand years. These slip histories are being used to address complex questions, including investigating slip clustering and understanding local and large scale fault interaction. Measurements are time consuming and expensive, and as a result there has been little work done validating these 36Cl derived slip histories. This study aims to investigate if the results are repeatable and therefore reliable estimates of how normal faults have been moving in the past. Our approach is to test if slip histories derived from 36Cl are the same when measured at different points along the same fault. As normal fault planes are progressively exhumed from the surface they accumulate 36Cl. Modelling these 36Cl concentrations allows estimation of a slip history. In a previous study, samples were collected from four sites on the Magnola fault in the Italian Apennines. Remodelling of the 36Cl data using a Bayesian approach shows that the sites produced disparate slip histories, which we interpret as being due to variable site geomorphology. In this study, multiple sites have been sampled along the Campo Felice fault in the central Italian Apennines. Initial results show strong agreement between the sites we have processed so far and a previous study. This indicates that if sample sites are selected taking the geomorphology into account, then 36Cl derived slip histories will be highly similar when sampled at any point along the fault. Therefore our study suggests that 36Cl derived slip histories are a consistent record of fault activity in the past.
Modifying Taper-Derived Merchantable Height Estimates to Account for Tree Characteristics
James A. Westfall
2006-01-01
The U.S. Department of Agriculture Forest Service Northeastern Forest Inventory and Analysis program (NE-FIA) is developing regionwide tree-taper equations. Unlike most previous work on modeling tree form, this effort necessarily includes a wide array of tree species. For some species, branching patterns can produce undesirable tree form that reduces the merchantable...
Discrete return lidar-based prediction of leaf area index in two conifer forests
Jennifer L. R. Jensen; Karen S. Humes; Lee A. Vierling; Andrew T. Hudak
2008-01-01
Leaf area index (LAI) is a key forest structural characteristic that serves as a primary control for exchanges of mass and energy within a vegetated ecosystem. Most previous attempts to estimate LAI from remotely sensed data have relied on empirical relationships between field-measured observations and various spectral vegetation indices (SVIs) derived from optical...
Michael C. Stambaugh; Richard P. Guyette; Keith W. Grabner; Jeremy Kolaks
2006-01-01
Measuring success of fuels management is improved by understanding rates of litter accumulation and decay in relation to disturbance events. Despite the broad ecological importance of litter, little is known about the parameters of accumulation and decay rates in Ozark forests. Previously published estimates were used to derive accumulation rates and combined litter...
Standard deviations of composition measurements in atom probe analyses-Part II: 3D atom probe.
Danoix, F; Grancher, G; Bostel, A; Blavette, D
2007-09-01
In a companion paper [F. Danoix, G. Grancher, A. Bostel, D. Blavette, Surf. Interface Anal. this issue (previous paper)], the derivation of variances of the estimates of measured composition, and the underlying hypotheses, have been revisited in the case of conventional one-dimensional (1D) atom probes. In this second paper, we will concentrate on the analytical derivation of the variance when the estimate of composition is obtained from a 3D atom probe. As will be discussed, when the position information is available, compositions can be derived either from constant-number-of-atoms blocks or from constant-volume blocks. The analytical treatment in the first case is identical to the one developed for conventional 1D instruments, and will not be discussed further in this paper. Conversely, in the second case, the analytical treatment is different, as is the formula for the variance. In particular, it will be shown that the detection efficiency plays an important role in the determination of the variance.
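As a baseline for the quantities discussed, the binomial model for a composition estimate from a single block gives var(Ĉ) = C(1 - C)/N_detected; treating the detection efficiency Q as simply scaling the number of detected atoms in a constant-volume block is a simplified reading of the paper's point, sketched below with assumed values.

```python
import math

def composition_std(conc, n_atoms_in_block, detection_efficiency=1.0):
    """Standard deviation of a composition estimate from one sampling block.

    Baseline binomial model: var(C_hat) = C * (1 - C) / N_detected. For a constant-volume
    block only a fraction Q of the atoms in the volume is detected, so
    N_detected ~= Q * N_volume (a simplification; the full derivation is in the article).
    """
    n_detected = detection_efficiency * n_atoms_in_block
    return math.sqrt(conc * (1.0 - conc) / n_detected)

# 5 at.% solute, blocks containing 10,000 atoms, 50% detection efficiency (assumed values)
print(composition_std(0.05, 10_000, detection_efficiency=0.5))   # ~0.0031, i.e. ~0.31 at.%
```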
Estimating earthquake magnitudes from reported intensities in the central and eastern United States
Boyd, Oliver; Cramer, Chris H.
2014-01-01
A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.
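To illustrate the general shape of such an inversion, the sketch below assumes a generic intensity prediction equation with placeholder coefficients (`c0`-`c3` are invented for demonstration and are not the coefficients derived in this study) and recovers a magnitude estimate by averaging per-observation solutions.

```python
import numpy as np

# Hypothetical coefficients of a generic intensity prediction equation (IPE):
#   I = c0 + c1*M + c2*ln(R) + c3*R,  with R the hypocentral distance in km.
# These values are illustrative placeholders, not the study's coefficients.
c0, c1, c2, c3 = 1.5, 1.8, -1.1, -0.002

def magnitude_from_intensities(intensities, distances_km):
    """Invert the assumed IPE for magnitude at each intensity observation
    and return the mean estimate with its standard error."""
    I = np.asarray(intensities, dtype=float)
    R = np.asarray(distances_km, dtype=float)
    m = (I - c0 - c2 * np.log(R) - c3 * R) / c1
    return m.mean(), m.std(ddof=1) / np.sqrt(len(m))

# Example with made-up felt reports (intensity, distance) pairs
m_hat, se = magnitude_from_intensities([7, 6, 5, 4], [50, 120, 300, 600])
print(f"M ~ {m_hat:.1f} +/- {se:.1f}")
```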
NASA Technical Reports Server (NTRS)
Della-Corte, Christopher
2012-01-01
Foil gas bearings are a key technology in many commercial and emerging oil-free turbomachinery systems. These bearings are nonlinear and have been difficult to model analytically in terms of performance characteristics such as load capacity, power loss, stiffness, and damping. Previous investigations led to an empirically derived method to estimate load capacity. This method has been a valuable tool in system development. The current work extends this tool concept to include rules for stiffness and damping coefficient estimation. It is expected that these rules will further accelerate the development and deployment of advanced oil-free machines operating on foil gas bearings.
Marsh, Kimberly; Mahy, Mary; Salomon, Joshua A.; Hogan, Daniel R.
2014-01-01
Objective(s): To assess differences between HIV prevalence estimates derived from national population surveys and antenatal care (ANC) surveillance sites and to improve the calibration of ANC-derived estimates in Spectrum 2013 to more appropriately account for differences between these data. Design: Retrospective analysis of national population survey and ANC surveillance data from 25 countries with generalized epidemics in sub-Saharan Africa and 8 countries with concentrated epidemics. Methods: Adult national population survey and ANC surveillance HIV prevalence estimates were compared for all available national population survey data points for the years 1999–2012. For sub-Saharan Africa, a mixed-effects linear regression model determined whether the relationship between national population and ANC estimates was constant across surveys. A new calibration method was developed to incorporate national population survey data directly into the likelihood for HIV prevalence in countries with generalized epidemics. Results were used to develop default rules for adjusting ANC data for countries with no national population surveys. Results: ANC surveillance data typically overestimate population prevalence, although a wide variation, particularly in rural areas, is observed across countries and survey years. The new calibration method yields similar point estimates to previous approaches, but leads to an average 44% increase in the width of 95% uncertainty intervals. Conclusion: Important biases remain in ANC surveillance data for HIV prevalence. The new approach to model-fitting in Spectrum 2013 more appropriately accounts for this bias when producing national estimates in countries with generalized epidemics. In countries with concentrated epidemics, local sex ratios should be used to calibrate ANC surveillance estimates. PMID:25203158
Fisher, Jason C.; Rousseau, Joseph P.; Bartholomay, Roy C.; Rattray, Gordon W.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Energy, evaluated a three-dimensional model of groundwater flow in the fractured basalts and interbedded sediments of the eastern Snake River Plain aquifer at and near the Idaho National Laboratory to determine whether model-derived estimates of groundwater movement are consistent with (1) results from previous studies on water chemistry type, (2) the geochemical mixing at an example well, and (3) independently derived estimates of the average linear groundwater velocity. Simulated steady-state flow fields were analyzed using backward particle-tracking simulations that were based on a modified version of the particle tracking program MODPATH. Model results were compared to the 5-microgram-per-liter lithium contour interpreted to represent the transition from a water type that is primarily composed of tributary valley underflow and streamflow-infiltration recharge to a water type primarily composed of regional aquifer water. This comparison indicates several shortcomings in the way the model represents flow in the aquifer. The eastward movement of tributary valley underflow and streamflow-infiltration recharge is overestimated in the north-central part of the model area and underestimated in the central part of the model area. Model inconsistencies can be attributed to large contrasts in hydraulic conductivity between hydrogeologic zones. Sources of water at well NPR-W01 were identified using backward particle tracking, and they were compared to the relative percentages of source water chemistry determined using geochemical mass balance and mixing models. The particle tracking results compare reasonably well with the chemistry results for groundwater derived from surface-water sources (-28 percent error), but overpredict the proportion of groundwater derived from regional aquifer water (108 percent error) and underpredict the proportion of groundwater derived from tributary valley underflow from the Little Lost River valley (-74 percent error). These large discrepancies may be attributed to large contrasts in hydraulic conductivity between hydrogeologic zones and (or) a short-circuiting of underflow from the Little Lost River valley to an area of high hydraulic conductivity. Independently derived estimates of the average groundwater velocity at 12 well locations within the upper 100 feet of the aquifer were compared to model-derived estimates. Agreement between velocity estimates was good at wells with travel paths located in areas of sediment-rich rock (root-mean-square error [RMSE] = 5.2 feet per day [ft/d]) and poor in areas of sediment-poor rock (RMSE = 26.2 ft/d); simulated velocities in sediment-poor rock were 2.5 to 4.5 times larger than independently derived estimates at wells USGS 1 (less than 14 ft/d) and USGS 100 (less than 21 ft/d). The model's overprediction of groundwater velocities in sediment-poor rock may be attributed to large contrasts in hydraulic conductivity and a very large, model-wide estimate of vertical anisotropy (14,800).
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures including the following: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Biases in estimates of severe renal failure prevalence and its association with covariates were minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
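A minimal sketch of the bootstrap-imputation idea described above, assuming a logistic analysis model; the `bootstrap_imputed_association` helper and the pooling by simple mean/standard deviation are illustrative choices, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def bootstrap_imputed_association(y, X, p_disease, n_boot=200):
    """Estimate the association between outcome y and an imperfectly observed
    condition, using model-derived probabilities p_disease.

    In each bootstrap replicate, patients are resampled, the condition is
    imputed as a Bernoulli draw from p_disease, and the analysis model is
    refit; replicate estimates are then summarized.
    """
    n = len(y)
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample patients
        status = rng.binomial(1, p_disease[idx])      # impute condition status
        design = sm.add_constant(np.column_stack([X[idx], status]))
        fit = sm.Logit(y[idx], design).fit(disp=False)
        coefs.append(fit.params[-1])                  # log-odds for the condition
    coefs = np.asarray(coefs)
    return coefs.mean(), coefs.std(ddof=1)
```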
NASA Technical Reports Server (NTRS)
Demets, Charles; Gordon, Richard G.; Stein, Seth; Argus, Donald F.
1987-01-01
Marine magnetic profiles from the Gulf of California are studied in order to revise the estimate of Pacific-North America motion. It is found that since 3 Ma spreading has averaged 48 mm/yr, consistent with a new global plate motion model derived without these data. The present data suggest that strike-slip motion on faults west of the San Andreas is less than previously thought, reducing the San Andreas discrepancy with geodetic, seismological, and other geologic observations.
Automated assessment of noninvasive filling pressure using color Doppler M-mode echocardiography
NASA Technical Reports Server (NTRS)
Greenberg, N. L.; Firstenberg, M. S.; Cardon, L. A.; Zuckerman, J.; Levine, B. D.; Garcia, M. J.; Thomas, J. D.
2001-01-01
Assessment of left ventricular filling pressure usually requires invasive hemodynamic monitoring to follow the progression of disease or the response to therapy. Previous investigations have shown accurate estimation of wedge pressure using noninvasive Doppler information obtained from the ratio of the wave propagation slope from color M-mode (CMM) images and the peak early diastolic filling velocity from transmitral Doppler images. This study reports an automated algorithm that derives an estimate of wedge pressure based on the spatiotemporal velocity distribution available from digital CMM Doppler images of LV filling.
Equation of State for RX-08-EL and RX-08-EP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, E.L.; Walton, J.
1985-05-07
JWL Equations of State (EOSs) have been estimated for RX-08-EL and RX-08-EP. The estimated JWL EOS parameters are listed. Previously, we derived a JWL EOS for RX-08-EN based on DYNA2D hydrodynamic code cylinder computations; comparisons with experimental cylinder test results are shown. The experimental cylinder shot results for RX-08-EL, shot K-473, were compared to the experimental cylinder shot results for RX-08-EN, shot K-463, as a reference. 10 figs., 6 tabs.
NASA Astrophysics Data System (ADS)
Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.
2008-05-01
Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗb, determined at the 1.5% level of precision (Dk = 157.0 ± 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗb allows a limit on the variation of Newton's gravitational constant, |Ġ/G| ≤ 23 × 10⁻¹² yr⁻¹. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| ≤ 1.5 × 10⁻¹⁸ s⁻¹ at 95% confidence, and derive a pulsar mass, mpsr = 1.76 ± 0.20 M⊙, one of the highest estimates so far obtained.
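The kinematic distance rests on the Shklovskii relation, in which the proper motion μ contributes an apparent Ṗb/Pb = μ²D/c. Below is a minimal sketch; the input values are taken approximately from the published literature for PSR J0437-4715 and are assumed here for illustration rather than quoted from the abstract, so the output only roughly reproduces the 157 pc figure.

```python
import numpy as np

C = 2.998e8           # speed of light, m/s
PC = 3.0857e16        # parsec, m
MAS_YR = (1e-3 / 206265.0) / (365.25 * 86400.0)   # mas/yr -> rad/s

def kinematic_distance(pb_days, pbdot, mu_mas_yr):
    """Shklovskii kinematic distance: Pbdot/Pb = mu^2 * D / c  =>  D = c*Pbdot/(Pb*mu^2)."""
    pb = pb_days * 86400.0
    mu = mu_mas_yr * MAS_YR
    return C * pbdot / (pb * mu**2) / PC

# Illustrative parameters, roughly those published for PSR J0437-4715
print(kinematic_distance(pb_days=5.741, pbdot=3.73e-12, mu_mas_yr=140.9))  # ~156 pc
```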
Inventory and transport of plastic debris in the Laurentian Great Lakes.
Hoffman, Matthew J; Hittinger, Eric
2017-02-15
Plastic pollution in the world's oceans has received much attention, but there has been increasing concern about the high concentrations of plastic debris in the Laurentian Great Lakes. Using census data and methodologies used to study ocean debris we derive a first estimate of 9887 metric tonnes per year of plastic debris entering the Great Lakes. These estimates are translated into population-dependent particle inputs which are advected using currents from a hydrodynamic model to map the spatial distribution of plastic debris in the Great Lakes. Model results compare favorably with previously published sampling data. The samples are used to calibrate the model to derive surface microplastic mass estimates of 0.0211 metric tonnes in Lake Superior, 1.44 metric tonnes in Huron, and 4.41 metric tonnes in Erie. These results have many applications, including informing cleanup efforts, helping target pollution prevention, and understanding the inter-state or international flows of plastic pollution. Copyright © 2016 Elsevier Ltd. All rights reserved.
Real-ear-to-coupler difference predictions as a function of age for two coupling procedures.
Bagatto, Marlene P; Scollie, Susan D; Seewald, Richard C; Moodie, K Shane; Hoover, Brenda M
2002-09-01
The predicted real-ear-to-coupler difference (RECD) values currently used in pediatric hearing instrument prescription methods are based on 12-month age range categories and were derived from measures using standard acoustic immittance probe tips. Consequently, the purpose of this study was to develop normative RECD predicted values for foam/acoustic immittance tips and custom earmolds across the age continuum. To this end, RECD data were collected on 392 infants and children (141 with acoustic immittance tips, 251 with earmolds) to develop normative regression equations for use in deriving continuous age predictions of RECDs for foam/acoustic immittance tips and earmolds. Owing to the substantial between-subject variability observed in the data, the predictive equations of RECDs by age (in months) resulted in only gross estimates of RECD values (i.e., within +/- 4.4 dB for 95% of acoustic immittance tip measures; within +/- 5.4 dB in 95% of measures with custom earmolds) across frequency. Thus, it is concluded that the estimates derived from this study should not be used to replace the more precise individual RECD measurements. Relative to previously available normative RECD values for infants and young children, however, the estimates derived through this study provide somewhat more accurate predicted values for use under those circumstances for which individual RECD measurements cannot be made.
NASA Technical Reports Server (NTRS)
Delaney, J. S.
1994-01-01
Oxygen is the most abundant element in most meteorites, yet the ratios of its isotopes are seldom used to constrain the compositional history of achondrites. The two major achondrite groups have O isotope signatures that differ from any plausible chondritic precursors and lie between the ordinary and carbonaceous chondrite domains. If the assumption is made that the present global sampling of chondritic meteorites reflects the variability of O reservoirs at the time of planetesimal/planet aggregation in the early nebula, then the O in these groups must reflect mixing between known chondritic reservoirs. This approach, in combination with constraints based on Fe-Mn-Mg systematics, has been used previously to model the composition of the basaltic achondrite parent body (BAP) and provides a model precursor composition that is generally consistent with previous eucrite parent body (EPB) estimates. The same approach is applied to Mars, exploiting the assumption that the SNC and related meteorites sample the martian lithosphere. Model planet and planetesimal compositions can be derived by mixing of known chondritic components using O isotope ratios as the fundamental compositional constraint. The major- and minor-element composition for Mars derived here and that derived previously for the basaltic achondrite parent body are, in many respects, compatible with model compositions generated using completely independent constraints. The role of volatile elements and alkalis in particular remains a major difficulty in applying such models.
The Australian experiment with ETS-V
NASA Technical Reports Server (NTRS)
Vogel, Wolfhard J.; Goldhirsh, Julius; Hase, Yoshihiro
1989-01-01
Land-mobile satellite propagation measurements were implemented at L Band (1.5 GHz) in South-Eastern Australia during an 11 day period in October 1988. Transmissions (CW) from both the Japanese ETS-5 and INMARSAT Pacific geostationary satellites were accessed. Previous measurements in this series were performed at both L Band (1.5 GHz) and UHF (870 MHz) in Central Maryland, North-Central Colorado, and the southern United States. The objectives of the Australian campaign were to expand the data base acquired in the U.S. to another continent, to validate a U.S. derived empirical model for estimating the fade distribution, to establish the effects of directive antennas, to assess the isolation between co- and cross-polarized transmissions, to derive estimates of fade as well as non-fade durations, and to evaluate diversity reception. All these objectives were met.
THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au
Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.
NASA Astrophysics Data System (ADS)
Reitz, M. D.; Sanford, W. E.; Senay, G. B.; Cazenas, J.
2015-12-01
Evapotranspiration (ET) is a key quantity in the hydrologic cycle, accounting for ~70% of precipitation across the contiguous United States (CONUS). However, it is a challenge to estimate, due to difficulty in making direct measurements and gaps in our theoretical understanding. Here we present a new data-driven, ~1 km² resolution map of long-term average actual evapotranspiration rates across the CONUS. The new ET map is a function of the USGS Landsat-derived National Land Cover Database (NLCD), precipitation, temperature, and daily average temperature range (from the PRISM climate dataset), and is calibrated to long-term water balance data from 679 watersheds. It differs from previously presented ET maps in that (1) it was co-developed with estimates of runoff and recharge; (2) the regression equation was chosen from among many tested, previously published and newly proposed functional forms for its optimal description of long-term water balance ET data; (3) it has values over open-water areas that are derived from separate mass-transfer and humidity equations; and (4) the data include additional precipitation representing amounts converted from 2005 USGS water-use census irrigation data. The regression equation is calibrated using data from 2000-2013, but can also be applied to individual years with their corresponding input datasets. Comparisons among this new map, the more detailed remote-sensing-based estimates of MOD16 and SSEBop, and AmeriFlux ET tower measurements show encouraging consistency and indicate that the empirical ET estimate approach presented here produces closer agreement with independent flux tower data for annual average actual ET than other more complex remote sensing approaches.
Estimation of carbon emissions from wildfires in Alaskan boreal forests using AVHRR data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasischke, E.S.; French, N.H.F.; Bourgeau-Chavez, L.L
1993-06-01
The objectives of this research study were to evaluate the utility of using AVHRR data for locating and measuring the areal extent of wildfires in the boreal forests of Alaska and to estimate the amount of carbon being released during these fires. Techniques were developed to use the normalized difference vegetation signature derived from AVHRR data to detect and measure the area of fires in Alaska. A model was developed to estimate the amount of biomass/carbon being stored in Alaskan boreal forests, and the amount of carbon released during fires. The AVHRR analysis resulted in detection of > 83% of all forest fires greater than 2,000 ha in size in the years 1990 and 1991. The areal estimates derived from AVHRR data were 75% of the area mapped by the Alaska Fire Service for these years. Using fire areas and locations for 1954 through 1992, it was determined that on average, 13.0 g of carbon per square meter of boreal forest area is released during fires every year. This estimate is two to six times greater than previously reported estimates. Our conclusion is that the analysis of AVHRR data represents a viable means for detecting and mapping fires in boreal regions on a global basis.
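The carbon-release calculation is of the classic Seiler-Crutzen form, C = A · B · β · f_c (area, biomass density, fraction of biomass consumed, carbon fraction). A minimal sketch with invented input numbers; the study's Alaskan biomass and burn-fraction values are not reproduced here.

```python
def fire_carbon_release(area_ha, biomass_t_per_ha, fraction_burned, carbon_fraction=0.45):
    """Seiler-Crutzen style estimate of carbon released by a fire:
    C = A * B * beta * f_c, returned in tonnes of carbon."""
    return area_ha * biomass_t_per_ha * fraction_burned * carbon_fraction

# Illustrative numbers only: a 50,000 ha fire over stands holding 80 t/ha
# of biomass, 25% of which is consumed.
print(fire_carbon_release(50_000, 80, 0.25))   # ~450,000 t C
```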
NASA Technical Reports Server (NTRS)
Prud'homme, Genevieve; Dobbin, Nina A.; Sun, Liu; Burnet, Richard T.; Martin, Randall V.; Davidson, Andrew; Cakmak, Sabit; Villeneuve, Paul J.; Lamsal, Lok N.; vanDonkelaar, Aaron;
2013-01-01
Satellite remote sensing (RS) has emerged as a cutting-edge approach for estimating ground level ambient air pollution. Previous studies have reported a high correlation between ground level PM2.5 and NO2 estimated by RS and measurements collected at regulatory monitoring sites. The current study examined associations between air pollution and adverse respiratory and allergic health outcomes using multi-year averages of NO2 and PM2.5 from RS and from regulatory monitoring. RS estimates were derived using satellite measurements from OMI, MODIS, and MISR instruments. Regulatory monitoring data were obtained from Canada's National Air Pollution Surveillance Network. Self-reported prevalence of doctor-diagnosed asthma, current asthma, allergies, and chronic bronchitis were obtained from the Canadian Community Health Survey (a national sample of individuals 12 years of age and older). Multi-year ambient pollutant averages were assigned to each study participant based on their six-digit postal code at the time of the health survey, and were used as a marker for long-term exposure to air pollution. RS-derived estimates of NO2 and PM2.5 were associated with 6–10% increases in respiratory and allergic health outcomes per interquartile range (3.97 µg m⁻³ for PM2.5 and 1.03 ppb for NO2) among adults (aged 20–64) in the national study population. Risk estimates for air pollution and respiratory/allergic health outcomes based on RS were similar to risk estimates based on regulatory monitoring for areas where regulatory monitoring data were available (within 40 km of a regulatory monitoring station). RS-derived estimates of air pollution were also associated with adverse health outcomes among participants residing outside the catchment area of the regulatory monitoring network (p < 0.05).
A revised timescale for human evolution based on ancient mitochondrial genomes
Johnson, Philip L.F.; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G.; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes
2016-01-01
Background: Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Results: Here we use mitochondrial genome sequences from 10 securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) of less than 62,000–95,000 years ago. Conclusion: Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population split times, they can provide valid upper bounds; our results exclude most of the older dates for African and non-African split times recently suggested by de novo mutation rate estimates in the nuclear genome. PMID:23523248
A revised timescale for human evolution based on ancient mitochondrial genomes.
Fu, Qiaomei; Mittnik, Alissa; Johnson, Philip L F; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes
2013-04-08
Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Here, we use mitochondrial genome sequences from ten securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) that occurred less than 62-95 kya. Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population divergence times, they can provide valid upper bounds. Our results exclude most of the older dates for African and non-African population divergences recently suggested by de novo mutation rate estimates in the nuclear genome. Copyright © 2013 Elsevier Ltd. All rights reserved.
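The underlying clock arithmetic is simple: once securely dated ancient genomes pin down the substitution rate, a divergence time follows from t ≈ d / (2μ). A minimal sketch with illustrative numbers (assumed for demonstration, not the paper's fitted rate or divergence):

```python
def divergence_time_years(pairwise_divergence, subs_per_site_per_year):
    """Simple molecular-clock estimate: t = d / (2 * mu), where d is the
    pairwise per-site divergence and mu the per-lineage substitution rate."""
    return pairwise_divergence / (2.0 * subs_per_site_per_year)

# Illustrative mitochondrial numbers: d = 0.004 substitutions/site,
# mu = 2.5e-8 substitutions/site/yr.
print(divergence_time_years(0.004, 2.5e-8))   # ~80,000 years
```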
NASA Astrophysics Data System (ADS)
Chuter, S. J.; Martín-Español, A.; Wouters, B.; Bamber, J. L.
2017-07-01
We present a reassessment of input-output method ice mass budget estimates for the Abbot and Getz regions of West Antarctica using CryoSat-2-derived ice thickness estimates. The mass budget is 8 ± 6 Gt yr⁻¹ and 5 ± 17 Gt yr⁻¹ for the Abbot and Getz sectors, respectively, for the period 2006-2008. Over the Abbot region, our results resolve a previous discrepancy with elevation rates from altimetry, due to a previous 30% overestimation of ice thickness. For the Getz sector, our results are at the more positive bound of estimates from other techniques. Grounding line velocity increases of up to 20% between 2007 and 2014, alongside mean elevation rates of -0.67 ± 0.13 m yr⁻¹ between 2010 and 2013, indicate the onset of a dynamic thinning signal. Mean snowfall trends of -0.33 m yr⁻¹ water equivalent since 2006 indicate that recent mass trends are driven by both ice dynamics and surface processes.
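In the input-output method, the mass budget is accumulation (surface mass balance) minus grounding-line discharge computed from ice thickness and velocity across flux gates. A minimal sketch with hypothetical gate segments; all numbers are invented for illustration and are not the Abbot or Getz values.

```python
import numpy as np

RHO_ICE = 917.0   # kg m^-3

def flux_gate_discharge(thickness_m, speed_m_per_yr, width_m):
    """Ice discharge through a flux gate, in Gt/yr, summed over gate segments:
    D = sum(rho * H_i * v_i * w_i)."""
    flux_kg = RHO_ICE * np.sum(np.asarray(thickness_m)
                               * np.asarray(speed_m_per_yr)
                               * np.asarray(width_m))
    return flux_kg / 1e12          # kg -> Gt

def mass_budget(smb_gt_per_yr, discharge_gt_per_yr):
    """Input-output mass budget: accumulation minus grounding-line discharge."""
    return smb_gt_per_yr - discharge_gt_per_yr

# Hypothetical three-segment gate: 600-900 m thick ice moving 200-600 m/yr
d = flux_gate_discharge([600, 800, 900], [200, 450, 600], [5000, 5000, 5000])
print(mass_budget(smb_gt_per_yr=30.0, discharge_gt_per_yr=d))
```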
The Dynamics of Glomerular Ultrafiltration in the Rat
Brenner, Barry M.; Troy, Julia L.; Daugharty, Terrance M.
1971-01-01
Using a unique strain of Wistar rats endowed with glomeruli situated directly on the renal cortical surface, we measured glomerular capillary pressures using servo-nulling micropipette transducer techniques. Pressures in 12 glomerular capillaries from 7 rats averaged 60 cm H2O, or approximately 50% of mean systemic arterial values. Waveform characteristics for these glomerular capillaries were found to be remarkably similar to those of the central aorta. From similarly direct estimates of hydrostatic pressures in proximal tubules, and colloid osmotic pressures in systemic and efferent arteriolar plasmas, the net driving force for ultrafiltration was calculated. The average value of 14 cm H2O is some two-thirds lower than the majority of previously reported estimates based on indirect techniques. Single nephron GFR (glomerular filtration rate) was also measured in these rats, thereby permitting calculation of the glomerular capillary ultrafiltration coefficient. The average value of 0.044 nl sec−1 cm H2O−1 glomerulus−1 is at least fourfold greater than previous estimates derived from indirect observations. PMID:5097578
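The ultrafiltration coefficient follows directly from dividing single nephron GFR by the net driving pressure. A minimal sketch using the ~14 cm H2O net pressure reported above and an assumed, typical rat SNGFR of about 37 nl/min (the SNGFR value is illustrative, not quoted from the abstract).

```python
def ultrafiltration_coefficient(sngfr_nl_per_min, net_pressure_cm_h2o):
    """K_f = SNGFR / P_UF, in nl s^-1 (cm H2O)^-1 per glomerulus."""
    sngfr_nl_per_s = sngfr_nl_per_min / 60.0
    return sngfr_nl_per_s / net_pressure_cm_h2o

# With a 14 cm H2O net driving force and an assumed SNGFR of ~37 nl/min,
# K_f comes out near the reported 0.044 nl s^-1 (cm H2O)^-1 per glomerulus.
print(ultrafiltration_coefficient(37.0, 14.0))   # ~0.044
```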
CO2 forcing induces semi-direct effects with consequences for climate feedback interpretations
NASA Astrophysics Data System (ADS)
Andrews, Timothy; Forster, Piers M.
2008-02-01
Climate forcing and feedbacks are diagnosed from seven slab-ocean GCMs for 2 × CO2 using a regression method. Results are compared to those using conventional methodologies to derive a semi-direct forcing due to tropospheric adjustment, analogous to the semi-direct effect of absorbing aerosols. All models show a cloud semi-direct effect, indicating a rapid cloud response to CO2; cloud typically decreases, enhancing the warming. Similarly, there is evidence of semi-direct effects from water vapour, lapse rate, ice, and snow. Previous estimates of climate feedbacks are unlikely to have taken these semi-direct effects into account and so misinterpret as feedbacks processes that depend only on the forcing, not on the global surface temperature. We show that the actual cloud feedback is smaller than previous methods suggest and that a significant part of the cloud response, and of the large spread between previous model estimates of cloud feedback, is due to the semi-direct forcing.
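The regression method referred to here is a Gregory-style diagnosis: the top-of-atmosphere net flux imbalance N is regressed against the global-mean surface temperature change, N = F + λΔT, so the intercept gives the effective forcing (including rapid, semi-direct adjustments) and the slope gives the feedback parameter. A minimal sketch with synthetic data:

```python
import numpy as np

def gregory_regression(delta_T, net_toa_flux):
    """Regress N = F + lambda * dT. Returns (F, lam): the intercept F is the
    effective forcing including rapid adjustments; the slope lam (negative
    for a stable climate) is the net feedback parameter."""
    lam, F = np.polyfit(delta_T, net_toa_flux, 1)
    return F, lam

# Synthetic illustration: "true" forcing 3.8 W m^-2, feedback -1.1 W m^-2 K^-1
rng = np.random.default_rng(1)
dT = np.linspace(0.2, 3.2, 30)
N = 3.8 - 1.1 * dT + rng.normal(0, 0.3, dT.size)
print(gregory_regression(dT, N))   # ~(3.8, -1.1)
```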
Occupational COPD and job exposure matrices: a systematic review and meta-analysis
Sadhra, Steven; Kurmi, Om P; Sadhra, Sandeep S; Lam, Kin Bong Hubert; Ayres, Jon G
2017-01-01
Background: The association between occupational exposure and COPD reported previously has mostly been derived from studies relying on self-reported exposure to vapors, gases, dust, or fumes (VGDF), which could be subjective and prone to biases. The aim of this study was to assess the strength of association between exposure and COPD from studies that derived exposure by job exposure matrices (JEMs). Methods: A systematic search of JEM-based occupational COPD studies published between 1980 and 2015 was conducted in PubMed and EMBASE, followed by meta-analysis. Meta-analysis was performed using a random-effects model, with results presented as a pooled effect estimate with 95% confidence intervals (CIs). The quality of each study (risk of bias and confounding) was assessed using 13 RTI questions. Heterogeneity between studies and its possible sources were assessed by Egger's test and meta-regression, respectively. Results: In all, 61 studies were identified and 29 were included in the meta-analysis. Based on JEM-based studies, there was a 22% increased risk of COPD (pooled odds ratio = 1.22; 95% CI 1.18–1.27) among those exposed to airborne pollutants arising from occupation. Comparatively higher risk estimates were obtained for general-population JEMs (based on expert consensus) than for workplace-based JEMs derived using measured exposure data (1.26; 1.20–1.33 vs 1.14; 1.10–1.19). Higher risk estimates were also obtained for self-reported exposure to VGDF than for JEM-based exposure to VGDF (1.91; 1.72–2.13 vs 1.10; 1.06–1.24). Dusts, particularly biological dusts (1.33; 1.17–1.51), had the highest risk estimates for COPD. Although the majority of occupational COPD studies focus on dusty environments, no difference in risk estimates was found for the common forms of occupational airborne pollutants. Conclusion: Our findings highlight the need to interpret previous studies with caution, as self-reported exposure to VGDF may have overestimated the risk of occupational COPD. PMID:28260879
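For readers unfamiliar with random-effects pooling, here is a minimal sketch of one common estimator (DerSimonian-Laird), offered as an illustration of the calculation rather than the authors' exact software; the three-study example is invented.

```python
import numpy as np

def dersimonian_laird(or_estimates, ci_lower, ci_upper):
    """Random-effects (DerSimonian-Laird) pooled odds ratio from study ORs
    and their 95% CIs. Returns (pooled OR, 95% CI lower, 95% CI upper)."""
    y = np.log(or_estimates)                               # log odds ratios
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
    w = 1.0 / se**2                                        # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)     # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = 1.0 / (se**2 + tau2)                          # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se_mu = np.sqrt(1.0 / np.sum(w_star))
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)

# Made-up example with three studies
print(dersimonian_laird([1.15, 1.30, 1.22], [1.05, 1.10, 1.02], [1.26, 1.54, 1.46]))
```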
Inferring thermodynamic stability relationship of polymorphs from melting data.
Yu, L
1995-08-01
This study investigates the possibility of inferring the thermodynamic stability relationship of polymorphs from their melting data. Thermodynamic formulas are derived for calculating the Gibbs free energy difference (ΔG) between two polymorphs and its temperature slope mainly from the temperatures and heats of melting. This information is then used to estimate ΔG, and thus relative stability, at other temperatures by extrapolation. Both linear and nonlinear extrapolations are considered. Extrapolating ΔG to zero gives an estimate of the transition (or virtual transition) temperature, from which the presence of monotropy or enantiotropy is inferred. This procedure is analogous to the use of solubility data measured near the ambient temperature to estimate a transition point at higher temperature. For several systems examined, the two methods are in good agreement. The qualitative rule introduced this way for inferring the presence of monotropy or enantiotropy is approximately the same as the Heat of Fusion Rule introduced previously on a statistical mechanical basis. This method is applied to 96 pairs of polymorphs from the literature. In most cases, the result agrees with the previous determination. The deviation of the calculated transition temperatures from their previous values (n = 18) is 2% on average and 7% at maximum.
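As a hedged illustration of the kind of relation involved (a linear approximation in the spirit of the melting-data approach, not the paper's exact formulas), each polymorph's free energy relative to the melt can be approximated from its heat and temperature of melting, and the difference between the two polymorphs (taking ΔG_{1→2} = G_2 − G_1) follows:

\[
\Delta G_{i\to\mathrm{melt}}(T) \;\approx\; \Delta H_{m,i}\,\frac{T_{m,i}-T}{T_{m,i}},
\qquad
\Delta G_{1\to 2}(T) \;\approx\; \Delta H_{m,1}\,\frac{T_{m,1}-T}{T_{m,1}}
\;-\; \Delta H_{m,2}\,\frac{T_{m,2}-T}{T_{m,2}} .
\]

Setting ΔG_{1→2}(T_t) = 0 and solving the linear equation gives the (possibly virtual) transition temperature; a T_t lying below both melting points indicates enantiotropy, while a T_t above them indicates monotropy.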
Latitude and longitude vertical disparity
Read, Jenny C. A.; Phillipson, Graeme P.; Glennerster, Andrew
2010-01-01
The literature on vertical disparity is complicated by the fact that several different definitions of the term “vertical disparity” are in common use, often without a clear statement about which is intended or a widespread appreciation of the properties of the different definitions. Here, we examine two definitions of retinal vertical disparity: elevation-latitude and elevation-longitude disparity. Near the fixation point, these definitions become equivalent, but in general, they have quite different dependences on object distance and binocular eye posture, which have not previously been spelt out. We present analytical approximations for each type of vertical disparity, valid for more general conditions than previous derivations in the literature: we do not restrict ourselves to objects near the fixation point or near the plane of regard, and we allow for non-zero torsion, cyclovergence and vertical misalignments of the eyes. We use these expressions to derive estimates of the latitude and longitude vertical disparity expected at each point in the visual field, averaged over all natural viewing. Finally, we present analytical expressions showing how binocular eye position – gaze direction, convergence, torsion, cyclovergence, and vertical misalignment – can be derived from the vertical disparity field and its derivatives at the fovea. PMID:20055544
A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars
NASA Astrophysics Data System (ADS)
Zou, Hong; Ye, Yu Guang; Wang, Jin Song; Nielsen, Erling; Cui, Jun; Wang, Xiao Dong
2016-04-01
A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars is introduced in this study. The neutral densities at 130 km can be derived from the ionospheric and atmospheric measurements of the Radio Science experiment on board Mars Global Surveyor (MGS). The derived neutral densities cover a large longitude range in northern high latitudes from summer to late autumn during 3 Martian years, which fills a gap in previous observations of the Martian upper atmosphere. The simulations of the Laboratoire de Météorologie Dynamique Mars global circulation model can be corrected with a simple linear equation to fit the neutral densities derived from the first MGS/RS (Radio Science) data sets (EDS1). The corrected simulations, with the same correction parameters as for EDS1, match the neutral densities derived from two other MGS/RS data sets (EDS2 and EDS3) very well. The neutral density derived from EDS3 shows a dust storm effect, which is in accord with the Mars Express (MEX) Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars measurement. The neutral density derived from the MGS/RS measurements can be used to validate Martian atmospheric models. The method presented in this study can be applied to other radio occultation measurements, such as the results of the Radio Science experiment on board MEX.
NASA Technical Reports Server (NTRS)
DellaCorte, Christopher
2010-01-01
Foil gas bearings are a key technology in many commercial and emerging oil-free turbomachinery systems. These bearings are non-linear and have been difficult to model analytically in terms of performance characteristics such as load capacity, power loss, stiffness and damping. Previous investigations led to an empirically derived method, a rule of thumb, to estimate load capacity. This method has been a valuable tool in system development. The current paper extends this tool concept to include rules for stiffness and damping coefficient estimation. It is expected that these rules will further accelerate the development and deployment of advanced oil-free machines operating on foil gas bearings.
Energy in synthetic fertilizers and pesticides: Revisited. Final project report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhat, M.G.; English, B.C.; Turhollow, A.F.
1994-01-01
Agricultural chemicals that are derived from fossil fuels are the major energy-intensive inputs in agriculture. Growing scarcity of the world's fossil resources stimulated research and development of energy-efficient technology for manufacturing these chemicals in the last decade. The purpose of this study is to revisit the energy requirements of major plant nutrients and pesticides. Data from a manufacturers' energy survey conducted by The Fertilizer Institute are used to estimate the energy requirements of fertilizers. Energy estimates for pesticides are developed by consulting previously published literature. The impact of technical innovation in the fertilizer industry on US corn, cotton, soybean, and wheat producers is estimated in terms of energy savings.
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations
NASA Astrophysics Data System (ADS)
Mansfield, Christopher M.; Shoemaker, Christine A.
1999-05-01
This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.
Uncertainties of fluxes and 13C / 12C ratios of atmospheric reactive-gas emissions
NASA Astrophysics Data System (ADS)
Gromov, Sergey; Brenninkmeijer, Carl A. M.; Jöckel, Patrick
2017-07-01
We provide a comprehensive review of the proxy data on the 13C / 12C ratios and uncertainties of emissions of reactive carbonaceous compounds into the atmosphere, with a focus on CO sources. Based on an evaluated set-up of the EMAC model, we derive the isotope-resolved data set of its emission inventory for the 1997-2005 period. Additionally, we revisit the calculus required for the correct derivation of uncertainties associated with isotope ratios of emission fluxes. The resulting δ13C of overall surface CO emission in 2000 of −(25.2 ± 0.7) ‰ is in line with previous bottom-up estimates and is less uncertain by a factor of 2. In contrast to this, we find that uncertainties of the respective inverse modelling estimates may be substantially larger due to the correlated nature of their derivation. We reckon the δ13C values of surface emissions of higher hydrocarbons to be within -24 to -27 ‰ (uncertainty typically below ±1 ‰), with an exception of isoprene and methanol emissions being close to -30 and -60 ‰, respectively. The isotope signature of ethane surface emission coincides with earlier estimates, but integrates very different source inputs. δ13C values are reported relative to V-PDB.
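The bulk-source signature is a flux-weighted mean, δ13C_total = Σ F_i δ_i / Σ F_i. The sketch below propagates uncertainties assuming independent errors (the paper's point is precisely that correlated derivations can make the true uncertainty larger); the fluxes and signatures in the example are placeholders, not the inventory values derived in the study.

```python
import numpy as np

def flux_weighted_delta13c(fluxes, deltas, flux_sd, delta_sd):
    """Flux-weighted bulk delta13C of a set of source categories, with a
    first-order uncertainty assuming independent errors in fluxes and
    signatures (correlated errors can enlarge the true uncertainty)."""
    F, d = np.asarray(fluxes, float), np.asarray(deltas, float)
    sF, sd = np.asarray(flux_sd, float), np.asarray(delta_sd, float)
    Ftot = F.sum()
    d_tot = np.sum(F * d) / Ftot
    var = np.sum((F / Ftot) ** 2 * sd ** 2) \
        + np.sum(((d - d_tot) / Ftot) ** 2 * sF ** 2)
    return d_tot, np.sqrt(var)

# Illustrative CO source categories (fluxes in Tg CO/yr, delta13C in permil)
print(flux_weighted_delta13c([500, 350, 100], [-27.5, -24.0, -21.0],
                             [50, 40, 20], [1.0, 1.5, 2.0]))
```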
NASA Technical Reports Server (NTRS)
Atlas, Robert (Technical Monitor); Joiner, Joanna; Vasikov, Alexander; Flittner, David; Gleason, James; Bhartia, P. K.
2002-01-01
Reliable cloud pressure estimates are needed for accurate retrieval of ozone and other trace gases using satellite-borne backscatter ultraviolet (buv) instruments such as the Global Ozone Monitoring Experiment (GOME). Cloud pressure can be derived from buv instruments by utilizing the properties of rotational-Raman scattering (RRS) and absorption by O2-O2. In this paper we estimate cloud pressure from GOME observations in the 355-400 nm spectral range using the concept of a Lambertian-equivalent reflectivity (LER) surface. GOME has full spectral coverage in this range at relatively high spectral resolution with a very high signal-to-noise ratio. This allows for much more accurate estimates of cloud pressure than were possible with its predecessors SBUV and TOMS. We also demonstrate the potential capability to retrieve chlorophyll content with full-spectral buv instruments. We compare our retrieved LER cloud pressure with cloud-top pressures derived from the infrared ATSR instrument on the same satellite. The findings confirm results from previous studies that showed retrieved LER cloud pressures from buv observations are systematically higher than IR-derived cloud-top pressures. Simulations using Mie-scattering radiative transfer algorithms that include O2-O2 absorption and RRS show that these differences can be explained by increased photon path length within and below clouds.
Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.
2011-01-01
Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.
Properties of added variable plots in Cox's regression model.
Lindkvist, M
2000-03-01
The added variable plot is useful for examining the effect of a covariate in regression models. The plot provides information regarding the inclusion of a covariate, and is useful in identifying influential observations on the parameter estimates. Hall et al. (1996) proposed a plot for Cox's proportional hazards model derived by regarding the Cox model as a generalized linear model. This paper proves and discusses properties of this plot. These properties make the plot a valuable tool in model evaluation. Quantities considered include parameter estimates, residuals, leverage, case influence measures and correspondence to previously proposed residuals and diagnostics.
Large and Small Magellanic Clouds age-metallicity relationships
NASA Astrophysics Data System (ADS)
Perren, G. I.; Piatti, A. E.; Vázquez, R. A.
2017-10-01
We present a new determination of the age-metallicity relation for both Magellanic Clouds, estimated through the homogeneous analysis of 239 observed star clusters. All clusters in our set were observed with the filters of the Washington photometric system. The Automated Stellar Cluster Analysis package (ASteCA) was employed to derive each cluster's fundamental parameters, in particular their ages and metallicities, through an unassisted process. We find that our age-metallicity relations (AMRs) cannot be fully matched to any of the estimates found in twelve previous works, and are better explained by a combination of several of them in different age intervals.
THE OPTICS OF REFRACTIVE SUBSTRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Michael D.; Narayan, Ramesh, E-mail: mjohnson@cfa.harvard.edu
2016-08-01
Newly recognized effects of refractive scattering in the ionized interstellar medium have broad implications for very long baseline interferometry (VLBI) at extreme angular resolutions. Building upon work by Blandford and Narayan, we present a simplified, geometrical optics framework, which enables rapid, semi-analytic estimates of refractive scattering effects. We show that these estimates exactly reproduce previous results based on a more rigorous statistical formulation. We then derive new expressions for the scattering-induced fluctuations of VLBI observables such as closure phase, and we demonstrate how to calculate the fluctuations for arbitrary quantities of interest using a Monte Carlo technique.
Gradient approach to quantify the gradation smoothness for output media
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun
2010-01-01
We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method in which smoothness is computed as a function of the 95th percentile of the second derivative (the tone-jump estimator) and the 5th percentile of the first derivative (the tone-clipping estimator). The performance of the model and of a previously suggested method was evaluated psychophysically, and their prediction accuracies were compared. Our model showed a stronger Pearson correlation with the corresponding visual data, and the magnitude of the Pearson correlation reached up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of the representative memory colors (blue sky, green grass, and Caucasian skin) were rendered as gradational scales and utilized as the test stimuli.
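A minimal sketch of the two estimators as described above, applied to a measured lightness ramp; the function name, the use of absolute values, and the example profiles are illustrative choices, not the paper's implementation.

```python
import numpy as np

def gradation_estimators(lightness_profile):
    """Tone-jump and tone-clipping estimators for a measured gradation ramp
    (e.g., CIE L* sampled along the printed scale)."""
    L = np.asarray(lightness_profile, dtype=float)
    d1 = np.gradient(L)                          # first derivative along the ramp
    d2 = np.gradient(d1)                         # second derivative
    tone_jump = np.percentile(np.abs(d2), 95)    # large local curvature = visible step
    tone_clip = np.percentile(d1, 5)             # near-zero slope = flat, clipped region
    return tone_jump, tone_clip

# Example: a smooth ramp versus one with a step and a clipped shadow region
smooth = np.linspace(20, 90, 256)
stepped = smooth.copy(); stepped[128:] += 3.0; stepped[:30] = 20.0
print(gradation_estimators(smooth), gradation_estimators(stepped))
```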
Crop weather models of barley and spring wheat yield for agrophysical units in North Dakota
NASA Technical Reports Server (NTRS)
Leduc, S. (Principal Investigator)
1982-01-01
Models based on multiple regression were developed to estimate barley yield and spring wheat yield from weather data for Agrophysical units (APU) in North Dakota. The predictor variables are derived from monthly average temperature and monthly total precipitation data at meteorological stations in the cooperative network. The models are similar in form to the previous models developed for Crop Reporting Districts (CRD). The trends and derived variables were the same, and the approach to select the significant predictors was similar to that used in developing the CRD models. The APU models show slight improvements in some of the statistics of the models, e.g., explained variation. These models are to be independently evaluated and compared to the previously evaluated CRD models. The comparison will indicate the preferred model area for this application, i.e., APU or CRD.
Estimating the prevalence of infertility in Canada
Bushnik, Tracey; Cook, Jocelynn L.; Yuzpe, A. Albert; Tough, Suzanne; Collins, John
2012-01-01
BACKGROUND Over the past 10 years, there has been a significant increase in the use of assisted reproductive technologies in Canada, however, little is known about the overall prevalence of infertility in the population. The purpose of the present study was to estimate the prevalence of current infertility in Canada according to three definitions of the risk of conception. METHODS Data from the infertility component of the 2009–2010 Canadian Community Health Survey were analyzed for married and common-law couples with a female partner aged 18–44. The three definitions of the risk of conception were derived sequentially starting with birth control use in the previous 12 months, adding reported sexual intercourse in the previous 12 months, then pregnancy intent. Prevalence and odds ratios of current infertility were estimated by selected characteristics. RESULTS Estimates of the prevalence of current infertility ranged from 11.5% (95% CI 10.2, 12.9) to 15.7% (95% CI 14.2, 17.4). Each estimate represented an increase in current infertility prevalence in Canada when compared with previous national estimates. Couples with lower parity (0 or 1 child) had significantly higher odds of experiencing current infertility when the female partner was aged 35–44 years versus 18–34 years. Lower odds of experiencing current infertility were observed for multiparous couples regardless of age group of the female partner, when compared with nulliparous couples. CONCLUSIONS The present study suggests that the prevalence of current infertility has increased since the last time it was measured in Canada, and is associated with the age of the female partner and parity. PMID:22258658
NASA Technical Reports Server (NTRS)
Prigent, Catherine; Wigneron, Jean-Pierre; Rossow, William B.; Pardo-Carrion, Juan R.
1999-01-01
To retrieve temperature and humidity profiles from SSM/T and AMSU, it is important to quantify the contribution of the Earth surface emission. So far, no global estimates of the land surface emissivities are available at SSM/T and AMSU frequencies and scanning conditions. The land surface emissivities have been previously calculated for the globe from the SSM/I conical scanner between 19 and 85 GHz. To analyze the feasibility of deriving SSM/T and AMSU land surface emissivities from SSM/I emissivities, the spectral and angular variations of the emissivities are studied, with the help of ground-based measurements, models and satellite estimates. Up to 100 GHz, for snow and ice free areas, the SSM/T and AMSU emissivities can be derived with useful accuracy from the SSM/I emissivities. The emissivities can be linearly interpolated in frequency. Based on ground-based emissivity measurements of various surface types, a simple model is proposed to estimate SSM/T and AMSU emissivities for all zenith angles knowing only the emissivities for the vertical and horizontal polarizations at 53 deg zenith angle. The method is tested on the SSM/T-2 91.655 GHz channels. The mean difference between the SSM/T-2 and SSM/I-derived emissivities is less than or equal to 0.01 for all zenith angles, with an r.m.s. difference of approximately 0.02. Above 100 GHz, preliminary results are presented at 150 GHz, based on SSM/T-2 observations, and are compared with the very few estimations available in the literature.
Westgate, Philip M.
2016-01-01
When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator. PMID:27818539
Ekinci, Yunus Levent
2016-01-01
This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent of the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using synthetic simulations with and without noise. The algorithm was then used to estimate the depths of ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced results comparable to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to the needs of other researchers.
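A minimal sketch of the derivative-computation step is given below: successive horizontal derivatives of a synthetic thin-dike-like anomaly are formed with finite differences. The anomaly shape, station spacing, and the use of np.gradient are illustrative assumptions; the depth-solving nonlinear equations and graticule-spacing filters of the published MATLAB code are not reproduced here.

```python
import numpy as np

# Synthetic profile: a crude symmetric anomaly over a thin dike-like body
# (placeholder shape, not the exact forward model used in the paper).
dx = 10.0                                   # station spacing (m)
x = np.arange(-1000, 1000 + dx, dx)
depth, amp = 120.0, 500.0
T = amp * depth**2 / (x**2 + depth**2)      # nT, illustrative anomaly

# Successive horizontal derivatives by central finite differences.
d1 = np.gradient(T, dx)
d2 = np.gradient(d1, dx)
d3 = np.gradient(d2, dx)
d4 = np.gradient(d3, dx)

# Higher-order derivatives sharpen the anomaly over the source; in the
# published algorithm, nonlinear equations in these derivatives (computed at
# successive graticule spacings) are solved for the source depth.
print("x of |d2| maximum:", x[np.argmax(np.abs(d2))])
print("x of |d4| maximum:", x[np.argmax(np.abs(d4))])
```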
Population ecology of the mallard: VII. Distribution and derivation of the harvest
Munro, Robert E.; Kimball, Charles F.
1982-01-01
This is the seventh in a series of comprehensive reports on population ecology of the mallard (Anas platyrhynchos) in North America. Banding records for 1961-1975 were used, together with information from previous reports in this series, to estimate annual and average preseason age and sex structure of the mallard population and patterns of harvest distribution and derivation. Age ratios in the preseason population averaged 0.98 immatures per adult and ranged from 0.75 to 1.44. The adult male per female ratio averaged 1.42. The young male per female ratio averaged 1.01. Geographic and annual differences in recovery distributions were associated with age, sex, and years after banding. Such variation might indicate that survival or band recovery rates, or both, change as a function of number of years after banding, and that estimates of these rates might thus be affected. Distribution of the mallard harvest from 16 major breeding ground reference areas to States, Provinces, and flyways is tabulated and illustrated. Seasonal (weekly) breeding ground derivation of the harvest within States and Provinces from the 16 reference areas also is tabulated. Harvest distributions, derivation, and similarity of derivation between harvest areas are summarily illustrated with maps. Derivation of harvest appears to be consistent throughout the hunting season in the middle and south central United States, encompassing States in both the Central and Mississippi flyways. However, weekly derivation patterns for most northern States suggest that early dates of hunting result in relatively greater harvest of locally derived mallards, in contrast to birds from more northern breeding areas.
Assessing Forest NPP: BIOME-BGC Predictions versus BEF Derived Estimates
NASA Astrophysics Data System (ADS)
Hasenauer, H.; Pietsch, S. A.; Petritsch, R.
2007-05-01
Forest productivity has always been a major issue within sustainable forest management. While in the past terrestrial forest inventory data have been the major source for assessing forest productivity, recent developments in ecosystem modeling offer an alternative approach using ecosystem models such as Biome-BGC to estimate Net Primary Production (NPP). In this study we compare two terrestrially driven approaches for assessing NPP: (i) estimates from a species-specific adaptation of the biogeochemical ecosystem model BIOME-BGC calibrated for Alpine conditions; and (ii) NPP estimates derived from inventory data using biomass expansion factors (BEF). The forest inventory data come from 624 sample plots across Austria, consist of repeated individual tree observations, and include growth as well as soil and humus information. These locations are covered with spruce, beech, oak, pine, and larch stands, thus addressing the main Austrian forest types. 144 locations were previously used in a validation effort to produce species-specific parameter estimates of the ecosystem model. The remaining 480 sites are from the Austrian National Forest Soil Survey carried out at the Federal Research and Training Centre for Forests, Natural Hazards and Landscape (BFW). Using diameter at breast height (dbh) and height (h), the volume and subsequently the biomass of individual trees were calculated, aggregated to the whole forest stand, and compared with the model output. Regression analyses were performed for both volume and biomass estimates.
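The inventory-based side of such a comparison rests on converting tree dimensions to volume and then to biomass. The sketch below shows that chain with a generic form-factor volume equation, a wood density, and a single BEF; all three constants are placeholder assumptions rather than the species-specific Austrian values used in the study.

```python
import numpy as np

def stem_volume_m3(dbh_cm, height_m, form_factor=0.5):
    """Stem volume from dbh and height via a cylinder reduced by a form factor
    (placeholder value; real inventories use species-specific volume functions)."""
    r_m = np.asarray(dbh_cm) / 200.0
    return form_factor * np.pi * r_m**2 * np.asarray(height_m)

def stand_biomass_t_ha(dbh_cm, height_m, plot_area_ha,
                       wood_density_t_m3=0.45, bef=1.3):
    """Aggregate single-tree volumes to the stand, then expand to total
    above-ground biomass with a biomass expansion factor (BEF).
    Density and BEF are illustrative, not the study's calibrated values."""
    v = stem_volume_m3(dbh_cm, height_m).sum()
    return v * wood_density_t_m3 * bef / plot_area_ha

# Example plot: three trees on a 0.05 ha sample plot (hypothetical numbers).
print(stand_biomass_t_ha([32.0, 28.5, 41.0], [24.0, 22.5, 27.0], 0.05))
```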
Estimating stem volume and biomass of Pinus koraiensis using LiDAR data.
Kwak, Doo-Ahn; Lee, Woo-Kyun; Cho, Hyun-Kook; Lee, Seung-Ho; Son, Yowhan; Kafatos, Menas; Kim, So-Ra
2010-07-01
The objective of this study was to estimate the stem volume and biomass of individual trees using the crown geometric volume (CGV), which was extracted from small-footprint light detection and ranging (LiDAR) data. Attempts were made to analyze the stem volume and biomass of Korean pine stands (Pinus koraiensis Sieb. et Zucc.) for three classes of tree density: low (240 N/ha), medium (370 N/ha), and high (1,340 N/ha). To delineate individual trees, extended maxima transformation and watershed segmentation image processing methods were applied, as in one of our previous studies. As the next step, the crown base height (CBH) of individual trees was determined from the LiDAR point cloud data using k-means clustering. The stem volume can then be estimated from the LiDAR-derived CGV on the basis of the proportional relationship between the CGV and stem volume. As a result, low tree-density plots had the best performance for LiDAR-derived CBH, CGV, and stem volume (R² = 0.67, 0.57, and 0.68, respectively), and accuracy was lowest for high tree-density plots (R² = 0.48, 0.36, and 0.44, respectively). For the medium tree-density plots, accuracy was R² = 0.51, 0.52, and 0.62, respectively. The LiDAR-derived stem biomass can be predicted from the stem volume using the wood basic density of coniferous trees (0.48 g/cm³), and the LiDAR-derived above-ground biomass can then be estimated from the stem volume using the biomass conversion and expansion factor (BCEF, 1.29) proposed by the Korea Forest Research Institute (KFRI).
NASA Astrophysics Data System (ADS)
Lazri, Mourad; Ameur, Soltane
2016-09-01
In this paper, an algorithm for rainfall estimation from Meteosat Second Generation/Spinning Enhanced Visible and Infrared Imager (MSG-SEVIRI) data, based on the probabilistic classification of rainfall intensities, has been developed. The classification scheme uses various spectral parameters of SEVIRI that provide information about cloud top temperature and optical and microphysical cloud properties. The presented method is developed and trained for the north of Algeria. The calibration of the method is carried out using, as a reference, rain classification fields derived from radar for the rainy season from November 2006 to March 2007. Rainfall rates are assigned to rain areas previously identified and classified according to the precipitation formation processes. Comparisons between the satellite-derived precipitation estimates and validation data show that the developed scheme performs reasonably well. Indeed, the correlation coefficient is significant (r = 0.87). The values of POD, POFD and FAR are 80%, 13% and 25%, respectively. Also, for a rainfall estimate of about 614 mm, the RMSD, Bias, MAD and PD are 102.06 mm, 2.18 mm, 68.07 mm and 12.58, respectively.
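The categorical scores quoted above are standard contingency-table statistics. The sketch below computes POD, POFD, and FAR from boolean rain/no-rain fields; the rain-rate arrays and the 0.1 mm/h threshold are illustrative assumptions, not the study's data.

```python
import numpy as np

def categorical_scores(est_rain, ref_rain):
    """POD, POFD and FAR from boolean rain/no-rain fields (estimate vs reference)."""
    est_rain = np.asarray(est_rain, bool)
    ref_rain = np.asarray(ref_rain, bool)
    hits = np.sum(est_rain & ref_rain)
    misses = np.sum(~est_rain & ref_rain)
    false_alarms = np.sum(est_rain & ~ref_rain)
    correct_neg = np.sum(~est_rain & ~ref_rain)
    pod = hits / (hits + misses)                         # probability of detection
    pofd = false_alarms / (false_alarms + correct_neg)   # probability of false detection
    far = false_alarms / (hits + false_alarms)           # false alarm ratio
    return pod, pofd, far

# Toy example with a 0.1 mm/h rain threshold on illustrative rate fields.
est = np.array([0.0, 0.5, 2.0, 0.0, 1.2, 0.0])
ref = np.array([0.0, 0.8, 1.5, 0.3, 0.0, 0.0])
print(categorical_scores(est > 0.1, ref > 0.1))
```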
Palynofacies assemblages reflect sources of organic matter in New Zealand fjords
NASA Astrophysics Data System (ADS)
Prebble, Joseph G.; Hinojosa, Jessica L.; Moy, Christopher M.
2018-02-01
Understanding sources and transport pathways of organic carbon in fjord systems is important to quantify carbon cycling in coastal settings. The provenance of surficial sediment organic carbon in Fiordland National Park (southwestern New Zealand) has previously been estimated using a range of techniques, including mixing models derived from stable isotopes and lipid biomarker distributions. Here, we present the first application of palynofacies to explore the sources of particulate organic carbon to five fjords along the SW margin of New Zealand, to further discriminate the provenance of organic carbon in the fjords. We find good correlation between isotopic- and biomarker-derived proxies for organic carbon provenance and our new palynofacies observations. We observe strong down-fjord gradients of decreasing terrestrially derived organic carbon away from the river inflow at fjord heads. Fjords with small catchments and minor freshwater inflow exhibit reversed gradients, indicating that the volume of freshwater entering at the fjord head, rather than local transport from the fjord sides, is the primary mechanism transporting particulates down-fjord. The palynofacies data also confirmed previously recorded latitudinal trends (i.e., between fjords) of less frequent and more weathered terrestrially derived organic carbon in the southern fjords, consistent with enhanced marine inflow and longer transport times in the southern catchments. Dinocyst assemblages also exhibit a strong latitudinal gradient, with assemblages dominated by heterotrophic forms in the north. In addition to providing support for previous studies, this approach allows finer discrimination of terrestrial organic carbon than previously possible, for example in the variation of leaf material. This study demonstrates that visual palynofacies analysis is a valuable tool to pinpoint the origins of organic carbon in fjord systems, providing different but complementary information to other proxies.
Warping an atlas derived from serial histology to 5 high-resolution MRIs.
Tullo, Stephanie; Devenyi, Gabriel A; Patel, Raihaan; Park, Min Tae M; Collins, D Louis; Chakravarty, M Mallar
2018-06-19
Previous work from our group demonstrated the use of multiple input atlases in a modified multi-atlas framework (MAGeT-Brain) to improve subject-based segmentation accuracy. Currently, segmentations of the striatum, globus pallidus, and thalamus are generated from a single high-resolution and high-contrast MRI atlas derived from annotated serial histological sections. Here, we warp this atlas to five high-resolution MRI templates to create five de novo atlases. The overall goal of this work is to use these newly warped atlases as input to MAGeT-Brain in an effort to consolidate and improve the workflow presented in previous manuscripts from our group, allowing for simultaneous multi-structure segmentation. The work presented details the methodology used for the creation of the atlases using a previously proposed technique in which atlas labels are modified to mimic the intensity and contrast profile of MRI to facilitate atlas-to-template nonlinear transformation estimation. Dice's kappa metric was used to demonstrate high-quality registration and segmentation accuracy of the atlases. The final atlases are available at https://github.com/CobraLab/atlases/tree/master/5-atlas-subcortical.
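Segmentation agreement of this kind is typically summarized with the Dice overlap, as in the sketch below; the toy volumes are assumptions for illustration only, not the study's label maps.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 3D example: two slightly offset cubes in a 20^3 volume.
vol_a = np.zeros((20, 20, 20), bool); vol_a[5:12, 5:12, 5:12] = True
vol_b = np.zeros((20, 20, 20), bool); vol_b[6:13, 5:12, 5:12] = True
print(round(dice(vol_a, vol_b), 3))
```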
Estimation of groundwater and nutrient fluxes to the Neuse River estuary, North Carolina
Spruill, T.B.; Bratton, J.F.
2008-01-01
A study was conducted between April 2004 and September 2005 to estimate groundwater and nutrient discharge to the Neuse River estuary in North Carolina. The largest groundwater fluxes were observed to occur generally within 20 m of the shoreline. Groundwater flux estimates based on seepage meter measurements ranged from 2.86 × 10⁸ to 4.33 × 10⁸ m³ annually and are comparable to estimates made using radon, a simple water-budget method, and estimates derived by using Darcy's Law and previously published general aquifer characteristics of the area. The lower groundwater flux estimate (equal to about 9 m³ s⁻¹), which assumed the narrowest groundwater discharge zone (20 m) of three zone widths selected for an area west of New Bern, North Carolina, most closely agrees with groundwater flux estimates made using radon (3-9 m³ s⁻¹) and Darcy's Law (about 9 m³ s⁻¹). A groundwater flux of 9 m³ s⁻¹ is about 40% of the surface-water flow to the Neuse River estuary between Streets Ferry and the mouth of the estuary and about 7% of the surface-water inflow from areas upstream. Estimates of annual nitrogen (333 tonnes) and phosphorus (66 tonnes) fluxes from groundwater to the estuary, based on this analysis, are less than 6% of the nitrogen and phosphorus inputs derived from all sources (excluding oceanic inputs), and approximately 8% of the nitrogen and 17% of the phosphorus annual inputs from surface-water inflow to the Neuse River estuary, assuming a mean annual precipitation of 1.27 m. We provide quantitative evidence, derived from three methods, that the contribution of water and nutrients from groundwater discharge to the Neuse River estuary is relatively minor, particularly compared with upstream sources of water and nutrients and with bottom sediment sources of nutrients. Locally high groundwater discharges do occur, however, and could help explain the occurrence of localized phytoplankton blooms, submerged aquatic vegetation, or fish kills.
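As a reminder of how the Darcy's Law estimate enters such a comparison, the sketch below evaluates Q = K i A with placeholder aquifer properties; none of the numbers are the published values for the Neuse River estuary.

```python
# Darcy's Law sketch: Q = K * i * A, with placeholder values rather than the
# study's aquifer characteristics.
K = 1e-4            # hydraulic conductivity (m/s), assumed
i = 0.005           # hydraulic gradient toward the estuary (-), assumed
shoreline_m = 120e3 # length of shoreline receiving discharge (m), assumed
thickness_m = 15.0  # effective aquifer thickness (m), assumed

A = shoreline_m * thickness_m   # cross-sectional area perpendicular to flow (m^2)
Q = K * i * A                   # groundwater flux (m^3/s)
print(f"Q = {Q:.1f} m^3/s, {Q * 3.15e7:.2e} m^3/yr")
```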
NASA Astrophysics Data System (ADS)
Jaenisch, Holger; Handley, James
2013-06-01
We introduce a generalized numerical prediction and forecasting algorithm. We have previously published it for malware byte sequence feature prediction and generalized distribution modeling for disparate test article analysis. We show how non-trivial non-periodic extrapolation of a numerical sequence (forecast and backcast) from the starting data is possible. Our ancestor-progeny prediction can yield new options for evolutionary programming. Our equations enable analytical integrals and derivatives to any order. Interpolation is controllable from smooth continuous to fractal structure estimation. We show how our generalized trigonometric polynomial can be derived using a Fourier transform.
A global trait-based approach to estimate leaf nitrogen functional allocation from observations
Ghimire, Bardan; Riley, William J.; Koven, Charles D.; ...
2017-03-28
Nitrogen is one of the most important nutrients for plant growth and a major constituent of proteins that regulate photosynthetic and respiratory processes. However, a comprehensive global analysis of nitrogen allocation in leaves for major processes with respect to different plant functional types is currently lacking. This study integrated observations from global databases with photosynthesis and respiration models to determine plant-functional-type-specific allocation patterns of leaf nitrogen for photosynthesis (Rubisco, electron transport, light absorption) and respiration (growth and maintenance), and by difference from observed total leaf nitrogen, an unexplained “residual” nitrogen pool. Based on our analysis, crops partition the largest fraction of nitrogen to photosynthesis (57%) and respiration (5%) followed by herbaceous plants (44% and 4%). Tropical broadleaf evergreen trees partition the least to photosynthesis (25%) and respiration (2%) followed by needle-leaved evergreen trees (28% and 3%). In trees (especially needle-leaved evergreen and tropical broadleaf evergreen trees) a large fraction (70% and 73% respectively) of nitrogen was not explained by photosynthetic or respiratory functions. Compared to crops and herbaceous plants, this large residual pool is hypothesized to emerge from larger investments in cell wall proteins, lipids, amino acids, nucleic acid, CO2 fixation proteins (other than Rubisco), secondary compounds, and other proteins. Our estimates are different from previous studies due to differences in methodology and assumptions used in deriving nitrogen allocation estimates. Unlike previous studies, we integrate and infer nitrogen allocation estimates across multiple plant functional types, and report substantial differences in nitrogen allocation across different plant functional types. Furthermore, the resulting pattern of nitrogen allocation provides insights on mechanisms that operate at a cellular scale within leaves, and can be integrated with ecosystem models to derive emergent properties of ecosystem productivity at local, regional, and global scales.
HUBBLE SPACE TELESCOPE FAR ULTRAVIOLET SPECTROSCOPY OF THE RECURRENT NOVA T PYXIDIS
Godon, Patrick; Sion, Edward M.; Starrfield, Sumner; Livio, Mario; Williams, Robert E.; Woodward, Charles E.; Kuin, Paul; Page, Kim L.
2018-01-01
With six recorded nova outbursts, the prototypical recurrent nova T Pyxidis (T Pyx) is the ideal cataclysmic variable system to assess the net change of the white dwarf mass within a nova cycle. Recent estimates of the mass ejected in the 2011 outburst ranged from a few ~10−5 M⊙ to 3.3 × 10−4 M⊙, and assuming a mass accretion rate of 10−8−10−7 M⊙ yr−1 for 44 yr, it has been concluded that the white dwarf in T Pyx is actually losing mass. Using NLTE disk modeling spectra to fit our recently obtained Hubble Space Telescope COS and STIS spectra, we find a mass accretion rate of up to two orders of magnitude larger than previously estimated. Our larger mass accretion rate is due mainly to the newly derived distance of T Pyx (4.8 kpc, larger than the previous 3.5 kpc estimate), our derived reddening of E(B − V) = 0.35 (based on combined IUE and GALEX spectra), and NLTE disk modeling (compared to blackbody and raw flux estimates in earlier works). We find that for most values of the reddening (0.25 ≤ E(B−V) ≤ 0.50) and white dwarf mass (0.70 M⊙ ≤ Mwd ≤ 1.35 M⊙) the accreted mass is larger than the ejected mass. Only for a low reddening (~0.25 and smaller) combined with a large white dwarf mass (0.9 M⊙ and larger) is the ejected mass larger than the accreted one. However, the best results are obtained for a larger value of reddening. PMID:29430290
Robo-AO Kepler Survey. IV. The Effect of Nearby Stars on 3857 Planetary Candidate Systems
NASA Astrophysics Data System (ADS)
Ziegler, Carl; Law, Nicholas M.; Baranec, Christoph; Riddle, Reed; Duev, Dmitry A.; Howard, Ward; Jensen-Clem, Rebecca; Kulkarni, S. R.; Morton, Tim; Salama, Maïssa
2018-04-01
We present the overall statistical results from the Robo-AO Kepler planetary candidate survey, comprising 3857 high-angular resolution observations of planetary candidate systems with Robo-AO, an automated laser adaptive optics system. These observations reveal previously unknown nearby stars blended with the planetary candidate host stars that alter the derived planetary radii or may be the source of an astrophysical false positive transit signal. In the first three papers in the survey, we detected 440 nearby stars around 3313 planetary candidate host stars. In this paper, we present observations of 532 planetary candidate host stars, detecting 94 companions around 88 stars; 84 of these companions have not previously been observed in high resolution. We also report 50 more widely separated companions near 715 targets previously observed by Robo-AO. We derive corrected planetary radius estimates for the 814 planetary candidates in systems with a detected nearby star. If planetary candidates are equally likely to orbit the primary or secondary star, the radius estimates for planetary candidates in systems with likely bound nearby stars increase by a factor of 1.54, on average. We find that 35 planet candidates previously believed to be rocky are likely not rocky due to the presence of nearby stars. From the combined data sets from the complete Robo-AO KOI survey, we find that 14.5 ± 0.5% of planetary candidate hosts have a nearby star within 4″, while 1.2% have two nearby stars, and 0.08% have three. We find that 16% of Earth-sized, 13% of Neptune-sized, 14% of Saturn-sized, and 19% of Jupiter-sized planet candidates have detected nearby stars.
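The radius corrections summarized above follow from flux dilution by the blended companion. The sketch below applies the standard correction factors for the two host-star scenarios; the magnitude differences are illustrative, and the secondary-host case is simplified by ignoring the difference in stellar radii.

```python
import numpy as np

def radius_correction_primary(delta_mag):
    """Radius correction factor if the planet orbits the brighter (primary) star
    and the transit depth was measured on the blended light curve."""
    flux_ratio = 10 ** (-0.4 * delta_mag)        # companion flux / primary flux
    return np.sqrt(1.0 + flux_ratio)

def radius_correction_secondary(delta_mag):
    """Correction factor if the planet instead orbits the fainter companion
    (simplified: ignores the difference in stellar radii)."""
    flux_ratio = 10 ** (-0.4 * delta_mag)
    return np.sqrt(1.0 + 1.0 / flux_ratio)

for dm in (0.0, 1.0, 3.0):                        # illustrative magnitude differences
    print(dm, round(radius_correction_primary(dm), 3),
          round(radius_correction_secondary(dm), 2))
```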
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rampadarath, H.; Morgan, J. S.; Tingay, S. J.
2014-01-01
The results of multi-epoch observations of the southern starburst galaxy NGC 253 with the Australian Long Baseline Array at 2.3 GHz are presented. As with previous radio interferometric observations of this galaxy, no new sources were discovered. By combining the results of this survey with Very Large Array observations at higher frequencies from the literature, spectra were derived and a free-free absorption model was fitted to 20 known sources in NGC 253. The results were found to be consistent with previous studies. The supernova remnant 5.48-43.3 was imaged with the highest sensitivity and resolution to date, revealing a two-lobed morphology. Comparisons with previous observations of similar resolution give an upper limit of 10⁴ km s⁻¹ for the expansion speed of this remnant. We derive a supernova rate of <0.2 yr⁻¹ for the inner 300 pc using a model that improves on previous methods by incorporating an improved radio supernova peak luminosity distribution and by making use of multi-wavelength radio data spanning 21 yr. A star formation rate of SFR(M ≥ 5 M☉) < 4.9 M☉ yr⁻¹ was also estimated using the standard relation between supernova and star formation rates. Our improved estimates of supernova and star formation rates are consistent with studies at other wavelengths. The results of our study point to the possible existence of a small population of undetected supernova remnants, suggesting a low rate of radio supernova production in NGC 253.
The Size Distribution of Near-Earth Objects Larger Than 10 m
NASA Astrophysics Data System (ADS)
Trilling, D. E.; Valdes, F.; Allen, L.; James, D.; Fuentes, C.; Herrera, D.; Axelrod, T.; Rajagopal, J.
2017-10-01
We analyzed data from the first year of a survey for Near-Earth Objects (NEOs) that we are carrying out with the Dark Energy Camera (DECam) on the 4 m Blanco telescope at the Cerro Tololo Inter-American Observatory. We implanted synthetic NEOs into the data stream to derive our nightly detection efficiency as a function of magnitude and rate of motion. Using these measured efficiencies and the solar system absolute magnitudes derived by the Minor Planet Center for the 1377 measurements of 235 unique NEOs detected, we directly derive, for the first time from a single observational data set, the NEO size distribution from 1 km down to 10 m. We find that there are 10^6.6 NEOs larger than 10 m. This result implies a factor of 10 fewer small NEOs than some previous results, though our derived size distribution is in good agreement with several other estimates.
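Two standard ingredients of such a derivation are the conversion from absolute magnitude H to diameter for an assumed albedo, and the debiasing of detected counts by the measured detection efficiency; the sketch below illustrates both with placeholder counts and efficiencies rather than the survey's values.

```python
import numpy as np

def diameter_km(H, albedo=0.14):
    """Asteroid diameter from absolute magnitude H for an assumed geometric albedo,
    using the standard relation D = 1329 km / sqrt(p_V) * 10**(-H/5)."""
    return 1329.0 / np.sqrt(albedo) * 10 ** (-0.2 * H)

def debiased_counts(detected, efficiency):
    """Correct detected counts per H bin by the survey detection efficiency."""
    detected = np.asarray(detected, float)
    efficiency = np.asarray(efficiency, float)
    return np.where(efficiency > 0, detected / efficiency, np.nan)

H_bins = np.array([22.0, 24.0, 26.0, 28.0])          # H ~ 28 corresponds to ~10 m
print(diameter_km(H_bins).round(4))                  # diameters in km
# Illustrative detections and efficiencies per bin (not the survey's values).
print(debiased_counts([40, 90, 70, 35], [0.8, 0.5, 0.2, 0.05]).round(0))
```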
Remnants of an Ancient Deltaretrovirus in the Genomes of Horseshoe Bats (Rhinolophidae).
Hron, Tomáš; Farkašová, Helena; Gifford, Robert J; Benda, Petr; Hulva, Pavel; Görföl, Tamás; Pačes, Jan; Elleder, Daniel
2018-04-10
Endogenous retrovirus (ERV) sequences provide a rich source of information about the long-term interactions between retroviruses and their hosts. However, most ERVs are derived from a subset of retrovirus groups, while ERVs derived from certain other groups remain extremely rare. In particular, only a single ERV sequence has been identified that shows evidence of being related to an ancient Deltaretrovirus, despite the large number of vertebrate genome sequences now available. In this report, we identify a second example of an ERV sequence putatively derived from a past deltaretroviral infection, in the genomes of several species of horseshoe bats (Rhinolophidae). This sequence represents a fragment of viral genome derived from a single integration. The time of the integration was estimated to be 11-19 million years ago. This finding, together with the previously identified endogenous Deltaretrovirus in long-fingered bats (Miniopteridae), suggests a close association of bats with ancient deltaretroviruses.
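One widely used way to date such an integration is a molecular clock on the divergence accumulated between two copies that were identical at insertion (e.g., paired LTRs or orthologous copies in two host lineages), T ≈ d/(2μ). The sketch below applies that relation with placeholder numbers; it is an assumption-laden illustration, not necessarily the dating procedure used in this paper.

```python
def integration_age_myr(divergence, subs_per_site_per_myr):
    """Molecular-clock age estimate: two copies identical at integration diverge
    at twice the per-lineage neutral rate, so T = d / (2 * mu)."""
    return divergence / (2.0 * subs_per_site_per_myr)

# Placeholder numbers purely for illustration (not from the paper):
d = 0.06    # pairwise divergence between the two compared copies
mu = 0.002  # neutral substitutions per site per Myr in the host lineage
print(f"{integration_age_myr(d, mu):.0f} Myr")
```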
A new maximum-likelihood change estimator for two-pass SAR coherent change detection
Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; ...
2016-01-11
In previous research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.
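For contrast with the new CRCD metric, the conventional change measure it replaces, the sample coherence magnitude over a local window, can be computed as in the sketch below; the window size and simulated image pair are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sample_coherence(img1, img2, win=5):
    """Sample coherence magnitude between two coregistered complex SAR images,
    estimated with a moving boxcar window (the conventional CCD metric, not the
    paper's CRCD estimator)."""
    cross = img1 * np.conj(img2)
    num = uniform_filter(np.real(cross), win) + 1j * uniform_filter(np.imag(cross), win)
    den = np.sqrt(uniform_filter(np.abs(img1) ** 2, win) *
                  uniform_filter(np.abs(img2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

rng = np.random.default_rng(1)
shape = (128, 128)
a = rng.normal(size=shape) + 1j * rng.normal(size=shape)
b = 0.9 * a + 0.44 * (rng.normal(size=shape) + 1j * rng.normal(size=shape))
print(sample_coherence(a, b).mean().round(2))   # should sit near the true coherence (~0.9)
```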
Recommended approaches in the application of ...
ABSTRACT: Only a fraction of chemicals in commerce have been fully assessed for their potential hazards to human health due to difficulties involved in conventional regulatory tests. It has recently been proposed that quantitative transcriptomic data can be used to determine a benchmark dose (BMD) and estimate a point of departure (POD). Several studies have shown that transcriptional PODs correlate with PODs derived from analysis of pathological changes, but there is no consensus on how the genes that are used to derive a transcriptional POD should be selected. Because of the very large number of unrelated genes in gene expression data, the process of selecting subsets of informative genes is a major challenge. We used published microarray data from studies on rats exposed orally to multiple doses of six chemicals for 5, 14, 28, and 90 days. We evaluated eight different approaches to selecting genes for POD derivation and compared them to three previously proposed approaches. The transcriptional BMDs derived using these 11 approaches were compared with PODs derived from apical data that might be used in a human health risk assessment. We found that transcriptional benchmark dose values for all 11 approaches were remarkably well aligned with different apical PODs, while a subset of between 3 and 8 of the approaches met standard statistical criteria across the 5-, 14-, 28-, and 90-day time points and thus qualify as effective estimates of apical PODs. Our r
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.; Shafer, M. F.
1976-01-01
In response to the interest in airplane configuration characteristics at high angles of attack, an unpowered remotely piloted 3/8-scale F-15 airplane model was flight tested. The subsonic stability and control characteristics of this airplane model over an angle of attack range of -20 to 53 deg are documented. The remotely piloted technique for obtaining flight test data was found to provide adequate stability and control derivatives. The remotely piloted technique provided an opportunity to test the aircraft mathematical model in an angle of attack regime not previously examined in flight test. The variation of most of the derivative estimates with angle of attack was found to be consistent, particularly when the data were supplemented by uncertainty levels.
Fitting power-laws in empirical data with estimators that work for all exponents
Hanel, Rudolf; Corominas-Murtra, Bernat; Liu, Bo; Thurner, Stefan
2017-01-01
Most standard methods based on maximum likelihood (ML) estimates of power-law exponents can only be reliably used to identify exponents smaller than minus one. The argument that power laws are otherwise not normalizable depends on the underlying sample space the data are drawn from, and is true only for sample spaces that are unbounded from above. Power laws obtained from bounded sample spaces (as is the case for practically all data-related problems) are always free of such limitations, and maximum likelihood estimates can be obtained for arbitrary powers without restrictions. Here we first derive the appropriate ML estimator for arbitrary exponents of power-law distributions on bounded discrete sample spaces. We then show that an almost identical estimator also works perfectly for continuous data. We implemented this ML estimator and compare its performance with that of previous attempts. We present a general recipe for how to use these estimators and provide the associated computer codes. PMID:28245249
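A numerical version of the bounded-support ML fit can be written in a few lines: the normalization is a finite sum over the support, so the likelihood is well defined for any real exponent. The sketch below (not the authors' published code) recovers a shallow exponent from synthetic discrete data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_powerlaw_exponent(samples, xmin, xmax):
    """ML estimate of alpha for p(k) proportional to k**(-alpha) on the bounded
    discrete support {xmin, ..., xmax}. Because the support is bounded, the
    likelihood is well defined for any real alpha (including alpha <= 1)."""
    samples = np.asarray(samples, float)
    support = np.arange(xmin, xmax + 1, dtype=float)
    mean_log = np.mean(np.log(samples))

    def neg_loglik(alpha):
        log_z = np.log(np.sum(support ** (-alpha)))   # finite normalization
        return alpha * mean_log + log_z               # per-sample negative log-likelihood

    return minimize_scalar(neg_loglik, bounds=(-5.0, 10.0), method="bounded").x

# Check on synthetic data with a shallow exponent (alpha < 1), which unbounded
# estimators cannot handle.
rng = np.random.default_rng(2)
support = np.arange(1, 1001)
true_alpha = 0.7
p = support ** (-true_alpha); p = p / p.sum()
data = rng.choice(support, size=20000, p=p)
print(round(fit_powerlaw_exponent(data, 1, 1000), 3))
```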
On the Methods for Estimating the Corneoscleral Limbus.
Jesus, Danilo A; Iskander, D Robert
2017-08-01
The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized by high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric analysis of variance (ANOVA) test, p < 0.05). Precise topographical limbus demarcation is possible either from the frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only-based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show the utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
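A compact single-model version of the bootstrap idea is sketched below: fit a logistic dose-response curve, invert it for the BMD at 10% extra risk, and take a BMDL from a parametric-bootstrap percentile. The data, the logistic form, and the 5th-percentile choice are illustrative assumptions, and the model-averaging step of the paper is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit, logit

doses = np.array([0.0, 10.0, 50.0, 100.0, 400.0])      # illustrative design
n = np.array([50, 50, 50, 50, 50])
tumours = np.array([2, 4, 10, 18, 40])                 # illustrative counts

def model(d, a, b):
    return expit(a + b * d)                             # logistic dose-response

def bmd(a, b, bmr=0.10):
    """Dose giving extra risk = BMR: solve expit(a + b*d) = p0 + bmr*(1 - p0)."""
    p0 = expit(a)
    return (logit(p0 + bmr * (1.0 - p0)) - a) / b

p_hat, _ = curve_fit(model, doses, tumours / n, p0=[-3.0, 0.01])
bmd_hat = bmd(*p_hat)

# Parametric bootstrap: resample counts from the fitted model, refit, recompute BMD.
rng = np.random.default_rng(3)
boot = []
for _ in range(500):
    y = rng.binomial(n, model(doses, *p_hat)) / n
    try:
        pb, _ = curve_fit(model, doses, y, p0=p_hat, maxfev=5000)
        boot.append(bmd(*pb))
    except RuntimeError:
        continue
print(round(bmd_hat, 1), round(np.percentile(boot, 5), 1))   # BMD and 5th-percentile BMDL
```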
Modelling rainfall interception by forests: a new method for estimating the canopy storage capacity
NASA Astrophysics Data System (ADS)
Pereira, Fernando; Valente, Fernanda; Nóbrega, Cristina
2015-04-01
Evaporation of rainfall intercepted by forests is usually an important part of a catchment water balance. Recognizing the importance of interception loss, several models of the process have been developed. A key parameter of these models is the canopy storage capacity (S), commonly estimated by the so-called Leyton method. However, this method is somewhat subjective in the selection of the storms used to derive S, which is particularly critical when throughfall is highly variable in space. To overcome these problems, a new method for estimating S was proposed in 2009 by Pereira et al. (Agricultural and Forest Meteorology, 149: 680-688), which uses information from a larger number of storms, is less sensitive to throughfall spatial variability, and is consistent with the formulation of the two most widely used rainfall interception models, the Gash analytical model and the Rutter model. However, this method has a drawback: it does not account for stemflow (Sf). To allow a wider use of this methodology, we now propose a revised version which makes the estimation of S independent of the importance of stemflow. For the application of this new version we only need to establish a linear regression of throughfall vs. gross rainfall using data from all storms large enough to saturate the canopy. Two of the parameters used by the Gash and Rutter models, pd (the drainage partitioning coefficient) and S, are then derived from the regression coefficients: pd is estimated first, which then allows the derivation of S; if Sf is not considered, S can be estimated by setting pd = 0. This new method was tested using data from a eucalyptus plantation, a maritime pine forest, and a traditional olive grove, all located in Central Portugal. For both the eucalyptus and the pine forests, pd and S estimated by this new approach were comparable to the values derived in previous studies using the standard procedures. In the case of the traditional olive grove, the estimates obtained by this methodology for pd and S allowed interception loss to be modelled with a normalized average error of less than 4%. Globally, these results confirm that the method is more robust and certainly less subjective, providing adequate estimates of pd and S which, in turn, are crucial for a good performance of the interception models.
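A minimal sketch of the regression step is given below for the stemflow-neglected case (pd = 0): throughfall is regressed on gross rainfall for storms above a saturation threshold, and S is taken as the negative intercept. The storm totals and threshold are placeholders, and reading S directly from the intercept is an assumption of this illustration rather than the full formulation of Pereira et al.

```python
import numpy as np

# Illustrative storm totals (mm); only storms above a saturation threshold are used.
gross = np.array([4.0, 6.5, 8.0, 11.0, 15.5, 22.0, 30.0, 41.0])
throughfall = np.array([2.4, 4.5, 5.8, 8.4, 12.3, 17.9, 24.8, 34.3])
saturation_threshold = 5.0                       # mm, assumed

mask = gross >= saturation_threshold
slope, intercept = np.polyfit(gross[mask], throughfall[mask], 1)

# Neglecting stemflow (pd = 0), take the canopy storage capacity as the
# negative intercept of the fitted line -- a simplified reading of the method,
# not the exact expressions of the revised approach.
S = -intercept
print(f"slope = {slope:.2f}, S ~ {S:.2f} mm")
```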
A projection of lesser prairie chicken (Tympanuchus pallidicinctus) populations range-wide
Cummings, Jonathan W.; Converse, Sarah J.; Moore, Clinton T.; Smith, David R.; Nichols, Clay T.; Allan, Nathan L.; O'Meilia, Chris M.
2017-08-09
We built a population viability analysis (PVA) model to predict future population status of the lesser prairie-chicken (Tympanuchus pallidicinctus, LEPC) in four ecoregions across the species’ range. The model results will be used in the U.S. Fish and Wildlife Service's (FWS) Species Status Assessment (SSA) for the LEPC. Our stochastic projection model combined demographic rate estimates from previously published literature with demographic rate estimates that integrate the influence of climate conditions. This LEPC PVA projects declining populations with estimated population growth rates well below 1 in each ecoregion regardless of habitat or climate change. These results are consistent with estimates of LEPC population growth rates derived from other demographic process models. Although the absolute magnitude of the decline is unlikely to be as low as modeling tools indicate, several different lines of evidence suggest LEPC populations are declining.
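For readers unfamiliar with the structure of such projections, the toy scalar PVA below draws a yearly growth rate, projects abundance, and summarizes quasi-extinction risk; the starting abundance, growth-rate distribution, and threshold are placeholder assumptions, not the LEPC estimates.

```python
import numpy as np

rng = np.random.default_rng(4)
n_reps, n_years = 1000, 30
n0 = 5000                              # starting abundance, assumed
mean_lambda, sd_lambda = 0.92, 0.10    # placeholder declining growth rate, not LEPC values
quasi_ext = 250                        # quasi-extinction threshold, assumed

final = np.empty(n_reps)
ext = np.zeros(n_reps, bool)
for r in range(n_reps):
    n = float(n0)
    for _ in range(n_years):
        lam = rng.lognormal(np.log(mean_lambda), sd_lambda)  # environmental stochasticity
        n *= lam
        if n < quasi_ext:
            ext[r] = True
    final[r] = n

print(f"median N after {n_years} yr: {np.median(final):.0f}")
print(f"P(quasi-extinction): {ext.mean():.2f}")
```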
Braun, Fabian; Proença, Martin; Adler, Andy; Riedel, Thomas; Thiran, Jean-Philippe; Solà, Josep
2018-01-01
Cardiac output (CO) and stroke volume (SV) are parameters of key clinical interest. Many techniques exist to measure CO and SV, but they are either invasive or insufficiently accurate in clinical settings. Electrical impedance tomography (EIT) has been suggested as a noninvasive measure of SV, but inconsistent results have been reported. Our goal is to determine the accuracy and reliability of EIT-based SV measurements, and whether advanced image reconstruction approaches can help to improve the estimates. Data were collected on ten healthy volunteers undergoing postural changes and exercise. To overcome the sensitivity to heart displacement and thorax morphology reported in previous work, we used a 3D EIT configuration with 2 planes of 16 electrodes and subject-specific reconstruction models. Various EIT-derived SV estimates were compared to reference measurements derived from the oxygen uptake. Results revealed a dramatic impact of posture on the EIT images. Therefore, the analysis was restricted to measurements in supine position under controlled conditions (low noise and stable heart and lung regions). In these measurements, amplitudes of impedance changes in the heart and lung regions could successfully be derived from EIT using ECG gating. However, despite a subject-specific calibration the heart-related estimates showed an error of 0.0 ± 15.2 mL for absolute SV estimation. For trending of relative SV changes, a concordance rate of 80.9% and an angular error of -1.0 ± 23.0° were obtained. These performances are insufficient for most clinical uses. Similar conclusions were derived from lung-related estimates. Our findings indicate that the key difficulty in EIT-based SV monitoring is that purely amplitude-based features are strongly influenced by other factors (such as posture, electrode contact impedance and lung or heart conductivity). All the data of the present study are made publicly available for further investigations.
NASA Astrophysics Data System (ADS)
White, Emily; Rigby, Matt; O'Doherty, Simon; Stavert, Ann; Lunt, Mark; Nemitz, Eiko; Helfter, Carole; Allen, Grant; Pitt, Joe; Bauguitte, Stéphane; Levy, Pete; van Oijen, Marcel; Williams, Mat; Smallman, Luke; Palmer, Paul
2016-04-01
Having a comprehensive understanding, on a countrywide scale, of both biogenic and anthropogenic CO2 emissions is essential for knowing how best to reduce anthropogenic emissions and for understanding how the terrestrial biosphere is responding to global fossil fuel emissions. Whilst anthropogenic CO2 flux estimates are fairly well constrained, fluxes from biogenic sources are not. This work will help to verify existing anthropogenic emissions inventories and give a better understanding of biosphere-atmosphere CO2 exchange. Using an innovative top-down inversion scheme, a hierarchical Bayesian Markov chain Monte Carlo approach with reversible-jump "trans-dimensional" basis function selection, we aim to find emissions estimates for biogenic and anthropogenic sources simultaneously. Our approach allows flux uncertainties to be derived more comprehensively than with previous methods, and allows the resolved spatial scales in the solution to be determined using the data. We use atmospheric CO2 mole fraction data from the UK Deriving Emissions related to Climate Change (DECC) and Greenhouse gAs UK and Global Emissions (GAUGE) projects. The network comprises 6 tall tower sites, flight campaigns, and a ferry transect along the east coast, and enables us to derive high-resolution monthly flux estimates across the UK and Ireland for the period 2013-2015. We have derived UK total fluxes of 675 ± 78 Tg/yr during January 2014 (seasonal maximum) and 23 ± 96 Tg/yr during May 2014 (seasonal minimum). Our disaggregated anthropogenic and biogenic flux estimates are compared to a new high-resolution, time-resolved anthropogenic inventory that will underpin future UNFCCC reports by the UK, and to the DALEC carbon cycle model. This allows us to identify where significant differences exist between these "bottom-up" and "top-down" flux estimates and suggest reasons for discrepancies. We will highlight the strengths and limitations of the UK's CO2 emissions verification infrastructure at present and outline improvements that could be made in the future.
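As a much-simplified counterpart to the hierarchical trans-dimensional MCMC used here, the sketch below performs an analytical Gaussian synthesis inversion, updating prior fluxes with simulated mole-fraction data; the sensitivity matrix and uncertainties are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_flux, n_obs = 10, 40

x_prior = np.full(n_flux, 1.0)                       # prior fluxes (arbitrary units)
B = np.diag(np.full(n_flux, 0.5 ** 2))               # prior error covariance
H = np.abs(rng.normal(size=(n_obs, n_flux)))         # toy sensitivity (footprint) matrix
R = np.diag(np.full(n_obs, 0.2 ** 2))                # observation error covariance

x_true = x_prior + rng.normal(scale=0.5, size=n_flux)
y = H @ x_true + rng.normal(scale=0.2, size=n_obs)   # synthetic mole-fraction data

# Analytical Gaussian posterior (Bayesian synthesis inversion):
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_post = x_prior + K @ (y - H @ x_prior)
P_post = B - K @ H @ B

print(np.round(x_post - x_true, 2))                  # posterior minus truth
print(np.round(np.sqrt(np.diag(P_post)), 2))         # posterior flux uncertainties
```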
Observations and implications of large-amplitude longitudinal oscillations in a solar filament
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luna, M.; Knizhnik, K.; Muglach, K.
On 2010 August 20, an energetic disturbance triggered large-amplitude longitudinal oscillations in a nearby filament. The triggering mechanism appears to be episodic jets connecting the energetic event with the filament threads. In the present work, we analyze this periodic motion in a large fraction of the filament to characterize the underlying physics of the oscillation as well as the filament properties. The results support our previous theoretical conclusions that the restoring force of large-amplitude longitudinal oscillations is solar gravity, and the damping mechanism is the ongoing accumulation of mass onto the oscillating threads. Based on our previous work, we used the fitted parameters to determine the magnitude and radius of curvature of the dipped magnetic field along the filament, as well as the mass accretion rate onto the filament threads. These derived properties are nearly uniform along the filament, indicating a remarkable degree of cohesiveness throughout the filament channel. Moreover, the estimated mass accretion rate implies that the footpoint heating responsible for the thread formation, according to the thermal nonequilibrium model, agrees with previous coronal heating estimates. We estimate the magnitude of the energy released in the nearby event by studying the dynamic response of the filament threads, and discuss the implications of our study for filament structure and heating.
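Under the pendulum model referred to above, the radius of curvature of the dipped field follows directly from the oscillation period, R = g P²/(4π²). The sketch below evaluates that relation for an illustrative period; the period value is an assumption, not a measurement from this filament.

```python
import numpy as np

g_sun = 274.0                                   # solar surface gravity (m/s^2)

def dip_radius_megameters(period_minutes):
    """Radius of curvature of the magnetic dip from the oscillation period,
    R = g * P**2 / (4 * pi**2), under the pendulum model for longitudinal
    filament oscillations."""
    P = period_minutes * 60.0                   # period in seconds
    return g_sun * P**2 / (4.0 * np.pi**2) / 1e6   # megametres

print(round(dip_radius_megameters(50.0), 1))    # illustrative 50-minute period
```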
NASA Technical Reports Server (NTRS)
Benedict, G. F.; McArthur, Barbara E.; Napiwotzki, Ralf; Harrison, Thomas E.; Harris, Hugh C.; Nelan, Edmund; Bond, Howard E; Patterson, Richard J.; Ciardullo, Robin
2009-01-01
We present absolute parallaxes and relative proper motions for the central stars of the planetary nebulae NGC 6853 (The Dumbbell), NGC 7293 (The Helix), Abell 31, and DeHt 5. This paper details our reduction and analysis using DeHt 5 as an example. We obtain these planetary nebula nuclei (PNNi) parallaxes with astrometric data from Fine Guidance Sensors FGS 1r and FGS 3, white-light interferometers on the Hubble Space Telescope. Proper motions, spectral classifications and VJHK (2MASS) and DDO51 photometry of the stars comprising the astrometric reference frames provide spectrophotometric estimates of reference star absolute parallaxes. Introducing these into our model as observations with error, we determine absolute parallaxes for each PNN. Weighted averaging with previous independent parallax measurements yields an average parallax precision, σ_π/π = 5%. Derived distances are: d(NGC 6853) = 405 +28/−25 pc, d(NGC 7293) = 216 +14/−12 pc, d(Abell 31) = 621 +91/−70 pc, and d(DeHt 5) = 345 +19/−17 pc. These PNNi distances are all smaller than previously derived from spectroscopic analyses of the central stars. To obtain absolute magnitudes from these distances requires estimates of interstellar extinction. We average extinction measurements culled from the literature, from reddening based on PNNi intrinsic colors derived from model SEDs, and an assumption that each PNN experiences the same rate of extinction as a function of distance as do the reference stars nearest (in angular separation) to each central star. We also apply Lutz-Kelker bias corrections. The absolute magnitudes and effective temperatures permit estimates of PNNi radii through both the Stefan-Boltzmann relation and Eddington fluxes. Comparing absolute magnitudes with post-AGB models provides mass estimates. Masses cluster around 0.57 solar masses, close to the peak of the white dwarf mass distribution. Adding a few more PNNi with well-determined distances and masses, we compare all the PNNi with cooler white dwarfs of similar mass, and confirm, as expected, that PNNi have larger radii than white dwarfs that have reached their final cooling tracks.
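The step from parallax and apparent magnitude to an extinction-corrected absolute magnitude is the standard relation M = m + 5 log10(π_arcsec) + 5 − A, as in the sketch below; the input values are illustrative and the Lutz-Kelker correction applied in the paper is omitted.

```python
import numpy as np

def absolute_magnitude(m_app, parallax_mas, extinction=0.0):
    """Extinction-corrected absolute magnitude from apparent magnitude and
    parallax: M = m + 5*log10(pi_arcsec) + 5 - A. Lutz-Kelker corrections,
    which the paper applies, are omitted here."""
    pi_arcsec = parallax_mas / 1000.0
    return m_app + 5.0 * np.log10(pi_arcsec) + 5.0 - extinction

# Illustrative values only (not the measured quantities for any of these PNNi):
print(round(absolute_magnitude(m_app=14.0, parallax_mas=2.5, extinction=0.3), 2))
```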
NASA Technical Reports Server (NTRS)
Frehlich, Rod
1993-01-01
Calculations of the exact Cramér-Rao bound (CRB) for unbiased estimates of the mean frequency, signal power, and spectral width of Doppler radar/lidar signals (a Gaussian random process) are presented. Approximate CRBs are derived using the discrete Fourier transform (DFT). These approximate results are equal to the exact CRB when the DFT coefficients are mutually uncorrelated. Previous high-SNR limits for CRBs are shown to be inaccurate because the discrete summations cannot be approximated with integration. The performance of an approximate maximum likelihood estimator for mean frequency approaches the exact CRB for moderate signal-to-noise ratio and moderate spectral width.
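A numerical route to such an exact CRB is sketched below: build the covariance matrix of the complex Gaussian sample vector for a Gaussian-spectrum signal in white noise, form the Fisher information with the standard circular-complex-Gaussian identity, and invert. The signal parameters are illustrative, and the parameter derivatives are taken numerically rather than in closed form.

```python
import numpy as np

def covariance(theta, n, ts):
    """Covariance matrix of n complex samples of a Gaussian-spectrum signal in
    white noise: theta = (signal power S, mean frequency f, spectral width w,
    noise power N)."""
    S, f, w, N = theta
    k = np.arange(n)
    lag = k[:, None] - k[None, :]
    r = S * np.exp(-2 * (np.pi * w * lag * ts) ** 2) * np.exp(2j * np.pi * f * lag * ts)
    return r + N * np.eye(n)

def crb(theta, n=64, ts=1.0, eps=1e-5):
    """CRB via the Fisher information of a zero-mean circular complex Gaussian
    vector: FIM_ij = Re tr(R^-1 dR/dtheta_i R^-1 dR/dtheta_j)."""
    R = covariance(theta, n, ts)
    Rinv = np.linalg.inv(R)
    dR = []
    for i in range(len(theta)):
        tp = np.array(theta, float); tm = tp.copy()
        step = eps * max(abs(theta[i]), 1.0)
        tp[i] += step; tm[i] -= step
        dR.append((covariance(tp, n, ts) - covariance(tm, n, ts)) / (2 * step))
    fim = np.array([[np.real(np.trace(Rinv @ dR[i] @ Rinv @ dR[j]))
                     for j in range(len(theta))] for i in range(len(theta))])
    return np.linalg.inv(fim)

# Illustrative parameters (normalized units): SNR = 1, narrow spectrum.
theta = (1.0, 0.1, 0.03, 1.0)
print(np.sqrt(np.diag(crb(theta))).round(4))   # lower bounds on the standard deviations
```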
A Web-based interface to calculate phonotactic probability for words and nonwords in English
VITEVITCH, MICHAEL S.; LUCE, PAUL A.
2008-01-01
Phonotactic probability refers to the frequency with which phonological segments and sequences of phonological segments occur in words in a given language. We describe one method of estimating phonotactic probabilities based on words in American English. These estimates of phonotactic probability have been used in a number of previous studies and are now being made available to other researchers via a Web-based interface. Instructions for using the interface, as well as details regarding how the measures were derived, are provided in the present article. The Phonotactic Probability Calculator can be accessed at http://www.people.ku.edu/~mvitevit/PhonoProbHome.html. PMID:15641436
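A stripped-down, unweighted version of the two measures is sketched below on a toy lexicon; the published calculator instead uses position-specific counts weighted by log word frequency from an American English dictionary.

```python
from collections import defaultdict

# Toy lexicon in a one-character-per-phoneme transcription (illustrative only).
lexicon = ["kat", "kad", "bat", "bad", "tak", "dak"]

pos_counts = defaultdict(lambda: defaultdict(int))    # position -> segment -> count
pos_totals = defaultdict(int)
bi_counts = defaultdict(lambda: defaultdict(int))     # position -> biphone -> count
bi_totals = defaultdict(int)

for word in lexicon:
    for i, seg in enumerate(word):
        pos_counts[i][seg] += 1
        pos_totals[i] += 1
    for i in range(len(word) - 1):
        bi_counts[i][word[i:i + 2]] += 1
        bi_totals[i] += 1

def phonotactic_probability(word):
    """Sum of position-specific segment probabilities and of biphone
    probabilities (unweighted; the published measures weight by log word
    frequency)."""
    seg_p = sum(pos_counts[i][s] / pos_totals[i] for i, s in enumerate(word))
    bi_p = sum(bi_counts[i][word[i:i + 2]] / bi_totals[i]
               for i in range(len(word) - 1))
    return seg_p, bi_p

print(phonotactic_probability("kat"))   # a word
print(phonotactic_probability("dat"))   # a nonword
```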
NASA Technical Reports Server (NTRS)
Padfield, G. D.; Duval, R. K.
1982-01-01
A set of results on rotorcraft system identification is described. Flight measurements collected on an experimental Puma helicopter are reviewed and some notable characteristics highlighted. Following a brief review of previous work in rotorcraft system identification, the results of state estimation and model structure estimation processes applied to the Puma data are presented. The results, which were obtained using NASA-developed software, are compared with theoretical predictions of roll, yaw, and pitching moment derivatives for a 6-degree-of-freedom model structure. Anomalies are reported. The theoretical methods used are described. A framework for reduced-order modelling is outlined.
Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L.
2013-01-01
Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays. PMID:23737753
Cadilhac, Dominique A; Carter, Rob; Thrift, Amanda G; Dewey, Helen M
2009-03-01
Stroke is associated with considerable societal costs. Cost-of-illness studies have been undertaken to estimate lifetime costs, most incorporating data up to 12 months after stroke. Costs of stroke, incorporating data collected up to 12 months, have previously been reported from the North East Melbourne Stroke Incidence Study (NEMESIS). NEMESIS now has patient-level resource use data for 5 years. We aimed to recalculate the long-term resource utilization of first-ever stroke patients and compare these with previous estimates obtained using data collected to 12 months. Population structure, life expectancy, and unit prices within the original cost-of-illness models were updated from 1997 to 2004. New Australian stroke survival and recurrence data up to 10 years were incorporated, as well as cross-sectional resource utilization data at 3, 4, and 5 years from NEMESIS. To enable comparisons, 1997 costs were inflated to 2004 prices and discounting was standardized. For 2004, an estimated 27 291 first-ever ischemic stroke (IS) events and 4291 first-ever intracerebral hemorrhagic stroke (ICH) events occurred. Average annual resource use after 12 months was AU$6022 for IS and AU$3977 for ICH. This is greater than the 1997 estimates for IS (AU$4848) and less than those for ICH (previously AU$10 692). The recalculated average lifetime costs per first-ever case differed for IS (AU$57 106 versus AU$52 855 [1997]), but differed more for ICH (AU$49 995 versus AU$92 308 [1997]). Basing lifetime cost estimates on short-term data overestimated the costs for ICH and underestimated those for IS. Patterns of resource use varied by stroke subtype and, overall, the societal cost impact was large.
Water Vapor Winds and Their Application to Climate Change Studies
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Lerner, Jeffrey A.
2000-01-01
The retrieval of satellite-derived winds and moisture from geostationary water vapor imagery has matured to the point where it may be applied to better understand longer-term climate changes in ways that were previously not possible using conventional measurements or model analyses in data-sparse regions. In this paper, upper-tropospheric circulation features and moisture transport covering ENSO periods are presented and discussed. Precursors and other detectable interannual climate change signals are analyzed and compared to model-diagnosed features. Estimates of winds and humidity over data-rich regions are used to show the robustness of the data and its value over regions that have previously eluded measurement.
NASA Astrophysics Data System (ADS)
Chi, Wu-Cheng
2016-04-01
A bottom-simulating reflector (BSR), representing the base of the gas hydrate stability zone, can be used to estimate geothermal gradients beneath the seafloor. However, to derive temperature estimates at the BSR, the correct hydrate composition is needed to calculate the phase boundary. Here we applied the method of Minshull and Keddie to constrain the hydrate composition and the pore fluid salinity. We used a 3D seismic dataset offshore SW Taiwan to test the method. Unlike previous studies, we considered 3D topographic effects using finite element modelling as well as depth-dependent thermal conductivity. Using a pore water salinity of 2% at the BSR depth, as found in nearby core samples, we successfully used a 99% methane and 1% ethane gas hydrate phase boundary to derive a sub-bottom depth vs. temperature plot that is consistent with the seafloor temperature from in-situ measurements. The results are also consistent with geochemical analyses of the pore fluids. The derived regional geothermal gradient is 40.1 °C/km, which is similar to the 40 °C/km used in the 3D finite element modelling in this study. This study is among the first documented successful uses of Minshull and Keddie's method to constrain seafloor gas hydrate composition.
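The core arithmetic linking the BSR to a geothermal gradient is simple once the phase boundary fixes the temperature at the BSR; the sketch below illustrates it with placeholder numbers and ignores the 3D topographic and depth-dependent-conductivity corrections applied in the study.

```python
# Back-of-envelope geothermal gradient from a BSR: once the hydrate phase
# boundary gives the temperature at the BSR, the gradient is simply
# (T_BSR - T_seafloor) / BSR sub-bottom depth.  Values are illustrative.
t_seafloor_c = 4.0        # seafloor temperature (deg C), from in-situ measurement
t_bsr_c = 20.0            # temperature at the BSR from the phase boundary (deg C)
bsr_depth_m = 400.0       # sub-bottom depth of the BSR (m)

gradient_c_per_km = (t_bsr_c - t_seafloor_c) / (bsr_depth_m / 1000.0)
print(f"geothermal gradient ~ {gradient_c_per_km:.1f} degC/km")
```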
Lina, Ioan A; Lauer, Amanda M
2013-04-01
The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks constrain the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency was obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
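Turning notched-noise thresholds into a filter bandwidth is conventionally done with the power-spectrum model and a rounded-exponential (roex) filter; the sketch below is an assumed implementation of that standard fit, not the paper's code, with illustrative threshold values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-spectrum model with a symmetric roex(p) filter: the noise power passing
# the filter for a notch of normalized half-width g is proportional to
# (g + 2/p) * exp(-p * g), so masked threshold in dB follows
#   T(g) = K + 10*log10((g + 2/p) * exp(-p * g)).
def roex_threshold_db(g, p, K):
    return K + 10.0 * np.log10((g + 2.0 / p) * np.exp(-p * g))

fc = 8000.0                                     # probe frequency (Hz)
notch_g = np.array([0.0, 0.1, 0.2, 0.3, 0.4])   # notch half-width / fc
thresh_db = np.array([56.0, 52.0, 47.0, 42.0, 36.0])  # illustrative ABR masked thresholds

(p_est, K_est), _ = curve_fit(roex_threshold_db, notch_g, thresh_db,
                              p0=(10.0, 60.0), bounds=([1.0, 0.0], [100.0, 120.0]))
erb_hz = 4.0 * fc / p_est                       # equivalent rectangular bandwidth
print(f"p = {p_est:.1f}, ERB ~ {erb_hz:.0f} Hz")
```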
Rankin, D; Ellis, S M; Macintyre, U E; Hanekom, S M; Wright, H H
2011-08-01
The objective of this study is to determine the relative validity of reported energy intake (EI) derived from multiple 24-h recalls against estimated energy expenditure (EE(est)). Basal metabolic rate (BMR) equations and physical activity factors were incorporated to calculate EE(est). This analysis was nested in the multidisciplinary PhysicaL Activity in the Young study with a prospective study design. Peri-urban black South African adolescents were investigated in a subsample of 131 learners (87 girls and 44 boys), drawn from the parent study sample of 369 (211 girls and 158 boys), who had all measurements taken. Pearson correlation coefficients and Bland-Altman plots were calculated to identify the most accurate published equations to estimate BMR (P < 0.05 considered statistically significant). EE(est) was estimated using BMR equations and physical activity factors derived from Previous Day Physical Activity Recall questionnaires. After calculation of EE(est), the relative validity of reported energy intake (EI(rep)) derived from multiple 24-h recalls was tested for three data subsets using Pearson correlation coefficients. Goldberg's formula identified cut points (CPs) for under- and over-reporting of EI. Pearson correlation coefficients between calculated BMRs ranged from 0.97 to 0.99. Bland-Altman analyses showed acceptable agreement (two equations for each gender). One equation for each gender was used to calculate EE(est). Pearson correlation coefficients between EI(rep) and EE(est) for the three data sets were weak, indicating poor agreement. CPs for physical activity groups indicated under-reporting in 87% of boys and 95% of girls. The 24-h recalls, administered on five occasions over 2 years, therefore showed poor validity of EI(rep) against EE(est).
Stellar Parameters for Trappist-1
NASA Astrophysics Data System (ADS)
Van Grootel, Valérie; Fernandes, Catarina S.; Gillon, Michael; Jehin, Emmanuel; Manfroid, Jean; Scuflaire, Richard; Burgasser, Adam J.; Barkaoui, Khalid; Benkhaldoun, Zouhair; Burdanov, Artem; Delrez, Laetitia; Demory, Brice-Olivier; de Wit, Julien; Queloz, Didier; Triaud, Amaury H. M. J.
2018-01-01
TRAPPIST-1 is an ultracool dwarf star transited by seven Earth-sized planets, for which thorough characterization of atmospheric properties, surface conditions encompassing habitability, and internal compositions is possible with current and next-generation telescopes. Accurate modeling of the star is essential to achieve this goal. We aim to obtain updated stellar parameters for TRAPPIST-1 based on new measurements and evolutionary models, compared to those used in discovery studies. We present a new measurement for the parallax of TRAPPIST-1, 82.4 ± 0.8 mas, based on 188 epochs of observations with the TRAPPIST and Liverpool Telescopes from 2013 to 2016. This revised parallax yields an updated luminosity of L* = (5.22 ± 0.19) × 10^-4 L⊙, which is very close to the previous estimate but almost two times more precise. We next present an updated estimate for the TRAPPIST-1 stellar mass, based on two approaches: mass from stellar evolution modeling, and empirical mass derived from dynamical masses of equivalently classified ultracool dwarfs in astrometric binaries. We combine them using a Monte-Carlo approach to derive a semi-empirical estimate for the mass of TRAPPIST-1. We also derive an estimate of the radius by combining this mass with the stellar density inferred from transits, as well as an estimate of the effective temperature from our revised luminosity and radius. Our final results are M* = 0.089 ± 0.006 M⊙, R* = 0.121 ± 0.003 R⊙, and Teff = 2516 ± 41 K. Considering the degree to which the TRAPPIST-1 system will be scrutinized in coming years, these revised and more precise stellar parameters should be considered when assessing the properties of the TRAPPIST-1 planets.
Estimating soil matric potential in Owens Valley, California
Sorenson, Stephen K.; Miller, R.F.; Welch, M.R.; Groeneveld, D.P.; Branson, F.A.
1988-01-01
Much of the floor of the Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence depends partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on physiological adaptations of the plants and on the water content-matric potential characteristics of the soils. Two methods were used to estimate soil matric potential at test sites in Owens Valley. The first was the filter-paper method, which uses the water content of filter papers equilibrated to the water content of soil samples taken with a hand auger. The other method of estimating soil matric potential was a modeling approach based on data from this and previous investigations. These data indicate that the base-10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1-m depth intervals, obtained using the hand auger and filter-paper method, and entering these values into the soil water model. The characteristic curves then were used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings. Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the value derived using the filter-paper method could be obtained 90 to 95% of the time in soils where water content was less than field capacity. The greatest errors occurred at depths where there was a distinct transition between soils of different textures.
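A minimal sketch of the log-linear soil-water model described above: fit log10(matric potential) against gravimetric water content from paired filter-paper/auger measurements, then use the fitted line to convert water contents to matric potential (conversion of neutron-probe volumetric water content to gravimetric via bulk density is omitted); all numbers are placeholders.

```python
import numpy as np

# log10(matric potential) assumed linear in gravimetric water content for a given soil.
theta_g = np.array([0.05, 0.08, 0.12, 0.16, 0.20])   # gravimetric water content (g/g), placeholder
pf      = np.array([4.2, 3.8, 3.2, 2.7, 2.2])        # log10(matric potential), pF units, placeholder

slope, intercept = np.polyfit(theta_g, pf, 1)

def matric_potential_pf(theta):
    """Estimate pF from gravimetric water content using the fitted line."""
    return slope * theta + intercept

print(f"{matric_potential_pf(0.10):.2f} pF at theta = 0.10")   # ~3.5 pF for this placeholder soil
```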
Mapping of the DLQI scores to EQ-5D utility values using ordinal logistic regression.
Ali, Faraz Mahmood; Kay, Richard; Finlay, Andrew Y; Piguet, Vincent; Kupfer, Joerg; Dalgard, Florence; Salek, M Sam
2017-11-01
The Dermatology Life Quality Index (DLQI) and the European Quality of Life-5 Dimension (EQ-5D) are separate measures that may be used to gather health-related quality of life (HRQoL) information from patients. The EQ-5D is a generic measure from which health utility estimates can be derived, whereas the DLQI is a specialty-specific measure to assess HRQoL. To reduce the burden of administering multiple measures and to enable a more disease-specific calculation of health utility estimates, we explored an established mathematical technique, ordinal logistic regression (OLR), to develop an appropriate model to map DLQI data to EQ-5D-based health utility estimates. Retrospective data from 4010 patients were randomly divided five times into two groups for the derivation and testing of the mapping model. Split-half cross-validation was utilized, resulting in a total of ten ordinal logistic regression models for each of the five EQ-5D dimensions against age, sex, and all ten items of the DLQI. Using Monte Carlo simulation, predicted health utility estimates were derived and compared against those observed. This method was repeated for both OLR and a previously tested mapping methodology based on linear regression. The model was shown to be highly predictive, and repeated fitting demonstrated a stable model for OLR as well as for linear regression. The mean differences between OLR-predicted and observed health utility estimates ranged from 0.0024 to 0.0239 across the ten modeling exercises, with an average overall difference of 0.0120 (a 1.6% underestimate, not of clinical importance). The modeling framework developed in this study will enable researchers to calculate EQ-5D health utility estimates from a specialty-specific study population, reducing patient and economic burden.
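A minimal sketch of the Monte Carlo step only: given per-dimension response probabilities such as the fitted ordinal models would produce, sample EQ-5D states and average their values. The probabilities and the per-level decrements standing in for a real EQ-5D tariff are placeholders, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-dimension probabilities of responding level 1/2/3 for one patient,
# as would come from the fitted ordinal logistic models (placeholders).
p_levels = {
    "mobility":   [0.80, 0.15, 0.05],
    "self_care":  [0.90, 0.08, 0.02],
    "usual_act":  [0.70, 0.25, 0.05],
    "pain":       [0.50, 0.40, 0.10],
    "anxiety":    [0.60, 0.30, 0.10],
}

# Placeholder decrements per dimension/level standing in for a real EQ-5D value set.
decrement = {dim: [0.0, 0.08, 0.25] for dim in p_levels}

def sample_utility():
    u = 1.0
    for dim, probs in p_levels.items():
        level = rng.choice(3, p=probs)        # sampled response level (0-based)
        u -= decrement[dim][level]
    return u

utilities = [sample_utility() for _ in range(10000)]
print(f"predicted utility ~ {np.mean(utilities):.3f}")
```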
Xu, Lingyu; Xu, Yuancheng; Coulden, Richard; Sonnex, Emer; Hrybouski, Stanislau; Paterson, Ian; Butler, Craig
2018-05-11
Epicardial adipose tissue (EAT) volume derived from contrast-enhanced (CE) computed tomography (CT) scans is not well validated. We aimed to establish a reliable threshold to accurately quantify EAT volume from CE datasets. We analyzed EAT volume on paired non-contrast (NC) and CE datasets from 25 patients to derive appropriate Hounsfield unit (HU) cutpoints that equalize the two EAT volume estimates. The gold-standard threshold (-190 HU, -30 HU) was used to assess EAT volume on NC datasets. For CE datasets, EAT volumes were estimated using three previously reported thresholds, (-190 HU, -30 HU), (-190 HU, -15 HU), and (-175 HU, -15 HU), and were analyzed with semi-automated 3D fat analysis software. Subsequently, we applied a threshold correction to (-190 HU, -30 HU) based on mean differences in radiodensity between NC and CE images (ΔEATrd = CE radiodensity - NC radiodensity). We then validated our findings on the EAT threshold in 21 additional patients with paired CT datasets. EAT volume from CE datasets using previously published thresholds consistently underestimated the NC-derived reference EAT volume by 8.2%-19.1%. Using our corrected threshold (-190 HU, -3 HU) in CE datasets yielded statistically identical EAT volume to NC EAT volume in the validation cohort (186.1 ± 80.3 vs. 185.5 ± 80.1 cm3, Δ = 0.6 cm3, 0.3%, p = 0.374). Estimating EAT volume from contrast-enhanced CT scans using a corrected threshold of (-190 HU, -3 HU) provided excellent agreement with EAT volume from non-contrast CT scans using the standard threshold of (-190 HU, -30 HU). Copyright © 2018. Published by Elsevier B.V.
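A minimal sketch of the thresholding step, assuming the pericardial region has already been segmented: count voxels inside the HU window and scale by voxel volume. The array sizes and voxel dimensions are hypothetical.

```python
import numpy as np

def eat_volume_cm3(hu, pericardial_mask, voxel_volume_mm3, lo=-190, hi=-3):
    """Estimate EAT volume by counting voxels whose attenuation falls inside the
    fat window [lo, hi] HU within a previously segmented pericardial region.
    (-190, -3) HU is the corrected window proposed for contrast-enhanced scans;
    (-190, -30) HU is the standard non-contrast window."""
    fat = (hu >= lo) & (hu <= hi) & pericardial_mask
    return fat.sum() * voxel_volume_mm3 / 1000.0   # mm^3 -> cm^3

# Hypothetical toy volume: 40 x 100 x 100 voxels of 0.7 x 0.7 x 2.5 mm
hu = np.random.default_rng(0).integers(-300, 200, size=(40, 100, 100))
mask = np.zeros_like(hu, dtype=bool)
mask[10:30, 30:70, 30:70] = True
print(f"{eat_volume_cm3(hu, mask, 0.7 * 0.7 * 2.5):.1f} cm3")
```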
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit is employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values, with correlation coefficients of 60-90% and 67-90% for the IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. An error table specific to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method performs better and is more appropriate for estimating the actual errors of ocean-color-derived products than previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the derivation used.
NASA Astrophysics Data System (ADS)
Greaves, Heather E.
Climate change is disproportionately affecting high northern latitudes, and the extreme temperatures, remoteness, and sheer size of the Arctic tundra biome have always posed challenges that make application of remote sensing technology especially appropriate. Advances in high-resolution remote sensing continually improve our ability to measure characteristics of tundra vegetation communities, which have been difficult to characterize previously due to their low stature and their distribution in complex, heterogeneous patches across large landscapes. In this work, I apply terrestrial lidar, airborne lidar, and high-resolution airborne multispectral imagery to estimate tundra vegetation characteristics for a research area near Toolik Lake, Alaska. Initially, I explored methods for estimating shrub biomass from terrestrial lidar point clouds, finding that a canopy-volume based algorithm performed best. Although shrub biomass estimates derived from airborne lidar data were less accurate than those from terrestrial lidar data, algorithm parameters used to derive biomass estimates were similar for both datasets. Additionally, I found that airborne lidar-based shrub biomass estimates were just as accurate whether calibrated against terrestrial lidar data or harvested shrub biomass--suggesting that terrestrial lidar potentially could replace destructive biomass harvest. Along with smoothed Normalized Difference Vegetation Index (NDVI) derived from airborne imagery, airborne lidar-derived canopy volume was an important predictor in a Random Forest model trained to estimate shrub biomass across the 12.5 km2 covered by our lidar and imagery data. The resulting 0.80 m resolution shrub biomass maps should provide important benchmarks for change detection in the Toolik area, especially as deciduous shrubs continue to expand in tundra regions. Finally, I applied 33 lidar- and imagery-derived predictor layers in a validated Random Forest modeling approach to map vegetation community distribution at 20 cm resolution across the data collection area, creating maps that will enable validation of coarser maps, as well as study of fine-scale ecological processes in the area. These projects have pushed the limits of what can be accomplished for vegetation mapping using airborne remote sensing in a challenging but important region; it is my hope that the methods explored here will illuminate potential paths forward as landscapes and technologies inevitably continue to change.
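A minimal sketch, not the dissertation's workflow, of a Random Forest biomass model using two of the predictors named above (lidar-derived canopy volume and smoothed NDVI); the training data are simulated placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder training table: lidar-derived canopy volume and smoothed NDVI per grid
# cell, with shrub biomass from terrestrial lidar (or harvest) as the target.
n = 500
canopy_volume = rng.gamma(2.0, 0.5, n)                      # m^3 per cell (hypothetical)
ndvi = np.clip(rng.normal(0.5, 0.15, n), 0.0, 1.0)
biomass = 120 * canopy_volume + 300 * ndvi + rng.normal(0, 30, n)   # g per cell (hypothetical)

X = np.column_stack([canopy_volume, ndvi])
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, biomass)

# Predict biomass for new cells of a wall-to-wall raster of the same two predictors.
new_cells = np.column_stack([rng.gamma(2.0, 0.5, 4),
                             np.clip(rng.normal(0.5, 0.15, 4), 0.0, 1.0)])
print(model.predict(new_cells))
```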
Development of a 2001 National Land Cover Database for the United States
Homer, Collin G.; Huang, Chengquan; Yang, Limin; Wylie, Bruce K.; Coan, Michael
2004-01-01
Multi-Resolution Land Characterization 2001 (MRLC 2001) is a second-generation Federal consortium designed to create an updated pool of nation-wide Landsat 5 and 7 imagery and derive a second-generation National Land Cover Database (NLCD 2001). The objectives of this multi-layer, multi-source database are twofold: first, to provide consistent land cover for all 50 States, and second, to provide a data framework which allows flexibility in developing and applying each independent data component to a wide variety of other applications. Components in the database include the following: (1) normalized imagery for three time periods per path/row, (2) ancillary data, including a 30-m Digital Elevation Model (DEM) from which slope, aspect, and slope position were derived, (3) per-pixel estimates of percent imperviousness and percent tree canopy, (4) 29 classes of land cover data derived from the imagery, ancillary data, and derivatives, (5) classification rules, confidence estimates, and metadata from the land cover classification. This database is now being developed using a Mapping Zone approach, with 66 Zones in the continental United States and 23 Zones in Alaska. Results from three initial mapping Zones show single-pixel land cover accuracies ranging from 73 to 77 percent, imperviousness accuracies ranging from 83 to 91 percent, tree canopy accuracies ranging from 78 to 93 percent, and an estimated 50 percent increase in mapping efficiency over previous methods. The database has now entered the production phase and is being created using extensive partnering in the Federal government with planned completion by 2006.
Abundant carbon in the mantle beneath Hawai`i
NASA Astrophysics Data System (ADS)
Anderson, Kyle R.; Poland, Michael P.
2017-09-01
Estimates of carbon concentrations in Earth's mantle vary over more than an order of magnitude, hindering our ability to understand mantle structure and mineralogy, partial melting, and the carbon cycle. CO2 concentrations in mantle-derived magmas supplying hotspot ocean island volcanoes yield our most direct constraints on mantle carbon, but are extensively modified by degassing during ascent. Here we show that undegassed magmatic and mantle carbon concentrations may be estimated in a Bayesian framework using diverse geologic information at an ocean island volcano. Our CO2 concentration estimates do not rely upon complex degassing models, geochemical tracer elements, assumed magma supply rates, or rare undegassed rock samples. Rather, we couple volcanic CO2 emission rates with probabilistic magma supply rates, which are obtained indirectly from magma storage and eruption rates. We estimate that the CO2 content of mantle-derived magma supplying Hawai`i's active volcanoes is 0.97 (+0.25/-0.19) wt%, roughly 40% higher than previously believed, and is supplied from a mantle source region with a carbon concentration of 263 (+81/-62) ppm. Our results suggest that mantle plumes and ocean island basalts are carbon-rich. Our data also shed light on helium isotope abundances and CO2/Nb ratios, and may imply higher CO2 emission rates from ocean island volcanoes.
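A simplified mass-balance sketch of the underlying idea, not the paper's Bayesian model: dividing the CO2 emission rate by the magma mass supply rate approximates the undegassed melt CO2 content, and an assumed melt fraction scales that to a source-region carbon concentration. All distributions and the melt fraction below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100000
co2_flux = rng.normal(9.0e6, 1.5e6, n)           # kg/day CO2 emitted (illustrative)
supply_km3_yr = rng.normal(0.13, 0.02, n)        # magma supply (km^3/yr, illustrative)
magma_flux = supply_km3_yr * 1e9 * 2700 / 365.25 # kg/day, assuming 2700 kg/m^3 melt density

co2_wt_pct = 100 * co2_flux / magma_flux         # undegassed melt CO2 content
melt_fraction = rng.uniform(0.05, 0.15, n)       # assumed, purely illustrative
carbon_source_ppm = co2_wt_pct * 1e4 * (12.0 / 44.0) * melt_fraction  # CO2 wt% -> C ppm in source

print("melt CO2 (16/50/84 pct):", np.percentile(co2_wt_pct, [16, 50, 84]).round(2), "wt%")
print("source C  (16/50/84 pct):", np.percentile(carbon_source_ppm, [16, 50, 84]).round(0), "ppm")
```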
NASA Technical Reports Server (NTRS)
Jentz, R. R.; Wackerman, C. C.; Shuchman, R. A.; Onstott, R. G.; Gloersen, Per; Cavalieri, Don; Ramseier, Rene; Rubinstein, Irene; Comiso, Joey; Hollinger, James
1991-01-01
Previous research studies have focused on producing algorithms for extracting geophysical information from passive microwave data regarding ice floe size, sea ice concentration, open water lead locations, and sea ice extent. These studies have resulted in four separate algorithms for extracting these geophysical parameters. Sea ice concentration estimates generated from each of these algorithms (i.e., NASA/Team, NASA/Comiso, AES/York, and Navy) are compared to ice concentration estimates produced from coincident high-resolution synthetic aperture radar (SAR) data. The SAR concentration estimates are produced from data collected in the Beaufort Sea and the Greenland Sea in March 1988 and March 1989, respectively. The SAR data are coincident with the passive microwave data generated by the Special Sensor Microwave/Imager (SSM/I).
On Using Exponential Parameter Estimators with an Adaptive Controller
NASA Technical Reports Server (NTRS)
Patre, Parag; Joshi, Suresh M.
2011-01-01
Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.
NASA Technical Reports Server (NTRS)
Currit, P. A.
1983-01-01
The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
Observers Exploit Stochastic Models of Sensory Change to Help Judge the Passage of Time
Ahrens, Misha B.; Sahani, Maneesh
2011-01-01
Summary Sensory stimulation can systematically bias the perceived passage of time [1–5], but why and how this happens is mysterious. In this report, we provide evidence that such biases may ultimately derive from an innate and adaptive use of stochastically evolving dynamic stimuli to help refine estimates derived from internal timekeeping mechanisms [6–15]. A simplified statistical model based on probabilistic expectations of stimulus change derived from the second-order temporal statistics of the natural environment [16, 17] makes three predictions. First, random noise-like stimuli whose statistics violate natural expectations should induce timing bias. Second, a previously unexplored obverse of this effect is that similar noise stimuli with natural statistics should reduce the variability of timing estimates. Finally, this reduction in variability should scale with the interval being timed, so as to preserve the overall Weber law of interval timing. All three predictions are borne out experimentally. Thus, in the context of our novel theoretical framework, these results suggest that observers routinely rely on sensory input to augment their sense of the passage of time, through a process of Bayesian inference based on expectations of change in the natural environment. PMID:21256018
Prediction future asset price which is non-concordant with the historical distribution
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah
2015-12-01
This paper attempts to predict the major characteristics of the future asset price which is non-concordant with the distribution estimated from the price today and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are the length of the interval between the occurrence time of the previous non-concordant asset price and that of the present non-concordant asset price, the indicator which denotes that the non-concordant price is extremely small or large by its values -1 and 1 respectively, and the degree of non-concordance given by the negative logarithm of the probability of the left tail or right tail of which one of the end points is given by the observed future price. The vector of three major characteristics of the next non-concordant price is modelled to be dependent on the vectors corresponding to the present and l - 1 previous non-concordant prices via a 3-dimensional conditional distribution which is derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution for each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the value of the j-th characteristics of the next non-concordant price. Meanwhile, the 100(α/2) % and 100(1 - α/2) % points of the j-th marginal distribution can be used to form a prediction interval for the j-th characteristic of the next non-concordant price. The performance measures of the above estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory. Thus the incorporation of the distribution of the characteristics of the next non-concordant price in the model for asset price has a good potential of yielding a more realistic model.
Exploring point-cloud features from partial body views for gender classification
NASA Astrophysics Data System (ADS)
Fouts, Aaron; McCoppin, Ryan; Rizki, Mateen; Tamburino, Louis; Mendoza-Schrock, Olga
2012-06-01
In this paper we extend a previous exploration of histogram features extracted from 3D point cloud images of human subjects for gender discrimination. Feature extraction used a collection of concentric cylinders to define volumes for counting 3D points. The histogram features are characterized by a rotational axis and a selected set of volumes derived from the concentric cylinders. The point cloud images are drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high resolution LIDAR whole body scans of carefully posed human subjects. Success from our previous investigation was based on extracting features from full body coverage which required integration of multiple camera images. With the full body coverage, the central vertical body axis and orientation are readily obtainable; however, this is not the case with a one camera view providing less than one half body coverage. Assuming that the subjects are upright, we need to determine or estimate the position of the vertical axis and the orientation of the body about this axis relative to the camera. In past experiments the vertical axis was located through the center of mass of torso points projected on the ground plane and the body orientation derived using principal component analysis. In a natural extension of our previous work to partial body views, the absence of rotational invariance about the cylindrical axis greatly increases the difficulty for gender classification. Even the problem of estimating the axis is no longer simple. We describe some simple feasibility experiments that use partial image histograms. Here, the cylindrical axis is assumed to be known. We also discuss experiments with full body images that explore the sensitivity of classification accuracy relative to displacements of the cylindrical axis. Our initial results provide the basis for further investigation of more complex partial body viewing problems and new methods for estimating the two position coordinates for the axis location and the unknown body orientation angle.
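A minimal sketch of the concentric-cylinder histogram feature, with the vertical axis assumed known as in the feasibility experiments described; the point cloud and binning are hypothetical.

```python
import numpy as np

def cylinder_histogram(points, axis_xy, radii, z_bins):
    """Count points falling in concentric cylindrical shells around a vertical axis
    (assumed known), further sliced by height, and return the flattened histogram
    as a feature vector.  `points` is an (N, 3) array of x, y, z."""
    dx = points[:, 0] - axis_xy[0]
    dy = points[:, 1] - axis_xy[1]
    r = np.hypot(dx, dy)
    hist, _, _ = np.histogram2d(r, points[:, 2], bins=[radii, z_bins])
    return hist.ravel()

# Hypothetical partial-view point cloud (metres) and binning
rng = np.random.default_rng(0)
cloud = rng.normal([0.0, 0.0, 0.9], [0.15, 0.10, 0.45], size=(5000, 3))
features = cylinder_histogram(cloud, axis_xy=(0.0, 0.0),
                              radii=np.linspace(0, 0.4, 5),
                              z_bins=np.linspace(0, 1.8, 7))
print(features.shape)   # (4 radial shells) x (6 height slices) = 24 features
```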
NASA Astrophysics Data System (ADS)
Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Volkova, Elena; Kukharsky, Alexander; Uspensky, Sergey
2015-04-01
To date, physical-mathematical modeling of land surface-atmosphere interaction processes is considered to be the most appropriate tool for obtaining reliable estimates of the water and heat balance components of large territories. The model of these processes (Land Surface Model, LSM) developed for the vegetation period is intended for simulating soil water content W, evapotranspiration Ev, vertical latent heat (LE) and sensible heat fluxes from the land surface, as well as vertically distributed soil temperature and moisture, soil surface temperature Tg, foliage temperature Tf, and land surface skin temperature (LST) Ts. The model is suitable for utilizing remote sensing data on the land surface and meteorological conditions. In this study, these data have been obtained from measurements by the scanning radiometers AVHRR/NOAA, MODIS/EOS Terra and Aqua, and SEVIRI aboard the geostationary satellites Meteosat-9 and -10 (MSG-2, -3). The heterogeneity of the land surface and meteorological conditions has been taken into account in the model by using soil and vegetation characteristics as parameters and meteorological characteristics as input variables. Values of these characteristics have been determined from ground observations and remote sensing information. AVHRR data have been used to build estimates of effective land surface temperature (LST) Ts.eff and emissivity E, vegetation-air temperature (temperature at the vegetation level) Ta, normalized difference vegetation index NDVI, vegetation cover fraction B, leaf area index LAI, and precipitation. From MODIS data the values of LST Tls, E, NDVI, and LAI have been derived. From SEVIRI data, Tls, E, Ta, NDVI, LAI, and precipitation have been retrieved. All named retrievals covered the vast territory of the agricultural Central Black Earth Region located in the steppe-forest zone of European Russia. This territory, with coordinates 49°30'-54°N, 31°-43°E and a total area of 227,300 km2, has been chosen for investigation, carried out for the 2009-2013 vegetation seasons. To provide the retrieval of Ts.eff, E, Ta, NDVI, B, and LAI, the previously developed technologies of AVHRR data processing have been refined and adapted to the region of interest. The updated linear regression estimators for Ts.eff and Ta have been built using representative training samples compiled for the above vegetation seasons. The updated software package has been applied for AVHRR data processing to generate estimates of the named values. To verify the accuracy of these estimates, the error statistics of Ts.eff and Ta derivation have been investigated for various days of the named seasons through comparison with in-situ ground-based measurements. On the basis of a special technology and Internet resources, the remote sensing products Tls, E, NDVI, and LAI derived from MODIS data and covering the study area have been extracted from the LP DAAC web-site for the same vegetation seasons. The reliability of the MODIS-derived Tls estimates has been confirmed via comparison with analogous and collocated ground-, AVHRR-, and SEVIRI-based ones. The prepared remote sensing dataset has also included the SEVIRI-derived estimates of Tls, E, NDVI, and Ta at daylight and night-time and daily estimates of LAI. The Tls estimates have been built utilizing the method and technology developed for the retrieval of Tls and E from 15-minute-interval SEVIRI data in the IR channels 10.8 and 12.0 µm (classified as 100% cloud-free and covering the area of interest) at three successive times without accurate a priori knowledge of E.
Comparison of the SEVIRI-based Tls retrievals with independent collocated Tls estimates generated at the Land Surface Analysis Satellite Applications Facility (LSA SAF, Lisbon, Portugal) has given daily- or monthly-averaged RMS deviations of about 2°C for various dates and months during the mentioned vegetation seasons, which is a quite acceptable result. The reliability of the SEVIRI-based Tls estimates for the study area has also been confirmed by comparison with AVHRR- and MODIS-derived LST estimates for the same seasons. The SEVIRI-derived values of Ta, treated as the temperature of the vegetation cover, have been obtained using the Tls estimates and a previously established multiple linear regression relationship between Tls and Ta that accounts for solar zenith angle and land elevation. A comparison with collocated ground-based Ta observations has given RMS errors of 2.5°C and lower, which can be treated as proof of the proposed technique's functionality. SEVIRI-derived LAI estimates have been retrieved at LSA SAF from measurements by this sensor in channels 0.6, 0.8, and 1.6 μm under cloud-free conditions; using data in the 1.6 μm channel increased the accuracy of these estimates. In the study the AVHRR- and SEVIRI-derived estimates of daily and monthly precipitation sums for the territory under investigation for the 2009-2013 vegetation seasons have also been used. These estimates have been obtained by the improved integrated Multi Threshold Method (MTM), which provides detection and identification of cloud types around the clock throughout the year, as well as identification of precipitation zones and determination of the instantaneous maximum precipitation intensity within a pixel, using the measurements in different channels of the named sensors as predictors. Validation of the MTM has been performed by comparing the daily and monthly precipitation sums with corresponding values resulting from ground-based observations at the meteorological stations of the region. The probability that precipitation zones detected from satellite data correspond to the actual ones amounted to 70-80%. AVHRR- and SEVIRI-derived daily and monthly precipitation sums have been in reasonable agreement with each other and with results of ground-based observations, although they are smoother than the latter values. Discrepancies have been noted only for local maxima, for which satellite-based estimates of precipitation have been much less than ground-based ones. This may be due to the different spatial scales of areal satellite-derived and point ground-based estimates. To utilize satellite-derived vegetation and meteorological characteristics in the model, special procedures have been developed, including: - replacement of ground-based LAI and B estimates used as model parameters by their satellite-derived estimates from AVHRR, MODIS and SEVIRI data.
Correctness of such replacement has been confirmed by comparing the time behavior of LAI over the vegetation period, as well as modeled and measured values of evapotranspiration Ev and soil moisture content W; - entering AVHRR-, MODIS- and SEVIRI-derived estimates of Ts.eff, Tls, and Ta into the model as input variables instead of ground-measured values, with verification of the adequacy of model operation under such a change through comparison of calculated and measured values of W and Ev; - inputting satellite-derived estimates of precipitation during the vegetation period, retrieved from AVHRR and SEVIRI data using the MTM, into the model as input variables. In developing this procedure, algorithms and programs have been created to transition from assessment of rainfall intensity to evaluation of its daily values. The implementation of such a transition requires controlling the correctness of the estimates built at each time step. This control includes comparison of areal distributions of three-hour, daily, and monthly precipitation amounts obtained from satellite data and calculated by interpolation of standard network observation data; - taking into account the spatial heterogeneity of the fields of satellite AVHRR-, MODIS- and SEVIRI-derived estimates of LAI, B, LST, and precipitation. This has involved the development of algorithms and software for entering the values of all named characteristics into the model at each computational grid node. Values of evapotranspiration Ev, soil water content W, vertical latent and sensible heat fluxes, and other water and heat balance components, as well as land surface temperature and moisture, distributed over the territory of interest, have resulted from the model calculations for the 2009-2013 vegetation seasons. These calculations have been carried out utilizing satellite-derived estimates of the vegetation characteristics, LST, and precipitation. The Ev and W calculation errors have not exceeded the standard values.
Subtitle-Based Word Frequencies as the Best Estimate of Reading Behavior: The Case of Greek
Dimitropoulou, Maria; Duñabeitia, Jon Andoni; Avilés, Alberto; Corral, José; Carreiras, Manuel
2010-01-01
Previous evidence has shown that word frequencies calculated from corpora based on film and television subtitles can readily account for reading performance, since the language used in subtitles greatly approximates everyday language. The present study examines this issue in a society with increased exposure to subtitle reading. We compiled SUBTLEX-GR, a subtitle-based corpus consisting of more than 27 million Modern Greek words, and tested to what extent subtitle-based frequency estimates and those taken from a written corpus of Modern Greek account for the lexical decision performance of young Greek adults who are exposed to subtitle reading on a daily basis. Results showed that SUBTLEX-GR frequency estimates effectively accounted for participants’ reading performance in two different visual word recognition experiments. More importantly, different analyses showed that frequencies estimated from a subtitle corpus explained the obtained results significantly better than traditional frequencies derived from written corpora. PMID:21833273
Concentrations and Potential Health Risks of Metals in Lip Products
Liu, Sa; Rojas-Cheatham, Ann
2013-01-01
Background: Metal content in lip products has been an issue of concern. Objectives: We measured lead and eight other metals in a convenience sample of 32 lip products used by young Asian women in Oakland, California, and assessed potential health risks related to estimated intakes of these metals. Methods: We analyzed lip products by inductively coupled plasma optical emission spectrometry and used previous estimates of lip product usage rates to determine daily oral intakes. We derived acceptable daily intakes (ADIs) based on information used to determine public health goals for exposure, and compared ADIs with estimated intakes to assess potential risks. Results: Most of the tested lip products contained high concentrations of titanium and aluminum. All examined products had detectable manganese. Lead was detected in 24 products (75%), with an average concentration of 0.36 ± 0.39 ppm, including one sample with 1.32 ppm. When used at the estimated average daily rate, estimated intakes were > 20% of ADIs derived for aluminum, cadmium, chromium, and manganese. In addition, average daily use of 10 products tested would result in chromium intake exceeding our estimated ADI for chromium. For high rates of product use (above the 95th percentile), the percentages of samples with estimated metal intakes exceeding ADIs were 3% for aluminum, 68% for chromium, and 22% for manganese. Estimated intakes of lead were < 20% of ADIs for average and high use. Conclusions: Cosmetics safety should be assessed not only by the presence of hazardous contents, but also by comparing estimated exposures with health-based standards. In addition to lead, metals such as aluminum, cadmium, chromium, and manganese require further investigation. PMID:23674482
NASA Technical Reports Server (NTRS)
Leduc, S. (Principal Investigator)
1982-01-01
Models based on multiple regression were developed to estimate corn and soybean yield from weather data for agrophysical units (APU) in Iowa. The predictor variables are derived from monthly average temperature and monthly total precipitation data at meteorological stations in the cooperative network. The models are similar in form to the previous models developed for crop reporting districts (CRD). The trends and derived variables were the same, and the approach to selecting the significant predictors was similar to that used in developing the CRD models. The APUs were selected to be more homogeneous with respect to crop production than the CRDs. The APU models are quite similar to the CRD models, with similar explained variation and numbers of predictor variables. The APU models are to be independently evaluated and compared to the previously evaluated CRD models. That comparison should indicate the preferred model area for this application, i.e., APU or CRD.
Zemski, Adam J; Broad, Elizabeth M; Slater, Gary J
2018-01-01
Body composition in elite rugby union athletes is routinely assessed using surface anthropometry, which can be utilized to provide estimates of absolute body composition using regression equations. This study aims to assess the ability of available skinfold equations to estimate body composition in elite rugby union athletes who have unique physique traits and divergent ethnicity. The development of sport-specific and ethnicity-sensitive equations was also pursued. Forty-three male international Australian rugby union athletes of Caucasian and Polynesian descent underwent surface anthropometry and dual-energy X-ray absorptiometry (DXA) assessment. Body fat percent (BF%) was estimated using five previously developed equations and compared to DXA measures. Novel sport and ethnicity-sensitive prediction equations were developed using forward selection multiple regression analysis. Existing skinfold equations provided unsatisfactory estimates of BF% in elite rugby union athletes, with all equations demonstrating a 95% prediction interval in excess of 5%. The equations tended to underestimate BF% at low levels of adiposity, whilst overestimating BF% at higher levels of adiposity, regardless of ethnicity. The novel equations created explained a similar amount of variance to those previously developed (Caucasians 75%, Polynesians 90%). The use of skinfold equations, including the created equations, cannot be supported to estimate absolute body composition. Until a population-specific equation is established that can be validated to precisely estimate body composition, it is advocated to use a proven method, such as DXA, when absolute measures of lean and fat mass are desired, and raw anthropometry data routinely to derive an estimate of body composition change.
A new global 1-km dataset of percentage tree cover derived from remote sensing
DeFries, R.S.; Hansen, M.C.; Townshend, J.R.G.; Janetos, A.C.; Loveland, Thomas R.
2000-01-01
Accurate assessment of the spatial extent of forest cover is a crucial requirement for quantifying the sources and sinks of carbon from the terrestrial biosphere. In the more immediate context of the United Nations Framework Convention on Climate Change, implementation of the Kyoto Protocol calls for estimates of carbon stocks for a baseline year as well as for subsequent years. Data sources from country level statistics and other ground-based information are based on varying definitions of 'forest' and are consequently problematic for obtaining spatially and temporally consistent carbon stock estimates. By combining two datasets previously derived from the Advanced Very High Resolution Radiometer (AVHRR) at 1 km spatial resolution, we have generated a prototype global map depicting percentage tree cover and associated proportions of trees with different leaf longevity (evergreen and deciduous) and leaf type (broadleaf and needleleaf). The product is intended for use in terrestrial carbon cycle models, in conjunction with other spatial datasets such as climate and soil type, to obtain more consistent and reliable estimates of carbon stocks. The percentage tree cover dataset is available through the Global Land Cover Facility at the University of Maryland at http://glcf.umiacs.umd.edu.
Explicit error bounds for the α-quasi-periodic Helmholtz problem.
Lord, Natacha H; Mulholland, Anthony J
2013-10-01
This paper considers a finite element approach to modeling electromagnetic waves in a periodic diffraction grating. In particular, an a priori error estimate associated with the α-quasi-periodic transformation is derived. This involves the solution of the associated Helmholtz problem being written as a product of e^(iαx) and an unknown function called the α-quasi-periodic solution. To begin with, the well-posedness of the continuous problem is examined using a variational formulation. The problem is then discretized, and a rigorous a priori error estimate, which guarantees the uniqueness of this approximate solution, is derived. In previous studies, the continuity of the Dirichlet-to-Neumann map has simply been assumed and the dependency of the regularity constant on the system parameters, such as the wavenumber, has not been shown. To address this deficiency, in this paper an explicit dependence on the wavenumber and the degree of the polynomial basis in the a priori error estimate is obtained. Since the finite element method is well known for dealing with any geometries, comparison of numerical results obtained using the α-quasi-periodic transformation with a lattice sum technique is then presented.
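For reference, a standard form of the α-quasi-periodic transformation referred to above (the paper's exact statement may differ), where d is the grating period:

```latex
% Writing the Helmholtz solution as u(x,y) = e^{i\alpha x}\,u_\alpha(x,y) with
% u_\alpha periodic in x turns the quasi-periodic problem into a periodic one:
\Delta u + k^2 u = 0,\qquad u(x+d,y) = e^{i\alpha d}\,u(x,y)
\;\Longrightarrow\;
\Delta u_\alpha + 2i\alpha\,\partial_x u_\alpha + (k^2 - \alpha^2)\,u_\alpha = 0,
\qquad u_\alpha(x+d,y) = u_\alpha(x,y).
```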
Steenland, Kyle; Burnett, Carol; Lalich, Nina; Ward, Elizabeth; Hurrell, Joseph
2003-05-01
Deaths due to occupational disease and injury place a heavy burden on society in terms of economic costs and human suffering. We estimate the annual deaths due to selected diseases for which an occupational association is reasonably well established and quantifiable, by calculation of attributable fractions (AFs), with full documentation; the deaths due to occupational injury are then added to derive an estimated number of annual deaths due to occupation. Using 1997 US mortality data, the estimated annual burden of occupational disease mortality resulting from selected respiratory diseases, cancers, cardiovascular disease, chronic renal failure, and hepatitis is 49,000, with a range from 26,000 to 72,000. The Bureau of Labor Statistics estimates there are about 6,200 work-related injury deaths annually. Adding disease and injury data, we estimate that there are a total of 55,200 US deaths annually resulting from occupational disease or injury (range 32,200-78,200). Our estimate is in the range reported by previous investigators, although we have restricted ourselves more than others to only those diseases with well-established occupational etiology, biasing our estimates conservatively. The underlying assumptions and data used to generate the estimates are well documented, so our estimates may be updated as new data emerges on occupational risks and exposed populations, providing an advantage over previous studies. We estimate that occupational deaths are the 8th leading cause of death in the US, after diabetes (64,751) but ahead of suicide (30,575), and greater than the annual number of motor vehicle deaths per year (43,501). Copyright 2003 Wiley-Liss, Inc.
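A minimal sketch of the attributable-fraction arithmetic (Levin's formula) used to convert cause-specific deaths into deaths due to occupation; the exposure proportions, relative risks, and death counts are illustrative placeholders, not the study's inputs.

```python
# Levin's formula for the attributable fraction given the proportion of the
# population exposed (p) and the relative risk (RR), and its use to convert
# cause-specific deaths into deaths attributable to occupation.
def attributable_fraction(p_exposed, rr):
    return p_exposed * (rr - 1.0) / (p_exposed * (rr - 1.0) + 1.0)

causes = {
    # cause: (proportion occupationally exposed, relative risk, total US deaths) -- illustrative
    "COPD":        (0.20, 1.6, 100000),
    "lung cancer": (0.10, 2.0, 150000),
}

occupational_deaths = sum(attributable_fraction(p, rr) * deaths
                          for p, rr, deaths in causes.values())
print(f"~{occupational_deaths:,.0f} deaths attributable to occupation (illustrative)")
```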
Weber, Stephanie A; Insaf, Tabassum Z; Hall, Eric S; Talbot, Thomas O; Huff, Amy K
2016-11-01
An enhanced research paradigm is presented to address the spatial and temporal gaps in fine particulate matter (PM2.5) measurements and generate realistic and representative concentration fields for use in epidemiological studies of human exposure to ambient air particulate concentrations. The general approach for research designed to analyze health impacts of exposure to PM2.5 is to use concentration data from the nearest ground-based air quality monitor(s), which typically have missing data on the temporal and spatial scales due to filter sampling schedules and monitor placement, respectively. To circumvent these data gaps, this research project uses a Hierarchical Bayesian Model (HBM) to generate estimates of PM2.5 in areas with and without air quality monitors by combining PM2.5 concentrations measured by monitors, PM2.5 concentration estimates derived from satellite aerosol optical depth (AOD) data, and Community-Multiscale Air Quality (CMAQ) model predictions of PM2.5 concentrations. This methodology represents a substantial step forward in the approach for developing representative PM2.5 concentration datasets to correlate with inpatient hospitalizations and emergency room visits data for asthma and inpatient hospitalizations for myocardial infarction (MI) and heart failure (HF) using case-crossover analysis. There were two key objectives of this study. The first was to show that the inputs to the HBM could be expanded to include AOD data in addition to data from PM2.5 monitors and predictions from CMAQ. The second was to determine whether inclusion of AOD surfaces in the HBM algorithms results in PM2.5 air pollutant concentration surfaces which more accurately predict hospital admittance and emergency room visits for MI, asthma, and HF. This study focuses on the New York City, NY metropolitan and surrounding areas during the 2004-2006 time period, in order to compare the health outcome impacts with those from previous studies and focus on any benefits derived from the changes in the HBM model surfaces. Consistent with previous studies, the results show that high PM2.5 exposure is associated with increased risk of asthma, myocardial infarction and heart failure. The estimates derived from concentration surfaces that incorporate AOD had a similar model fit and estimate of risk as compared to those derived from combining monitor and CMAQ data alone. Thus, this study demonstrates that estimates of PM2.5 concentrations from satellite data can be used to supplement PM2.5 monitor data in the estimates of risk associated with three common health outcomes. Results from this study were inconclusive regarding the potential benefits derived from adding AOD data to the HBM, as the addition of the satellite data did not significantly increase model performance. However, this study was limited to one metropolitan area over a short two-year time period. The use of next-generation, high temporal and spatial resolution satellite AOD data from geostationary and polar-orbiting satellites is expected to improve predictions in epidemiological studies in areas with fewer pollutant monitors or over wider geographic areas. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Development of a Multiple Input Integrated Pole-to-Pole Global CMORPH
NASA Astrophysics Data System (ADS)
Joyce, R.; Xie, P.
2013-12-01
A test system is being developed at NOAA Climate Prediction Center (CPC) to produce a passive microwave (PMW), IR-based, and model integrated high-resolution precipitation estimation on a 0.05° lat/lon grid covering the entire globe from pole to pole. Experiments have been conducted for a summer Test Bed period using data for July and August of 2009. The pole-to-pole global CMORPH system is built upon the Kalman Filter based CMORPH algorithm of Joyce and Xie (2011). First, retrievals of instantaneous precipitation rates from PMW observations aboard nine low earth orbit (LEO) satellites are decoded and pole-to-pole mapped onto a 0.05° lat/lon grid over the globe. Also precipitation estimates from LEO AVHRR retrievals are derived using a PDF matching of LEO IR with calibrated microwave combined (MWCOMB) precipitation retrievals. The motion vectors for the precipitating cloud systems are defined using information from both satellite IR observations and precipitation fields generated by the NCEP Climate Forecast System Reanalysis (CFSR). To this end, motion vectors are first computed for the CFSR hourly precipitation fields through cross-correlation analysis of consecutive hourly precipitation fields on the global T382 (~35 km) grid. In a similar manner, separate processing is also performed on satellite IR-based precipitation estimates to derive motion vectors from observations. A blended analysis of precipitating cloud motion vectors is then constructed through the combination of CFSR and satellite-derived vectors utilizing a two-dimensional optimal interpolation (2D-OI) method, in which CFSR-derived motion vectors are used as the first guess and subsequently satellite derived vectors modify the first guess. Weights used to generate the combinations are defined under the OI framework as a function of error statistics for the CFSR and satellite IR based motion vectors. The screened and calibrated PMW and AVHRR derived precipitation estimates are then separately spatially propagated forward and backward in time, using precipitating cloud motion vectors, from their observation time to the next PMW observation. The PMW estimates propagated in both the forward and backward directions are then combined with propagated IR-based precipitation estimates under the Kalman Filter framework, with weights defined based on previously determined error statistics dependent on latitude, season, surface type, and temporal distance from observation time. Performance of the pole-to-pole global CMORPH and its key components, including combined PMW (MWCOMB), IR-based, and model precipitation, as well as model-derived, IR-based, and blended precipitation motion vectors, will be examined against NSSL Q2 radar observed precipitation estimates over CONUS, Finland FMI radar precipitation, and a daily gauge-based analysis including daily Canadian surface reports over global land. Also an initial investigation will be performed over a January - February 2010 winter Test Bed period. Detailed results will be reported at the Fall 2013 AGU Meeting.
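For a single grid cell, the Kalman-filter combination reduces to an error-variance-weighted average of the propagated PMW and IR-based estimates; the sketch below shows that scalar step with placeholder error statistics, not CMORPH's actual weights.

```python
import numpy as np

def blend(estimates, error_vars):
    """Inverse-error-variance weighted combination of precipitation estimates for
    one grid cell (the scalar analogue of the Kalman-filter update used to merge
    forward/backward-propagated PMW and IR-based estimates)."""
    w = 1.0 / np.asarray(error_vars)
    return np.sum(w * np.asarray(estimates)) / np.sum(w)

# Placeholder error statistics (mm/h)^2; in CMORPH these depend on latitude, season,
# surface type, and time since the PMW observation.
fwd_pmw, bwd_pmw, ir = 2.4, 3.0, 1.5          # mm/h estimates at one cell
var_fwd, var_bwd, var_ir = 0.8, 1.1, 2.5

print(f"blended rate ~ {blend([fwd_pmw, bwd_pmw, ir], [var_fwd, var_bwd, var_ir]):.2f} mm/h")
```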
Bounding the moment deficit rate on crustal faults using geodetic data: Methods
Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael
2017-08-19
Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
NASA Astrophysics Data System (ADS)
Kotsuki, Shunji; Terasaki, Koji; Yashiro, Hasashi; Tomita, Hirofumi; Satoh, Masaki; Miyoshi, Takemasa
2017-04-01
This study aims to improve precipitation forecasts from numerical weather prediction (NWP) models through effective use of satellite-derived precipitation data. Kotsuki et al. (2016, JGR-A) successfully improved the precipitation forecasts by assimilating the Japan Aerospace eXploration Agency (JAXA)'s Global Satellite Mapping of Precipitation (GSMaP) data into the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) at 112-km horizontal resolution. Kotsuki et al. mitigated the non-Gaussianity of the precipitation variables with a Gaussian transform method applied to observed and forecasted precipitation, using the previous 30 days of precipitation data. This study extends the previous study by Kotsuki et al. and explores an online estimation of model parameters using ensemble data assimilation. We choose two globally uniform parameters: one is the cloud-to-rain auto-conversion parameter of Berry's scheme for large-scale condensation, and the other is the relative humidity threshold of the Arakawa-Schubert cumulus parameterization scheme. We perform online estimation of the two model parameters with an ensemble transform Kalman filter by assimilating the GSMaP precipitation data. The estimated parameters improve the analyzed and forecasted mixing ratio in the lower troposphere. Therefore, the parameter estimation would be a useful technique to improve the NWP models and their forecasts. This presentation will include the most recent progress up to the time of the symposium.
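The Gaussian transform in this line of work maps skewed precipitation amounts onto a standard normal variable via their empirical cumulative distribution over a recent climatology window. The sketch below shows the basic quantile-mapping idea only; zero-precipitation handling and the 30-day windowing details in the NICAM-LETKF system are not reproduced, and the sample values are invented.

```python
import numpy as np
from scipy import stats

def gaussian_transform(values, climatology):
    """Map precipitation amounts to standard-normal space using the empirical CDF
    of a climatology sample (e.g. the previous 30 days) -- a simple quantile-mapping
    sketch of the Gaussian transform idea, not the operational implementation."""
    clim = np.sort(np.asarray(climatology))
    n = clim.size
    ranks = np.searchsorted(clim, values, side="right")      # empirical ranks
    cdf = np.clip(ranks / (n + 1.0), 1e-6, 1 - 1e-6)          # avoid +/- infinity
    return stats.norm.ppf(cdf)

# Hypothetical 30-day precipitation sample (mm/day) and three values to transform
clim30 = np.random.gamma(shape=0.5, scale=6.0, size=30)
print(gaussian_transform(np.array([0.0, 2.5, 20.0]), clim30))
```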
Byrne, Michael E; Cortés, Enric; Vaudo, Jeremy J; Harvey, Guy C McN; Sampson, Mark; Wetherbee, Bradley M; Shivji, Mahmood
2017-08-16
Overfishing is a primary cause of population declines for many shark species of conservation concern. However, means of obtaining information on fishery interactions and mortality, necessary for the development of successful conservation strategies, are often fisheries-dependent and of questionable quality for many species of commercially exploited pelagic sharks. We used satellite telemetry as a fisheries-independent tool to document fisheries interactions, and quantify fishing mortality of the highly migratory shortfin mako shark ( Isurus oxyrinchus ) in the western North Atlantic Ocean. Forty satellite-tagged shortfin mako sharks tracked over 3 years entered the Exclusive Economic Zones of 19 countries and were harvested in fisheries of five countries, with 30% of tagged sharks harvested. Our tagging-derived estimates of instantaneous fishing mortality rates ( F = 0.19-0.56) were 10-fold higher than previous estimates from fisheries-dependent data (approx. 0.015-0.024), suggesting data used in stock assessments may considerably underestimate fishing mortality. Additionally, our estimates of F were greater than those associated with maximum sustainable yield, suggesting a state of overfishing. This information has direct application to evaluations of stock status and for effective management of populations, and thus satellite tagging studies have potential to provide more accurate estimates of fishing mortality and survival than traditional fisheries-dependent methodology. © 2017 The Author(s).
Nishiyama, K K; Macdonald, H M; Hanley, D A; Boyd, S K
2013-05-01
High-resolution peripheral quantitative computed tomography (HR-pQCT) measurements of distal radius and tibia bone microarchitecture and finite element (FE) estimates of bone strength performed well at classifying postmenopausal women with and without previous fracture. The HR-pQCT measurements outperformed dual energy x-ray absorptiometry (DXA) at classifying forearm fractures and fractures at other skeletal sites. Areal bone mineral density (aBMD) is the primary measurement used to assess osteoporosis and fracture risk; however, it does not take into account bone microarchitecture, which also contributes to bone strength. Thus, our objective was to determine if bone microarchitecture measured with HR-pQCT and FE estimates of bone strength could classify women with and without low-trauma fractures. We used HR-pQCT to assess bone microarchitecture at the distal radius and tibia in 44 postmenopausal women with a history of low-trauma fracture and 88 age-matched controls from the Calgary cohort of the Canadian Multicentre Osteoporosis Study (CaMos) study. We estimated bone strength using FE analysis and simulated distal radius aBMD from the HR-pQCT scans. Femoral neck (FN) and lumbar spine (LS) aBMD were measured with DXA. We used support vector machines (SVM) and a tenfold cross-validation to classify the fracture cases and controls and to determine accuracy. The combination of HR-pQCT measures of microarchitecture and FE estimates of bone strength had the highest area under the receiver operating characteristic (ROC) curve of 0.82 when classifying forearm fractures compared to an area under the curve (AUC) of 0.71 from DXA-derived aBMD of the forearm and 0.63 from FN and spine DXA. For all fracture types, FE estimates of bone strength at the forearm alone resulted in an AUC of 0.69. Models based on HR-pQCT measurements of bone microarchitecture and estimates of bone strength performed better than DXA-derived aBMD at classifying women with and without prior fracture. In future, these models may improve prediction of individuals at risk of low-trauma fracture.
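The classification workflow described (SVM, tenfold cross-validation, ROC analysis) can be sketched with standard tooling. The block below is a generic illustration with random placeholder features; the feature set, kernel, and sample sizes are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

# Hypothetical feature matrix: HR-pQCT microarchitecture measures plus FE-estimated strength
rng = np.random.default_rng(0)
X = rng.normal(size=(132, 6))            # 44 cases + 88 controls, 6 placeholder features
y = np.r_[np.ones(44), np.zeros(88)]     # 1 = prior low-trauma fracture

svm = SVC(kernel="rbf", probability=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_predict(svm, X, y, cv=cv, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, scores))
```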
Excess mortality during the warm summer of 2015 in Switzerland.
Vicedo-Cabrera, Ana M; Ragettli, Martina S; Schindler, Christian; Röösli, Martin
2016-01-01
In Switzerland, summer 2015 was the second warmest summer for 150 years (after summer 2003). For summer 2003, a 6.9% excess mortality was estimated for Switzerland, which corresponded to 975 extra deaths. The impact of the heat in summer 2015 in Switzerland has not so far been evaluated. Daily age group-, gender- and region-specific all-cause excess mortality during summer (June-August) 2015 was estimated, based on predictions derived from quasi-Poisson regression models fitted to the daily mortality data for the 10 previous years. Estimates of excess mortality were derived for 1 June to 31 August, at national and regional level, as well as by month and for specific heat episodes identified in summer 2015 by use of seven different definitions. 804 excess deaths (5.4%, 95% confidence interval [CI] 3.0‒7.9%) were estimated for summer 2015 compared with previous summers, with the highest percentage obtained for July (11.6%, 95% CI 3.7‒19.4%). Seventy-seven percent of deaths occurred in people aged 75 years and older. Ticino (10.3%, 95% CI -1.8‒22.4%), Northwestern Switzerland (9.5%, 95% CI 2.7‒16.3%) and Espace Mittelland (8.9%, 95% CI 3.7‒14.1%) showed highest excess mortality during this three-month period, whereas fewer deaths than expected (-3.3%, 95% CI -9.2‒2.6%) were observed in Eastern Switzerland, the coldest region. The largest excess estimate of 23.7% was obtained during days when both maximum apparent and minimum night-time temperature reached extreme values (+32 and +20 °C, respectively), with 31.0% extra deaths for periods of three days or more. Heat during summer 2015 was associated with an increase in mortality in the warmer regions of Switzerland and it mainly affected older people. Estimates for 2015 were only a little lower compared to those of summer 2003, indicating that mitigation measures to prevent heat-related mortality in Switzerland have not become noticeably effective in the last 10 years.
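Excess mortality of this kind is computed as observed deaths minus the deaths expected from a model fitted to reference years. The sketch below shows one quasi-Poisson-style version of that calculation (Poisson mean model with the dispersion estimated from the Pearson chi-square); the covariates, placeholder counts, and day-of-year structure are assumptions and far simpler than the age-, gender-, and region-specific models used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily summer death counts for 10 reference years (2005-2014)
df = pd.DataFrame({
    "deaths": np.random.poisson(150, size=920),
    "year":   np.repeat(np.arange(2005, 2015), 92),
    "doy":    np.tile(np.arange(152, 244), 10),      # June-August days of year
})
X = sm.add_constant(pd.get_dummies(df["doy"], drop_first=True, dtype=float))
# Quasi-Poisson behaviour: Poisson mean model, scale estimated from Pearson chi2
model = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit(scale="X2")

# Expected deaths for summer 2015 (same day-of-year design), then percentage excess
X2015 = X.iloc[:92]
observed_2015 = np.random.poisson(160, size=92)      # placeholder observations
expected_2015 = model.predict(X2015)
excess_pct = 100 * (observed_2015.sum() - expected_2015.sum()) / expected_2015.sum()
print(f"excess mortality: {excess_pct:.1f}%")
```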
Papadopoulou, Eleni; Poothong, Somrutai; Koekkoek, Jacco; Lucattini, Luisa; Padilla-Sánchez, Juan Antonio; Haugen, Margaretha; Herzke, Dorte; Valdersnes, Stig; Maage, Amund; Cousins, Ian T; Leonards, Pim E G; Småstuen Haug, Line
2017-10-01
Diet is a major source of human exposure to hazardous environmental chemicals, including many perfluoroalkyl acids (PFAAs). Several assessment methods of dietary exposure to PFAAs have been used previously, but there is a lack of comparisons between methods. To assess human exposure to PFAAs through diet by different methods and compare the results. We studied the dietary exposure to PFAAs in 61 Norwegian adults (74% women, average age: 42 years) using three methods: i) by measuring daily PFAA intakes through a 1-day duplicate diet study (separately in solid and liquid foods), ii) by estimating intake after combining food contamination with food consumption data, as assessed by 2-day weighted food diaries, and iii) by a Food Frequency Questionnaire (FFQ). We used existing food contamination data mainly from samples purchased in Norway and, if not available, data from food purchased in other European countries were used. Duplicate diet samples (n = 122) were analysed by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) to quantify 15 PFAAs (11 perfluoroalkyl carboxylates and 4 perfluoroalkyl sulfonates). Differences and correlations between measured and estimated intakes were assessed. The most abundant PFAAs in the duplicate diet samples were PFOA, PFOS and PFHxS, and the median total intakes were 5.6 ng/day, 11 ng/day and 0.78 ng/day, respectively. PFOS and PFOA concentrations were higher in solid than liquid samples. PFOS was the main contributor to the contamination in the solid samples (median concentration: 14 pg/g food), while it was PFOA in the liquid samples (median concentration: 0.72 pg/g food). High intakes of fats, oils, and eggs were statistically significantly related to high intakes of PFOS and PFOA from solid foods. High intake of milk and consumption of alcoholic beverages, as well as food in paper containers, were related to high PFOA intakes from liquid foods. PFOA intakes derived from the food diary and FFQ were significantly higher than those derived from the duplicate diet, but intakes of PFOS derived from the food diary and FFQ were significantly lower than those derived from the duplicate diet. We found a positive and statistically significant correlation between the PFOS intakes derived from the duplicate diet and those using the food diary (rho = 0.26, p-value = 0.041), but not with the FFQ. Additionally, PFOA intakes derived by duplicate diet were significantly correlated with estimated intakes from liquid food derived from the food diary (rho = 0.34, p = 0.008) and estimated intakes from the FFQ (rho = 0.25, p-value = 0.055). We provide evidence that a food diary or FFQ-based method can provide intake estimates comparable to PFOS and PFOA intakes derived from a duplicate diet study. These less burdensome methods are valuable and reliable tools to assess dietary exposure to PFASs in human studies. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Di Giacomo, Domenico; Bondár, István; Storchak, Dmitry A.; Engdahl, E. Robert; Bormann, Peter; Harris, James
2015-02-01
This paper outlines the re-computation and compilation of the magnitudes now contained in the final ISC-GEM Reference Global Instrumental Earthquake Catalogue (1900-2009). The catalogue is available via the ISC website (http://www.isc.ac.uk/iscgem/). The available re-computed MS and mb provided an ideal basis for deriving new conversion relationships to moment magnitude MW. Therefore, rather than using previously published regression models, we derived new empirical relationships using both generalized orthogonal linear and exponential non-linear models to obtain MW proxies from MS and mb. The new models were tested against true values of MW, and the newly derived exponential models were then preferred to the linear ones in computing MW proxies. For the final magnitude composition of the ISC-GEM catalogue, we preferred directly measured MW values as published by the Global CMT project for the period 1976-2009 (plus intermediate-depth earthquakes between 1962 and 1975). In addition, over 1000 publications have been examined to obtain direct seismic moment M0 and, therefore, also MW estimates for 967 large earthquakes during 1900-1978 (Lee and Engdahl, 2015) by various alternative methods to the current GCMT procedure. In all other instances we computed MW proxy values by converting our re-computed MS and mb values into MW, using the newly derived non-linear regression models. The final magnitude composition is an improvement in terms of magnitude homogeneity compared to previous catalogues. The magnitude completeness is not homogeneous over the 110 years covered by the ISC-GEM catalogue. Therefore, seismicity rate estimates may be strongly affected without a careful time window selection. In particular, the ISC-GEM catalogue appears to be complete down to MW 5.6 starting from 1964, whereas for the early instrumental period the completeness varies from ∼7.5 to 6.2. Further time and resources would be necessary to homogenize the magnitude of completeness over the entire catalogue length.
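Deriving MW proxies from MS or mb amounts to calibrating a regression against directly measured MW values. The block below fits one plausible exponential form to synthetic calibration pairs; the functional shape and all coefficients are illustrative assumptions, not the published ISC-GEM relationships.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(ms, a, b, c):
    """One plausible exponential proxy form (assumed): MW = exp(a + b*MS) + c."""
    return np.exp(a + b * ms) + c

# Hypothetical calibration pairs of re-computed MS and directly measured MW
ms = np.linspace(5.0, 8.5, 200)
mw_obs = np.exp(-0.2 + 0.25 * ms) + 2.9 + np.random.normal(0, 0.15, ms.size)

popt, pcov = curve_fit(exp_model, ms, mw_obs, p0=(0.0, 0.2, 3.0))
print("fitted (a, b, c):", popt)
print("MW proxy for MS 7.0:", exp_model(7.0, *popt))
```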
NASA Astrophysics Data System (ADS)
Leedham Elvidge, Emma; Bönisch, Harald; Brenninkmeijer, Carl A. M.; Engel, Andreas; Fraser, Paul J.; Gallacher, Eileen; Langenfelds, Ray; Mühle, Jens; Oram, David E.; Ray, Eric A.; Ridley, Anna R.; Röckmann, Thomas; Sturges, William T.; Weiss, Ray F.; Laube, Johannes C.
2018-03-01
In a changing climate, potential stratospheric circulation changes require long-term monitoring. Stratospheric trace gas measurements are often used as a proxy for stratospheric circulation changes via the mean age of air values derived from them. In this study, we investigated five potential age of air tracers - the perfluorocarbons CF4, C2F6 and C3F8 and the hydrofluorocarbons CHF3 (HFC-23) and HFC-125 - and compared them to the traditional tracer SF6 and a (relatively) shorter-lived species, HFC-227ea. A detailed uncertainty analysis was performed on mean ages derived from these new tracers to allow us to confidently compare their efficacy as age tracers to the existing tracer, SF6. Our results showed that uncertainties associated with the mean age derived from these new age tracers are similar to those derived from SF6, suggesting that these alternative compounds are suitable in this respect for use as age tracers. Independent verification of the suitability of these age tracers is provided by a comparison between samples analysed at the University of East Anglia and the Scripps Institution of Oceanography. All five tracers give younger mean ages than SF6, a discrepancy that increases with increasing mean age. Our findings qualitatively support recent work that suggests that the stratospheric lifetime of SF6 is significantly less than the previous estimate of 3200 years. The impact of these younger mean ages on three policy-relevant parameters - stratospheric lifetimes, fractional release factors (FRFs) and ozone depletion potentials - is investigated in combination with a recently improved methodology to calculate FRFs. Updates to previous estimates for these parameters are provided.
NASA Astrophysics Data System (ADS)
Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.
2017-07-01
This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.
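The final step, matrix-weighted fusion of local estimates that accounts for their cross-correlation, has a well-known closed form for two estimators (the Bar-Shalom-Campo combination). The sketch below shows that two-filter special case only, as a stand-in for the paper's more general covariance-based fusion; all numerical values are invented.

```python
import numpy as np

def fuse_two_local_filters(x1, P1, x2, P2, P12):
    """Matrix-weighted fusion of two local estimates with known cross-covariance
    (Bar-Shalom-Campo form), a minimal sketch of the least-squares fusion step."""
    P21 = P12.T
    S = P1 + P2 - P12 - P21
    K = (P1 - P12) @ np.linalg.inv(S)
    x_fused = x1 + K @ (x2 - x1)
    P_fused = P1 - K @ (P1 - P21)
    return x_fused, P_fused

# Hypothetical 2-state local filter outputs and covariances
x1, x2 = np.array([1.0, 0.5]), np.array([1.2, 0.4])
P1 = np.array([[0.5, 0.1], [0.1, 0.4]])
P2 = np.array([[0.6, 0.0], [0.0, 0.3]])
P12 = np.array([[0.1, 0.0], [0.0, 0.1]])
print(fuse_two_local_filters(x1, P1, x2, P2, P12))
```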
NASA Astrophysics Data System (ADS)
Amini, Changeez; Taherpour, Abbas; Khattab, Tamer; Gazor, Saeed
2017-01-01
This paper presents an improved propagation channel model for visible light in indoor environments. We employ this model to derive an enhanced positioning algorithm based on the relation between the time-of-arrivals (TOAs) and the distances, for two cases in which the transmitter-receiver vertical distance is either known or unknown. We propose two estimators, namely the maximum likelihood estimator and an estimator based on the method of moments. As an evaluation basis for these methods, we calculate the Cramer-Rao lower bound (CRLB) on the estimation performance. We show that the proposed model and estimators result in superior positioning performance when the transmitter and receiver are perfectly synchronized, in comparison to the existing state-of-the-art counterparts. Moreover, the corresponding CRLB of the proposed model represents about a 20 dB reduction in the localization error bound in comparison with the previous model for some practical scenarios.
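To make the TOA-to-position step concrete, the block below solves a generic least-squares positioning problem from TOA-derived ranges to known anchors. It is not the paper's maximum likelihood or method-of-moments estimator and does not use the improved channel model; the anchor layout, noise level, and speed-of-light conversion are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3e8  # speed of light, m/s

def toa_residuals(p, anchors, toas):
    """Residuals between TOA-derived ranges and ranges implied by candidate position p."""
    return np.linalg.norm(anchors - p, axis=1) - C * toas

# Hypothetical ceiling-mounted LED anchors (x, y, z in metres) and noisy TOAs
anchors = np.array([[0, 0, 3.0], [4, 0, 3.0], [0, 4, 3.0], [4, 4, 3.0]])
true_p = np.array([1.5, 2.0, 1.0])
toas = np.linalg.norm(anchors - true_p, axis=1) / C + np.random.normal(0, 1e-10, 4)

fit = least_squares(toa_residuals, x0=np.array([2.0, 2.0, 1.5]), args=(anchors, toas))
print("estimated receiver position:", fit.x)
```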
Shope, Christopher L.; Angeroth, Cory E.
2015-01-01
Effective management of surface waters requires a robust understanding of spatiotemporal constituent loadings from upstream sources and the uncertainty associated with these estimates. We compared the total dissolved solids (TDS) loading into the Great Salt Lake (GSL) for water year 2013 with estimates of previously sampled periods in the early 1960s. We also provide updated results on GSL loading, quantitatively bounded by sampling uncertainties, which are useful for current and future management efforts. Our statistical loading results were more accurate than those from simple regression models. Our results indicate that TDS loading to the GSL in water year 2013 was 14.6 million metric tons, with uncertainty ranging from 2.8 to 46.3 million metric tons, which varies greatly from previous regression estimates for water year 1964 of 2.7 million metric tons. Results also indicate that locations with increased sampling frequency are correlated with decreasing confidence intervals. Because time is incorporated into the LOADEST models, discrepancies are largely expected to be a function of temporally lagged salt storage delivery to the GSL associated with terrestrial and in-stream processes. By incorporating temporally variable estimates and statistically derived uncertainty of these estimates, we have provided quantifiable variability in the annual estimates of dissolved solids loading into the GSL. Further, our results support the need for increased monitoring of dissolved solids loading into saline lakes like the GSL by demonstrating the uncertainty associated with different levels of sampling frequency.
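LOADEST-style load estimation regresses the log of constituent load on functions of discharge and time. The block below sketches the commonly used 7-parameter rating-curve form fitted by ordinary least squares; the USGS LOADEST software adds bias corrections, censoring handling, and automatic model selection that are not reproduced here, and all data are synthetic.

```python
import numpy as np

def loadest_design(q, dtime):
    """Design matrix for a 7-parameter LOADEST-style model:
    ln(L) = a0 + a1*lnQ + a2*lnQ^2 + a3*sin(2*pi*t) + a4*cos(2*pi*t) + a5*t + a6*t^2,
    with centred ln-discharge and centred decimal time (a sketch of the approach)."""
    lnq = np.log(q) - np.mean(np.log(q))
    t = dtime - np.mean(dtime)
    return np.column_stack([np.ones_like(q), lnq, lnq**2,
                            np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t, t**2])

# Hypothetical samples: discharge (m^3/s), decimal time, TDS load (metric tons/day)
rng = np.random.default_rng(1)
q = rng.lognormal(3, 0.5, 120)
dtime = np.sort(rng.uniform(2012.8, 2013.8, 120))
load = np.exp(1.2 + 0.9 * (np.log(q) - np.log(q).mean())) * rng.lognormal(0, 0.2, 120)

X = loadest_design(q, dtime)
coef, *_ = np.linalg.lstsq(X, np.log(load), rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```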
Quantifying lost information due to covariance matrix estimation in parameter inference
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2017-02-01
Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows one to judge how far away a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing the Fisher matrix as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit, finding that significantly fewer simulations than previously thought are sufficient to reach satisfactory precisions. We apply our results to DES Science Verification weak lensing data, detecting a 10 per cent loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey with about 10 nuisance parameters, we find that 2900 simulations are sufficient to limit the systematically lost information to 1 per cent, with an additional uncertainty of about 2 per cent. Without any nuisance parameters, 1900 simulations are sufficient to lose only 1 per cent of information. We further derive estimators for all quantities needed for forecasting with estimated covariance matrices. Our formalism allows one to determine the sweet spot between running sophisticated simulations to reduce the number of nuisance parameters and running as many fast simulations as possible.
NASA Astrophysics Data System (ADS)
Ito, A.; Akimoto, H.
2006-12-01
We estimate the emissions of black carbon (BC) from open vegetation fires in southern hemisphere Africa from 1998 to 2005 using satellite information in conjunction with a biogeochemical model. Monthly burned areas at a 0.5-degree resolution are estimated from the Visible and Infrared Scanner (VIRS) fire count product and the Moderate Resolution Imaging Spectroradiometer (MODIS) burned area data set associated with the MODIS tree cover imagery in grasslands and woodlands. The monthly fuel load distribution is derived from a 0.5-degree terrestrial carbon cycle model in conjunction with satellite data. The monthly maps of combustion factor and emission factor are estimated using empirical models that predict the effects of fuel conditions on these factors in grasslands and woodlands. Our annual averaged BC emitted per unit area burned is 0.17 g BC m-2, which is consistent with the product of fuel consumption and emission factor typically measured in southern Africa. The BC emissions from open vegetation burning in southern Africa ranged from 0.26 Tg BC yr-1 for 2002 to 0.42 Tg BC yr-1 for 1998. The peak in BC emissions is identical to that from a previous top-down estimate using the Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) data. The sum of monthly emissions during the burning season in 2000 is in good agreement between our estimate (0.38 Tg) and a previous estimate constrained by a numerical model and measurements (0.47 Tg).
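Bottom-up fire emission bookkeeping of this kind multiplies burned area, fuel load, combustion factor, and emission factor. The arithmetic sketch below uses round, invented numbers (not the paper's gridded inputs) simply to show the unit chain from m² burned to Tg of BC.

```python
# E = area burned x fuel load x combustion factor x emission factor
area_burned_m2    = 1.5e11   # m^2 burned in a month (hypothetical)
fuel_load_kg_m2   = 0.35     # kg dry matter per m^2 (hypothetical)
combustion_factor = 0.85     # fraction of fuel actually consumed (hypothetical)
ef_bc_g_per_kg    = 0.48     # g BC emitted per kg dry matter burned (hypothetical)

bc_emission_tg = (area_burned_m2 * fuel_load_kg_m2 * combustion_factor
                  * ef_bc_g_per_kg) / 1e12   # grams -> Tg
print(f"BC emission: {bc_emission_tg:.3f} Tg")
```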
Peters, Susan; Kromhout, Hans; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Vermeulen, Roel
2013-01-01
We describe the elaboration and sensitivity analyses of a quantitative job-exposure matrix (SYN-JEM) for respirable crystalline silica (RCS). The aim was to gain insight into the robustness of the SYN-JEM RCS estimates based on critical decisions taken in the elaboration process. SYN-JEM for RCS exposure consists of three axes (job, region, and year) based on estimates derived from a previously developed statistical model. To elaborate SYN-JEM, several decisions were taken: i.e. the application of (i) a single time trend; (ii) region-specific adjustments in RCS exposure; and (iii) a prior job-specific exposure level (by the semi-quantitative DOM-JEM), with an override of 0 mg/m3 for jobs a priori defined as non-exposed. Furthermore, we assumed that exposure levels reached a ceiling in 1960 and remained constant prior to this date. We applied SYN-JEM to the occupational histories of subjects from a large international pooled community-based case-control study. Cumulative exposure levels derived with SYN-JEM were compared with those from alternative models, described by the Pearson correlation (Rp) and differences in unit of exposure (mg/m3-years). Alternative models concerned changes in application of job- and region-specific estimates and exposure ceiling, and omitting the a priori exposure ranking. Cumulative exposure levels for the study subjects ranged from 0.01 to 60 mg/m3-years, with a median of 1.76 mg/m3-years. Exposure levels derived from SYN-JEM and alternative models were overall highly correlated (Rp > 0.90), although somewhat lower when omitting the region estimate (Rp = 0.80) or not taking into account the assigned semi-quantitative exposure level (Rp = 0.65). Modification of the time trend (i.e. exposure ceiling at 1950 or 1970, or assuming a decline before 1960) caused the largest changes in absolute exposure levels (26-33% difference), but without changing the relative ranking (Rp = 0.99). Exposure estimates derived from SYN-JEM appeared to be plausible compared with (historical) levels described in the literature. Decisions taken in the development of SYN-JEM did not critically change the cumulative exposure levels. The influence of region-specific estimates needs to be explored in future risk analyses.
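To make the JEM structure concrete, the sketch below assembles an exposure estimate from a job-specific prior level, a region adjustment, and a single log-linear time trend with a pre-1960 ceiling, then sums over a work history to get cumulative exposure. The functional form, parameter values, and work history are illustrative assumptions mirroring the decisions listed above, not the published SYN-JEM model.

```python
import numpy as np

def syn_jem_style_estimate(job_prior, region_factor, year, trend=-0.05, ref_year=1998):
    """Sketch of a SYN-JEM-style estimate: job prior (mg/m3) scaled by a region
    adjustment and one log-linear time trend, held constant (ceiling) before 1960.
    Jobs with a zero prior stay non-exposed. All parameters here are assumed."""
    if job_prior == 0.0:
        return 0.0
    year_eff = max(year, 1960)                 # ceiling: constant before 1960
    return job_prior * region_factor * np.exp(trend * (year_eff - ref_year))

# Hypothetical work history: (job prior mg/m3, region factor, year), one year each
history = [(0.05, 1.2, 1955), (0.05, 1.2, 1965), (0.02, 0.9, 1980), (0.0, 1.0, 1990)]
cumulative = sum(syn_jem_style_estimate(*rec) for rec in history)   # mg/m3-years
print(f"cumulative RCS exposure: {cumulative:.3f} mg/m3-years")
```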
Tropical cyclone rainfall characteristics as determined from a satellite passive microwave radiometer
NASA Technical Reports Server (NTRS)
Rodgers, E. B.; Adler, R. F.
1979-01-01
Data from the Nimbus-5 Electrically Scanning Microwave Radiometer (ESMR-5) were used to calculate latent heat release and other rainfall parameters for over 70 satellite observations of 21 tropical cyclones in the tropical North Pacific Ocean. The results indicate that the ESMR-5 measurements can be useful in determining the rainfall characteristics of these storms and appear to be potentially useful in monitoring as well as predicting their intensity. The ESMR-5 derived total tropical cyclone rainfall estimates agree favorably with previous estimates for both the disturbance and typhoon stages. The mean typhoon rainfall rate (1.9 mm h(-1)) is approximately twice that of disturbances (1.1 mm h(-1)).
An investigation into exoplanet transits and uncertainties
NASA Astrophysics Data System (ADS)
Ji, Y.; Banks, T.; Budding, E.; Rhodes, M. D.
2017-06-01
A simple transit model is described along with tests of this model against published results for 4 exoplanet systems (Kepler-1, 2, 8, and 77). Data from the Kepler mission are used. The Markov Chain Monte Carlo (MCMC) method is applied to obtain realistic error estimates. Optimisation of limb darkening coefficients is subject to data quality. It is more likely for MCMC to derive an empirical limb darkening coefficient for light curves with S/N (signal to noise) above 15. Finally, the model is applied to Kepler data for 4 Kepler candidate systems (KOI 760.01, 767.01, 802.01, and 824.01) with previously unpublished results. Error estimates for these systems are obtained via the MCMC method.
Jafaruddin; Indratno, Sapto W; Nuraini, Nuning; Supriatna, Asep K; Soewono, Edy
2015-01-01
Estimating the basic reproductive ratio ℛ0 of dengue fever has continued to be an ever-increasing challenge among epidemiologists. In this paper we propose two different constructions to estimate ℛ0, derived from a dynamical system of a host-vector dengue transmission model. The construction is based on the original assumption that in the early stages of an epidemic the infected human compartment increases exponentially at the same rate as the infected mosquito compartment (previous work). In the first proposed construction, we modify previous work by assuming that the rates of infection for the mosquito and human compartments might be different. In the second construction, we add an improvement by including more realistic conditions in which the dynamics of the infected human compartment are coupled to the dynamics of the infected mosquito compartment, and vice versa. We apply our construction to real dengue epidemic data from SB Hospital, Bandung, Indonesia, during the outbreak period of Nov. 25, 2008-Dec. 2012. We also propose two scenarios to determine the take-off rate of infection at the beginning of a dengue epidemic for construction of the ℛ0 estimates: scenario I, from the equation for new dengue cases with respect to time (daily), and scenario II, from the equation for new dengue cases with respect to the cumulative number of new cases. The results show that our first construction of ℛ0 accommodates the take-off rate differences between mosquitoes and humans. Our second construction of the ℛ0 estimate takes into account the presence of infective mosquitoes in the early growth rate of infective humans, and vice versa. We conclude that the second approach is more realistic compared with our first approach and the previous work.
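The take-off rate in scenario I is essentially the slope of log new cases against time over the early, exponentially growing phase. The sketch below estimates only that rate from invented early-outbreak counts; the subsequent mapping from the take-off rate to ℛ0 depends on the host-vector model and is not reproduced here.

```python
import numpy as np

def take_off_rate(new_cases):
    """Scenario-I style take-off rate: slope of ln(new cases) versus time (days)
    over the early growth phase (zeros are skipped). A sketch of the idea only."""
    new_cases = np.asarray(new_cases, dtype=float)
    days = np.arange(new_cases.size)
    mask = new_cases > 0
    slope, _ = np.polyfit(days[mask], np.log(new_cases[mask]), 1)
    return slope

# Hypothetical early-outbreak daily dengue case counts
early_cases = [2, 3, 3, 5, 7, 9, 13, 16, 22, 30]
print(f"take-off rate: {take_off_rate(early_cases):.3f} per day")
```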
Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.
Deboeck, Pascal R
2010-08-06
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
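One way to estimate derivatives in the spirit described here is to fit a low-order polynomial within a sliding window and read the derivatives off the fitted coefficients. The sketch below is a generic local-polynomial estimator illustrating that idea, not the article's exact orthogonal-polynomial estimator or its R functions; window length and degree are arbitrary choices.

```python
import math
import numpy as np

def local_poly_derivatives(x, t, window=7, degree=2):
    """Estimate the level, first and second derivatives of a time series by fitting
    a degree-2 polynomial in a sliding window centred on each point."""
    half = window // 2
    out = np.full((len(x), degree + 1), np.nan)
    for i in range(half, len(x) - half):
        ts = t[i - half:i + half + 1] - t[i]          # centre time at 0
        seg = x[i - half:i + half + 1]
        coeffs = np.polynomial.polynomial.polyfit(ts, seg, degree)
        # coeffs[k] multiplies ts**k, so the k-th derivative at ts=0 is k! * coeffs[k]
        out[i] = [math.factorial(k) * coeffs[k] for k in range(degree + 1)]
    return out   # columns: x(t), dx/dt, d2x/dt2

t = np.linspace(0, 20, 200)
x = np.sin(t) + np.random.normal(0, 0.05, t.size)
print(local_poly_derivatives(x, t)[100])   # roughly [sin, cos, -sin] at t[100]
```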
NASA Astrophysics Data System (ADS)
Nabetani, Yu; Takamura, Hazuki; Uchikoshi, Akino; Hassan, Syed Zahid; Shimada, Tetsuya; Takagi, Shinsuke; Tachibana, Hiroshi; Masui, Dai; Tong, Zhiwei; Inoue, Haruo
2016-06-01
Photo-responsive nanoscrolls can be successfully fabricated by mixing a polyfluoroalkyl azobenzene derivative and a niobate nanosheet, which is exfoliated from potassium hexaniobate. In this study, we have found that the photo-responsive nanoscroll shows a morphological motion of winding and unwinding, which is basically due to the nanosheet sliding within the nanoscroll, by efficient photo-isomerization reactions of the intercalated azobenzene in addition to the interlayer distance change of the nanoscrolls. The relative nanosheet sliding of the nanoscroll is estimated to be ca. 280 nm from the AFM morphology analysis. The distance of the sliding motion is over 20 times that of the averaged nanosheet sliding in the azobenzene/niobate hybrid film reported previously. Photo-responsive nanoscrolls can be expected to be novel photo-activated actuators and artificial muscle model materials. Electronic supplementary information (ESI) available: Fig. S1. Photo-isomerization reaction of nanoscrolls. See DOI: 10.1039/c6nr02177h
Jorgensen, David P.; Hanshaw, Maiana N.; Schmidt, Kevin M.; Laber, Jayme L; Staley, Dennis M.; Kean, Jason W.; Restrepo, Pedro J.
2011-01-01
A portable truck-mounted C-band Doppler weather radar was deployed to observe rainfall over the Station Fire burn area near Los Angeles, California, during the winter of 2009/10 to assist with debris-flow warning decisions. The deployments were a component of a joint NOAA–U.S. Geological Survey (USGS) research effort to improve definition of the rainfall conditions that trigger debris flows from steep topography within recent wildfire burn areas. A procedure was implemented to blend various dual-polarized estimators of precipitation (for radar observations taken below the freezing level) using threshold values for differential reflectivity and specific differential phase shift that improves the accuracy of the rainfall estimates over a specific burn area sited with terrestrial tipping-bucket rain gauges. The portable radar outperformed local Weather Surveillance Radar-1988 Doppler (WSR-88D) National Weather Service network radars in detecting rainfall capable of initiating post-fire runoff-generated debris flows. The network radars underestimated hourly precipitation totals by about 50%. Consistent with intensity–duration threshold curves determined from past debris-flow events in burned areas in Southern California, the portable radar-derived rainfall rates exceeded the empirical thresholds over a wider range of storm durations with a higher spatial resolution than local National Weather Service operational radars. Moreover, the truck-mounted C-band radar dual-polarimetric-derived estimates of rainfall intensity provided a better guide to the expected severity of debris-flow events, based on criteria derived from previous events using rain gauge data, than traditional radar-derived rainfall approaches using reflectivity–rainfall relationships for either the portable or operational network WSR-88D radars. Part of the reason for the improvement was due to siting the radar closer to the burn zone than the WSR-88Ds, but use of the dual-polarimetric variables improved the rainfall estimation by ~12% over the use of traditional Z–R relationships.
NASA Astrophysics Data System (ADS)
Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.
2012-12-01
Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofractures. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure water depth in lakes remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high-resolution optical satellite images such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS and LANDSAT. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lacked any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically corrected ASTER and MODIS data as input to the water-reflectance model. Three physical parameters are needed: namely bed albedo, the water attenuation coefficient and the reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived meltwater on ice sheet dynamics. Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
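The depth retrieval and the Monte Carlo uncertainty propagation can be sketched compactly. The block below uses a Bouguer-Lambert-Beer style inversion in the Sneed-and-Hamilton form with the three parameters listed above; all parameter distributions and values are illustrative assumptions, not the study's calibrated inputs.

```python
import numpy as np

def lake_depth(r_pixel, bed_albedo, r_deep, g):
    """Depth retrieval of the form z = [ln(Ad - Rinf) - ln(Rpix - Rinf)] / g, with
    Ad the lake-bed albedo, Rinf the reflectance of optically deep water and g an
    effective attenuation coefficient (a sketch following the cited approach)."""
    return (np.log(bed_albedo - r_deep) - np.log(r_pixel - r_deep)) / g

# Monte Carlo propagation of parameter uncertainty into depth (assumed spreads)
rng = np.random.default_rng(42)
n = 10_000
r_pixel = 0.25                                 # observed band reflectance
bed_albedo = rng.normal(0.45, 0.03, n)         # bed albedo
r_deep = rng.normal(0.05, 0.01, n)             # optically deep water reflectance
g = rng.normal(0.80, 0.10, n)                  # attenuation coefficient, 1/m

depths = lake_depth(r_pixel, bed_albedo, r_deep, g)
print(f"depth = {np.nanmean(depths):.2f} m +/- {np.nanstd(depths):.2f} m")
```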
Robust double gain unscented Kalman filter for small satellite attitude estimation
NASA Astrophysics Data System (ADS)
Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun
2017-08-01
Limited by the low precision of small satellite sensors, high-performance estimation theory remains the most popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods make use only of the current time-step's a priori measurement residuals to complete the measurement update and state estimation, and ignore the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors are always present in the attitude dynamic system, which places higher performance requirements on the classical KF for the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy the above requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors can be exhibited in the measurement residual; therefore, the new method derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and reduce the influence of uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low-precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).
van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T
2012-10-01
Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.
Burgette, Reed J.; Hanson, Austin; Scharer, Katherine M.; Midttun, Nikolas
2016-01-01
The Sierra Madre Fault is a reverse fault system along the southern flank of the San Gabriel Mountains near Los Angeles, California. This study focuses on the Central Sierra Madre Fault (CSMF) in an effort to provide numeric dating on surfaces with ages previously estimated from soil development alone. We have refined previous geomorphic mapping conducted in the western portion of the CSMF near Pasadena, CA, with the aid of new lidar data. This progress report focuses on our geochronology strategy employed in collecting samples and interpreting data to determine a robust suite of terrace surface ages. Sample sites for terrestrial cosmogenic nuclide and luminescence dating techniques were selected to be redundant and to be validated through relative geomorphic relationships between inset terrace levels. Additional sample sites were selected to evaluate the post-abandonment histories of terrace surfaces. We will combine lidar-derived displacement data with surface ages to estimate slip rates for the CSMF.
Ab initio calculation of infrared intensities for hydrogen peroxide
NASA Technical Reports Server (NTRS)
Rogers, J. D.; Hillman, J. J.
1982-01-01
Results of an ab initio SCF quantum mechanical study are used to derive estimates for the infrared intensities of the fundamental vibrations of hydrogen peroxide. Atomic polar tensors (APTs) were calculated on the basis of a 4-31G basis set, and used to derive absolute intensities for the vibrational transitions. Comparison of the APTs calculated for H2O2 with those previously obtained for H2O and CH3OH, and of the absolute intensities derived from the H2O2 APTs with those derived from APTs transferred from H2O and CH3OH, reveals the sets of values to differ by no more than a factor of two, supporting the validity of the theoretical calculation. Values of the infrared intensities obtained correspond to A1 = 14.5 km/mol, A2 = 0.91 km/mol, A3 = 0.058 km/mol, A4 = 123 km/mol, A5 = 46.2 km/mol, and A6 = 101 km/mol. Charge, charge flux and overlap contributions to the dipole moment derivatives are also computed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nisbet, A.F.; Woodman, R.F.M.
A database of soil-to-plant transfer factors for radiocesium and radiostrontium has been compiled for arable crops from published and unpublished sources. The database is more extensive than previous compilations of data published by the International Union of Radioecologists, containing new information for Scandinavia and Greece in particular. It also contains ancillary data on important soil characteristics. The database is sub-divided into 28 soil-crop combinations, covering four soil types and seven crop groups. Statistical analyses showed that transfer factors for radiocesium could not generally be predicted as a function of climatic region, type of experiment, age of contamination, or silt characteristics. However, significant relationships accounting for more than 30% of the variability in transfer factor were identified between transfer factors for radiostrontium and soil pH/organic matter status for a few soil-crop combinations. Best estimate transfer factors for radiocesium and radiostrontium were calculated for 28 soil-crop combinations, based on their geometric means: only the edible parts were considered. To predict the likely value of future individual transfer factors, 95% confidence intervals were also derived. A comparison of best estimate transfer factors derived in this study with recommended values published by the International Union of Radioecologists in 1989 and 1992 was made for comparable soil-crop groupings. While there were no significant differences between the best estimate values derived in this study and the 1992 data, radiological assessments that still use 1989 data may be unnecessarily cautious.
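Best estimates based on geometric means, plus an interval meant to cover likely future individual transfer factors, follow naturally from working on log-transformed data. The sketch below shows one lognormal prediction-interval construction; the exact interval used in the report may differ, and the transfer-factor values are invented.

```python
import numpy as np
from scipy import stats

def gm_and_prediction_interval(tf, level=0.95):
    """Geometric mean of transfer factors plus a lognormal prediction interval
    for a future individual value (a sketch of the general approach)."""
    logs = np.log(np.asarray(tf))
    n, mean, sd = logs.size, logs.mean(), logs.std(ddof=1)
    tcrit = stats.t.ppf(0.5 + level / 2, df=n - 1)
    half = tcrit * sd * np.sqrt(1 + 1 / n)        # prediction-interval half width
    return np.exp(mean), np.exp(mean - half), np.exp(mean + half)

# Hypothetical radiocesium transfer factors for one soil-crop combination
tf = np.array([0.012, 0.03, 0.008, 0.05, 0.02, 0.015, 0.04, 0.01])
gm, lo, hi = gm_and_prediction_interval(tf)
print(f"best estimate {gm:.3f}, 95% interval {lo:.3f}-{hi:.3f}")
```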
Irons, Trevor P.; Hobza, Christopher M.; Steele, Gregory V.; Abraham, Jared D.; Cannia, James C.; Woodward, Duane D.
2012-01-01
Surface nuclear magnetic resonance, a noninvasive geophysical method, measures a signal directly related to the amount of water in the subsurface. This allows for low-cost quantitative estimates of hydraulic parameters. In practice, however, additional factors influence the signal, complicating interpretation. The U.S. Geological Survey, in cooperation with the Central Platte Natural Resources District, evaluated whether hydraulic parameters derived from surface nuclear magnetic resonance data could provide valuable input into groundwater models used for evaluating water-management practices. Two calibration sites in Dawson County, Nebraska, were chosen based on previous detailed hydrogeologic and geophysical investigations. At both sites, surface nuclear magnetic resonance data were collected, and derived parameters were compared with results from four constant-discharge aquifer tests previously conducted at those same sites. Additionally, borehole electromagnetic-induction flowmeter data were analyzed as a less-expensive surrogate for traditional aquifer tests. Building on recent work, a novel surface nuclear magnetic resonance modeling and inversion method was developed that incorporates electrical conductivity and effects due to magnetic-field inhomogeneities, both of which can have a substantial impact on the data. After comparing surface nuclear magnetic resonance inversions at the two calibration sites, the nuclear magnetic-resonance-derived parameters were compared with previously performed aquifer tests in the Central Platte Natural Resources District. This comparison served as a blind test for the developed method. The nuclear magnetic-resonance-derived aquifer parameters were in agreement with results of aquifer tests where the environmental noise allowed data collection and the aquifer test zones overlapped with the surface nuclear magnetic resonance testing. In some cases, the previously performed aquifer tests were not designed fully to characterize the aquifer, and the surface nuclear magnetic resonance was able to provide missing data. In favorable locations, surface nuclear magnetic resonance is able to provide valuable noninvasive information about aquifer parameters and should be a useful tool for groundwater managers in Nebraska.
NASA Astrophysics Data System (ADS)
Robertson, K. M.; Milliken, R. E.; Li, S.
2016-10-01
Quantitative mineral abundances of lab derived clay-gypsum mixtures were estimated using a revised Hapke VIS-NIR and Shkuratov radiative transfer model. Montmorillonite-gypsum mixtures were used to test the effectiveness of the model in distinguishing between subtle differences in minor absorption features that are diagnostic of mineralogy in the presence of strong H2O absorptions that are not always diagnostic of distinct phases or mineral abundance. The optical constants (k-values) for both endmembers were determined from bi-directional reflectance spectra measured in RELAB as well as on an ASD FieldSpec3 in a controlled laboratory setting. Multiple size fractions were measured in order to derive a single k-value from optimization of the optical path length in the radiative transfer models. It is shown that with careful experimental conditions, optical constants can be accurately determined from powdered samples using a field spectrometer, consistent with previous studies. Variability in the montmorillonite hydration level increased the uncertainties in the derived k-values, but estimated modal abundances for the mixtures were still within 5% of the measured values. Results suggest that the Hapke model works well in distinguishing between hydrated phases that have overlapping H2O absorptions and it is able to detect gypsum and montmorillonite in these simple mixtures where they are present at levels of ∼10%. Care must be taken however to derive k-values from a sample with appropriate H2O content relative to the modeled spectra. These initial results are promising for the potential quantitative analysis of orbital remote sensing data of hydrated minerals, including more complex clay and sulfate assemblages such as mudstones examined by the Curiosity rover in Gale crater.
NASA Technical Reports Server (NTRS)
Ulsig, Laura; Nichol, Caroline J.; Huemmrich, Karl F.; Landis, David R.; Middleton, Elizabeth M.; Lyapustin, Alexei I.; Mammarella, Ivan; Levula, Janne; Porcar-Castell, Albert
2017-01-01
Long-term observations of vegetation phenology can be used to monitor the response of terrestrial ecosystems to climate change. Satellite remote sensing provides the most efficient means to observe phenological events through time series analysis of vegetation indices such as the Normalized Difference Vegetation Index (NDVI). This study investigates the potential of a Photochemical Reflectance Index (PRI), which has been linked to vegetation light use efficiency, to improve the accuracy of MODIS-based estimates of phenology in an evergreen conifer forest. Timings of the start and end of the growing season (SGS and EGS) were derived from a 13-year-long time series of PRI and NDVI based on a MAIAC (multi-angle implementation of atmospheric correction) processed MODIS dataset and standard MODIS NDVI product data. The derived dates were validated with phenology estimates from ground-based flux tower measurements of ecosystem productivity. Significant correlations were found between the MAIAC time series and ground-estimated SGS (R² = 0.36-0.8), which is remarkable since previous studies have found it difficult to observe inter-annual phenological variations in evergreen vegetation from satellite data. The considerably noisier NDVI product could not accurately predict SGS, and EGS could not be derived successfully from any of the time series. While the strongest relationship overall was found between SGS derived from the ground data and PRI, MAIAC NDVI exhibited high correlations with SGS more consistently (R² > 0.6 in all cases). The results suggest that PRI can serve as an effective indicator of spring seasonal transitions; however, additional work is necessary to confirm the relationships observed and to further explore the usefulness of MODIS PRI for detecting phenology.
NASA Astrophysics Data System (ADS)
Teng, W. L.; Shannon, H. D.
2013-12-01
The USDA World Agricultural Outlook Board (WAOB) is responsible for monitoring weather and climate impacts on domestic and foreign crop development. One of WAOB's primary goals is to determine the net cumulative effect of weather and climate anomalies on final crop yields. To this end, a broad array of information is consulted, including maps, charts, and time series of recent weather, climate, and crop observations; numerical output from weather and crop models; and reports from the press, USDA attachés, and foreign governments. The resulting agricultural weather assessments are published in the Weekly Weather and Crop Bulletin, to keep farmers, policy makers, and commercial agricultural interests informed of weather and climate impacts on agriculture. Because both the amount and timing of precipitation significantly affect crop yields, WAOB has often, as part of its operational process, used historical time series of surface-based precipitation observations to visually identify growing seasons with similar (analog) weather patterns as, and help estimate crop yields for, the current growing season. As part of a larger effort to improve WAOB estimates by integrating NASA remote sensing observations and research results into WAOB's decision-making environment, a more rigorous, statistical method for identifying analog years was developed. This method, termed the analog index (AI), is based on the Nash-Sutcliffe model efficiency coefficient. The AI was computed for five study areas and six growing seasons of data analyzed (2003-2007 as potential analog years and 2008 as the target year). Previously reported results compared the performance of AI for time series derived from surface-based observations vs. satellite-retrieved precipitation data. Those results showed that, for all five areas, crop yield estimates derived from satellite-retrieved precipitation data are closer to measured yields than are estimates derived from surface-based precipitation observations. Subsequent work has compared the relative performance of AI for time series derived from satellite-retrieved surface soil moisture data and from root zone soil moisture derived from the assimilation of surface soil moisture data into a land surface model. These results, which also showed the potential benefits of satellite data for analog year analyses, will be presented.
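The analog index as defined here is the Nash-Sutcliffe efficiency between a candidate year's precipitation series and the current (target) season. The sketch below implements that coefficient and scores two invented candidate years; the choice of cumulative weekly totals as the compared series is an assumption about preprocessing, not a statement of WAOB's exact procedure.

```python
import numpy as np

def analog_index(candidate, target):
    """Nash-Sutcliffe efficiency between a candidate year's series and the target
    season: 1 means a perfect analog, values <= 0 mean no skill beyond the
    target-season mean."""
    candidate, target = np.asarray(candidate, float), np.asarray(target, float)
    return 1.0 - np.sum((target - candidate) ** 2) / np.sum((target - target.mean()) ** 2)

# Hypothetical weekly cumulative precipitation (mm): target season and two candidates
target = np.array([10, 25, 40, 48, 60, 85, 110, 130])
year_a = np.array([12, 27, 38, 50, 63, 80, 105, 128])
year_b = np.array([ 5, 10, 30, 70, 75, 78,  82,  90])
print("AI(year_a):", round(analog_index(year_a, target), 3),
      " AI(year_b):", round(analog_index(year_b, target), 3))
```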
Tan, Sarah; Makela, Susanna; Heller, Daliah; Konty, Kevin; Balter, Sharon; Zheng, Tian; Stark, James H
2018-06-01
Existing methods to estimate the prevalence of chronic hepatitis C (HCV) in New York City (NYC) are limited in scope and fail to assess hard-to-reach subpopulations with highest risk such as injecting drug users (IDUs). To address these limitations, we employ a Bayesian multi-parameter evidence synthesis model to systematically combine multiple sources of data, account for bias in certain data sources, and provide unbiased HCV prevalence estimates with associated uncertainty. Our approach improves on previous estimates by explicitly accounting for injecting drug use and including data from high-risk subpopulations such as the incarcerated, and is more inclusive, utilizing ten NYC data sources. In addition, we derive two new equations to allow age at first injecting drug use data for former and current IDUs to be incorporated into the Bayesian evidence synthesis, a first for this type of model. Our estimated overall HCV prevalence as of 2012 among NYC adults aged 20-59 years is 2.78% (95% CI 2.61-2.94%), which represents between 124,900 and 140,000 chronic HCV cases. These estimates suggest that HCV prevalence in NYC is higher than previously indicated from household surveys (2.2%) and the surveillance system (2.37%), and that HCV transmission is increasing among young injecting adults in NYC. An ancillary benefit from our results is an estimate of current IDUs aged 20-59 in NYC: 0.58% or 27,600 individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
Coelho, A V C; Moura, R R; Cavalcanti, C A J; Guimarães, R L; Sandrin-Garcia, P; Crovella, S; Brandão, L A C
2015-03-31
Genetic association studies determine how genes influence traits. However, non-detected population substructure may bias the analysis, resulting in spurious results. One method to detect substructure is to genotype ancestry informative markers (AIMs) besides the candidate variants, quantifying how much ancestral populations contribute to the samples' genetic background. The present study aimed to use a minimum quantity of markers, while retaining full potential to estimate ancestries. We tested the feasibility of a subset of the 12 most informative markers from a previously established study to estimate influence from three ancestral populations: European, African and Amerindian. The results showed that in a sample with a diverse ethnicity (N = 822) derived from 1000 Genomes database, the 12 AIMs had the same capacity to estimate ancestries when compared to the original set of 128 AIMs, since estimates from the two panels were closely correlated. Thus, these 12 SNPs were used to estimate ancestry in a new sample (N = 192) from an admixed population in Recife, Northeast Brazil. The ancestry estimates from Recife subjects were in accordance with previous studies, showing that Northeastern Brazilian populations show great influence from European ancestry (59.7%), followed by African (23.0%) and Amerindian (17.3%) ancestries. Ethnicity self-classification according to skin-color was confirmed to be a poor indicator of population substructure in Brazilians, since ancestry estimates overlapped between classifications. Thus, our streamlined panel of 12 markers may substitute panels with more markers, while retaining the capacity to control for population substructure and admixture, thereby reducing sample processing time.
Chiu, Chun-Huo; Wang, Yi-Ting; Walther, Bruno A; Chao, Anne
2014-09-01
It is difficult to accurately estimate species richness if there are many almost undetectable species in a hyper-diverse community. Practically, an accurate lower bound for species richness is preferable to an inaccurate point estimator. The traditional nonparametric lower bound developed by Chao (1984, Scandinavian Journal of Statistics 11, 265-270) for individual-based abundance data uses only the information on the rarest species (the numbers of singletons and doubletons) to estimate the number of undetected species in samples. Applying a modified Good-Turing frequency formula, we derive an approximate formula for the first-order bias of this traditional lower bound. The approximate bias is estimated by using additional information (namely, the numbers of tripletons and quadrupletons). This approximate bias can be corrected, and an improved lower bound is thus obtained. The proposed lower bound is nonparametric in the sense that it is universally valid for any species abundance distribution. A similar type of improved lower bound can be derived for incidence data. We test our proposed lower bounds on simulated data sets generated from various species abundance models. Simulation results show that the proposed lower bounds always reduce bias over the traditional lower bounds and improve accuracy (as measured by mean squared error) when the heterogeneity of species abundances is relatively high. We also apply the proposed new lower bounds to real data for illustration and for comparisons with previously developed estimators. © 2014, The International Biometric Society.
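As a rough illustration, the sketch below computes the classic Chao1 lower bound from singletons and doubletons and then adds a bias correction built from tripletons and quadrupletons, in the spirit of the improved bound described here. The correction follows the commonly cited form of the improved estimator; the published version includes further small-sample adjustments, and the abundance vector is only a toy example.

```python
from collections import Counter

def chao1_and_improved(abundances):
    """Nonparametric lower bounds for species richness from individual-based
    abundance data. Chao1 uses singletons (f1) and doubletons (f2); the
    improved bound adds a first-order bias correction from tripletons (f3)
    and quadrupletons (f4), following the Good-Turing argument sketched above."""
    abundances = [a for a in abundances if a > 0]
    s_obs = len(abundances)
    freq = Counter(abundances)
    f1, f2, f3, f4 = (freq.get(k, 0) for k in (1, 2, 3, 4))

    if f2 > 0:
        chao1 = s_obs + f1 * f1 / (2.0 * f2)
    else:
        chao1 = s_obs + f1 * (f1 - 1) / 2.0

    if f4 > 0:
        correction = (f3 / (4.0 * f4)) * max(f1 - (f2 * f3) / (2.0 * f4), 0.0)
    else:
        correction = 0.0
    return chao1, chao1 + correction

counts = [1, 1, 1, 2, 2, 3, 4, 4, 7, 12, 30]   # toy abundance vector
print(chao1_and_improved(counts))
```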
Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara
2012-08-01
Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)(whole). Regional K1 values were calculated using this uniform global input function, which simplifies equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with a previously established method. Whole LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
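For orientation, a generic one-tissue-compartment fit with a blood-volume/spillover term and non-uniform weighting is sketched below. This is not the paper's algorithm (it does not construct the uniform global input function Ca(t)(whole) or convert K1 to MBF), and the input function, time-activity curve, and weights are all synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def tissue_model(t, K1, k2, Va, Ca):
    """One-tissue-compartment model with a blood-volume/spillover fraction Va.
    Ca is the arterial input function sampled on the same time grid t (minutes)."""
    dt = np.gradient(t)
    conv = np.array([np.sum(Ca[:i + 1] * np.exp(-k2 * (t[i] - t[:i + 1])) * dt[:i + 1])
                     for i in range(len(t))])
    return (1.0 - Va) * K1 * conv + Va * Ca

# Synthetic frame times, input function, and noisy myocardial curve.
t = np.linspace(0.1, 6.0, 26)
Ca = 50.0 * t * np.exp(-2.0 * t)
roi = tissue_model(t, 0.6, 0.2, 0.3, Ca) + np.random.normal(0.0, 0.5, t.size)

# Non-uniform weighting: larger sigma (lower weight) on early, noisy frames.
sigma = 1.0 / np.sqrt(np.maximum(t, 0.5))
popt, _ = curve_fit(lambda tt, K1, k2, Va: tissue_model(tt, K1, k2, Va, Ca),
                    t, roi, p0=[0.5, 0.1, 0.2], sigma=sigma, bounds=(0, [5, 5, 1]))
K1_est, k2_est, Va_est = popt
```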
Estimating Aeroheating of a 3D Body Using a 2D Flow Solver
NASA Technical Reports Server (NTRS)
Scott, Carl D.; Brykina, Irina G.
2005-01-01
A method for rapidly estimating the aeroheating, shear stress, and other properties of hypersonic flow about a three-dimensional (3D) blunt body has been devised. First, the geometry of the body is specified in Cartesian coordinates. The surface of the body is then described by its derivatives, coordinates, and principal curvatures. Next, relatively simple, previously derived equations are used to find, for each desired combination of angle of attack and meridional angle, a scaling factor and the shape of an equivalent axisymmetric body. These factors and equivalent shapes are entered as inputs into a previously developed computer program that solves the two-dimensional (2D) equations of flow in a non-equilibrium viscous shock layer (VSL) about an axisymmetric body. The coordinates in the output of the VSL code are transformed back to the Cartesian coordinates of the 3D body, so that computed flow quantities can be registered with locations in the 3D flow field of interest. In tests in which the 3D bodies were elliptic paraboloids, the estimates obtained by use of this method were found to agree well with solutions of 3D, finite-rate-chemistry, thin-VSL equations for a catalytic body.
Semimajor Axis Estimation Strategies
NASA Technical Reports Server (NTRS)
How, Jonathan P.; Alfriend, Kyle T.; Breger, Louis; Mitchell, Megan
2004-01-01
This paper extends previous analysis on the impact of sensing noise for the navigation and control aspects of formation flying spacecraft. We analyze the use of Carrier-phase Differential GPS (CDGPS) in relative navigation filters, with a particular focus on the filter correlation coefficient. This work was motivated by previous publications which suggested that a "good" navigation filter would have a strong correlation (i.e., coefficient near -1) to reduce the semimajor axis (SMA) error, and therefore, the overall fuel use. However, practical experience with CDGPS-based filters has shown this strong correlation seldom occurs (typical correlations approx. -0.1), even when the estimation accuracies are very good. We derive an analytic estimate of the filter correlation coefficient and demonstrate that, for the process and sensor noises levels expected with CDGPS, the expected value will be very low. It is also demonstrated that this correlation can be improved by increasing the time step of the discrete Kalman filter, but since the balance condition is not satisfied, the SMA error also increases. These observations are verified with several linear simulations. The combination of these simulations and analysis provide new insights on the crucial role of the process noise in determining the semimajor axis knowledge.
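The role of the position-velocity error correlation can be illustrated with first-order error propagation through the vis-viva equation: a strongly negative correlation lets the two error contributions partially cancel. This scalar sketch is not the paper's CDGPS filter analysis, and the orbit and error magnitudes below are hypothetical.

```python
import numpy as np

MU = 3.986004418e14          # Earth's GM, m^3/s^2

def sma_sigma(r, v, sigma_r, sigma_v, rho):
    """First-order propagation of radius and speed errors into semimajor-axis
    error via vis-viva, a = (2/r - v^2/mu)^(-1); rho is the correlation
    between the radial-position and speed errors."""
    a = 1.0 / (2.0 / r - v**2 / MU)
    da_dr = 2.0 * a**2 / r**2
    da_dv = 2.0 * a**2 * v / MU
    var_a = ((da_dr * sigma_r)**2 + (da_dv * sigma_v)**2
             + 2.0 * rho * da_dr * da_dv * sigma_r * sigma_v)
    return np.sqrt(var_a)

r = 6778e3                   # ~400 km altitude circular orbit
v = np.sqrt(MU / r)
for rho in (-1.0, -0.1, 0.0):
    print(rho, sma_sigma(r, v, sigma_r=0.05, sigma_v=5e-4, rho=rho))
```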
High-precision Location, Yield and Tectonic Release of North Korea's 3 September 2017 Nuclear Test
NASA Astrophysics Data System (ADS)
Yao, J.; Tian, D.; Wen, L.
2017-12-01
On 3 September 2017, the Democratic People's Republic of Korea (North Korea) announced that it had successfully conducted a thermonuclear (hydrogen bomb) test. The nuclear test was corroborated by reports of a seismic event with a magnitude ranging from 6.1 to 6.3 by many governmental and international agencies, although its thermonuclear nature remains to be confirmed. In this study, by combining modern methods of high-precision relocation and satellite imagery, and using the knowledge of a previous test (North Korea's 9 September 2016 test) as reference, we determine the location and yield of North Korea's 2017 test. The location of the 2017 test is determined by deriving the relative location between North Korea's 2017 and 2016 nuclear tests and using the location of the 2016 nuclear test previously determined by our group, while its yield is estimated based on the relative amplitude ratios of the Lg waves recorded for both events, the previously determined Lg-magnitude of the 2016 test, and the burial depth inferred from satellite imagery. The 2017 nuclear test is determined to be located at (41° 17' 53.52″ N, 129° 4' 27.12″ E) with a geographic precision of 100 m, and its yield is estimated to be 108±48 kt. The 2017 nuclear test and its four previous tests since 2009 are located several hundred meters apart, beneath the same mountain, Mantap. We also evaluate the tectonic release by the 2017 nuclear test and discuss its implications for the yield estimation of the test.
NASA Astrophysics Data System (ADS)
Rigden, Angela J.; Salvucci, Guido D.
2015-04-01
A novel method of estimating evapotranspiration (ET), referred to as the ETRHEQ method, is further developed, validated, and applied across the U.S. from 1961 to 2010. The ETRHEQ method estimates the surface conductance to water vapor transport, which is the key rate-limiting parameter of typical ET models, by choosing the surface conductance that minimizes the vertical variance of the calculated relative humidity profile averaged over the day. The ETRHEQ method, which was previously tested at five AmeriFlux sites, is modified for use at common weather stations and further validated at 20 AmeriFlux sites that span a wide range of climates and limiting factors. Averaged across all sites, the daily latent heat flux RMSE is ~26 W·m⁻² (or 15%). The method is applied across the U.S. at 305 weather stations and spatially interpolated using ANUSPLIN software. Gridded annual mean ETRHEQ ET estimates are compared with four data sets, including water balance-derived ET, machine-learning ET estimates based on FLUXNET data, North American Land Data Assimilation System project phase 2 ET, and a benchmark product that integrates 14 global ET data sets, with RMSEs ranging from 8.7 to 12.5 cm·yr⁻¹. The ETRHEQ method relies only on data measured at weather stations, an estimate of vegetation height derived from land cover maps, and an estimate of soil thermal inertia. These data requirements allow it to have greater spatial coverage than direct measurements, greater historical coverage than satellite methods, significantly less parameter specification than most land surface models, and no requirement for calibration.
Bubley, W J; Kneebone, J; Sulikowski, J A; Tsang, P C W
2012-04-01
Male and female spiny dogfish Squalus acanthias were collected in the western North Atlantic Ocean in the Gulf of Maine between July 2006 and June 2009. Squalus acanthias ranged from 25 to 102 cm stretch total length and were caught during all months of the year except January. Age estimates derived from banding patterns visible in both the vertebrae and second dorsal-fin spines were compared. Vertebral growth increments were visualized using a modified histological staining technique, which was verified as appropriate for obtaining age estimates. Marginal increment analysis of vertebrae verified the increment periodicity, suggesting annual band deposition. Based on increased precision and accuracy of age estimates, as well as more biologically realistic parameters generated in growth models, the current study found that vertebrae provided a more reliable and accurate means of estimating age in S. acanthias than the second dorsal-fin spine. Age estimates obtained from vertebrae ranged from <1 year-old to 17 years for male and 24 years for female S. acanthias. The two-parameter von Bertalanffy growth model fit to vertebrae-derived age estimates produced parameters of L∞ = 94·23 cm and k = 0·11 for males and L∞ = 100·76 cm and k = 0·12 for females. While these growth parameters differed from those previously reported for S. acanthias in the western North Atlantic Ocean, the causes of such differences were beyond the scope of the current study and remain to be determined. © 2011 The Authors. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
Tree stability under wind: simulating uprooting with root breakage using a finite element method.
Yang, Ming; Défossez, Pauline; Danjon, Frédéric; Fourcaud, Thierry
2014-09-01
Windstorms are the major natural hazard affecting European forests, causing tree damage and timber losses. Modelling tree anchorage mechanisms has progressed with advances in plant architectural modelling, but it is still limited in terms of estimation of anchorage strength. This paper aims to provide a new model for root anchorage, including the successive breakage of roots during uprooting. The model was based on the finite element method. The breakage of individual roots was taken into account using a failure law derived from previous work carried out on fibre metal laminates. Soil mechanical plasticity was considered using the Mohr-Coulomb failure criterion. The mechanical model for roots was implemented in the numerical code ABAQUS using beam elements embedded in a soil block meshed with 3-D solid elements. The model was tested by simulating tree-pulling experiments previously carried out on a tree of Pinus pinaster (maritime pine). Soil mechanical parameters were obtained from laboratory tests. Root system architecture was digitized and imported into ABAQUS while root material properties were estimated from the literature. Numerical simulations of tree-pulling tests exhibited realistic successive root breakages during uprooting, which could be seen in the resulting response curves. Broken roots could be visually located within the root system at any stage of the simulations. The model allowed estimation of anchorage strength in terms of the critical turning moment and accumulated energy, which were in good agreement with in situ measurements. This study provides the first model of tree anchorage strength for P. pinaster derived from the mechanical strength of individual roots. The generic nature of the model permits its further application to other tree species and soil conditions.
Tree stability under wind: simulating uprooting with root breakage using a finite element method
Yang, Ming; Défossez, Pauline; Danjon, Frédéric; Fourcaud, Thierry
2014-01-01
Background and Aims Windstorms are the major natural hazard affecting European forests, causing tree damage and timber losses. Modelling tree anchorage mechanisms has progressed with advances in plant architectural modelling, but it is still limited in terms of estimation of anchorage strength. This paper aims to provide a new model for root anchorage, including the successive breakage of roots during uprooting. Methods The model was based on the finite element method. The breakage of individual roots was taken into account using a failure law derived from previous work carried out on fibre metal laminates. Soil mechanical plasticity was considered using the Mohr–Coulomb failure criterion. The mechanical model for roots was implemented in the numerical code ABAQUS using beam elements embedded in a soil block meshed with 3-D solid elements. The model was tested by simulating tree-pulling experiments previously carried out on a tree of Pinus pinaster (maritime pine). Soil mechanical parameters were obtained from laboratory tests. Root system architecture was digitized and imported into ABAQUS while root material properties were estimated from the literature. Key Results Numerical simulations of tree-pulling tests exhibited realistic successive root breakages during uprooting, which could be seen in the resulting response curves. Broken roots could be visually located within the root system at any stage of the simulations. The model allowed estimation of anchorage strength in terms of the critical turning moment and accumulated energy, which were in good agreement with in situ measurements. Conclusions This study provides the first model of tree anchorage strength for P. pinaster derived from the mechanical strength of individual roots. The generic nature of the model permits its further application to other tree species and soil conditions. PMID:25006178
Continuous non-contact vital sign monitoring in neonatal intensive care unit
Guazzi, Alessandro; Jorge, João; Davis, Sara; Watkinson, Peter; Green, Gabrielle; Shenvi, Asha; McCormick, Kenny; Tarassenko, Lionel
2014-01-01
Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal. PMID:26609384
Continuous non-contact vital sign monitoring in neonatal intensive care unit.
Villarroel, Mauricio; Guazzi, Alessandro; Jorge, João; Davis, Sara; Watkinson, Peter; Green, Gabrielle; Shenvi, Asha; McCormick, Kenny; Tarassenko, Lionel
2014-09-01
Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal.
Spectrophotometric observations of symbiotic stars and related objects
NASA Technical Reports Server (NTRS)
Blair, W. P.; Feibelman, W. A.; Michalitsianos, A. G.; Stencel, R. E.
1983-01-01
Calibrated optical spectrophotometric observations of 16 symbiotic and symbiotic-like objects are presented. The objects observed include Z And, T CrB, CH Cyg, CI Cyg, V1016 Cyg, V1329 Cyg, AG Dra, YY Her, RS Oph, XX Oph, AG Peg, AX Per, CL Sco, HM Sge, AS 289, and M1-2. Integrated emission-line intensities are tabulated for comparison with ultraviolet and infrared data, as well as with previous optical studies. The reddening to each of the objects is derived by assuming that Balmer lines are emitted in their case B recombination ratios. However, the values so derived are often systematically higher than reddening estimates from the ultraviolet 2200 A feature. Comparisons with the available data from other wavelength ranges are noted.
Abundant carbon in the mantle beneath Hawai`i
Anderson, Kyle R.; Poland, Michael
2017-01-01
Estimates of carbon concentrations in Earth’s mantle vary over more than an order of magnitude, hindering our ability to understand mantle structure and mineralogy, partial melting, and the carbon cycle. CO2 concentrations in mantle-derived magmas supplying hotspot ocean island volcanoes yield our most direct constraints on mantle carbon, but are extensively modified by degassing during ascent. Here we show that undegassed magmatic and mantle carbon concentrations may be estimated in a Bayesian framework using diverse geologic information at an ocean island volcano. Our CO2 concentration estimates do not rely upon complex degassing models, geochemical tracer elements, assumed magma supply rates, or rare undegassed rock samples. Rather, we couple volcanic CO2 emission rates with probabilistic magma supply rates, which are obtained indirectly from magma storage and eruption rates. We estimate that the CO2 content of mantle-derived magma supplying Hawai‘i’s active volcanoes is 0.97 (+0.25/−0.19) wt%, roughly 40% higher than previously believed, and is supplied from a mantle source region with a carbon concentration of 263 (+81/−62) ppm. Our results suggest that mantle plumes and ocean island basalts are carbon-rich. Our data also shed light on helium isotope abundances, CO2/Nb ratios, and may imply higher CO2 emission rates from ocean island volcanoes.
Modeling heading and path perception from optic flow in the case of independently moving objects
Raudies, Florian; Neumann, Heiko
2013-01-01
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589
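For reference, the translational part of the standard pinhole-camera optic flow field, consistent with the kind of analytical models the abstract refers to, is shown below; the flow vanishes at the focus of expansion, which is why a large IMO covering the FOE can bias heading estimates. Rotation adds depth-independent terms that are omitted here.

```latex
% Translational optic flow for a pinhole camera: f is the focal length,
% Z(x,y) the depth of the scene point, (T_x, T_y, T_z) the observer's translation.
\[
u(x,y) = \frac{-f\,T_x + x\,T_z}{Z(x,y)}, \qquad
v(x,y) = \frac{-f\,T_y + y\,T_z}{Z(x,y)}, \qquad
\left(x_{\mathrm{FOE}},\, y_{\mathrm{FOE}}\right)
  = \left(\frac{f\,T_x}{T_z},\ \frac{f\,T_y}{T_z}\right).
\]
```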
A Fast Fourier transform stochastic analysis of the contaminant transport problem
Deng, F.W.; Cushman, J.H.; Delleur, J.W.
1993-01-01
A three-dimensional stochastic analysis of the contaminant transport problem is developed in the spirit of Naff (1990). The new derivation is more general and simpler than previous analyses. The fast Fourier transform is used extensively to obtain numerical estimates of the mean concentration and various spatial moments. Data from both the Borden and Cape Cod experiments are used to test the methodology. Results are comparable to results obtained by other methods, and to the experiments themselves.
Quantum key distribution with finite resources: Secret key rates via Renyi entropies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abruzzo, Silvestre; Kampermann, Hermann; Mertz, Markus
A realistic quantum key distribution (QKD) protocol necessarily deals with finite resources, such as the number of signals exchanged by the two parties. We derive a bound on the secret key rate which is expressed as an optimization problem over Renyi entropies. Under the assumption of collective attacks by an eavesdropper, a computable estimate of our bound for the six-state protocol is provided. This bound leads to improved key rates in comparison to previous results.
Bounds on Time Reversal Violation From Polarized Neutron Capture With Unpolarized Targets.
Davis, E D; Gould, C R; Mitchell, G E; Sharapov, E I
2005-01-01
We have analyzed constraints on parity-odd time-reversal noninvariant interactions derived from measurements of the energy dependence of parity-violating polarized neutron capture on unpolarized targets. As previous authors found, a perturbation in energy dependence due to a parity (P)-odd time (T)-odd interaction is present. However, the perturbation competes with T-even terms which can obscure the T-odd signature. We estimate the magnitudes of these competing terms and suggest strategies for a practicable experiment.
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of response being awake or asleep over the night and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
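The weighting scheme described here can be summarized in a few lines: the two per-state FIMs are combined with equal or probability-based weights and the design is scored with the D-optimality criterion. The matrices and weights below are hypothetical placeholders, not values from the sleep model.

```python
import numpy as np

def weighted_d_criterion(fim_awake, fim_asleep, w_awake=0.5, w_asleep=0.5):
    """Combine per-state Fisher information matrices into FIM_total with the
    given weights and score the design by log det(FIM_total) (D-optimality)."""
    fim_total = w_awake * np.asarray(fim_awake) + w_asleep * np.asarray(fim_asleep)
    sign, logdet = np.linalg.slogdet(fim_total)
    return fim_total, (logdet if sign > 0 else -np.inf)

# Hypothetical 3-parameter FIMs for the two Markov components.
fim_awake = np.diag([40.0, 10.0, 2.0])
fim_asleep = np.diag([5.0, 30.0, 8.0])
_, score_equal = weighted_d_criterion(fim_awake, fim_asleep, 0.5, 0.5)
_, score_prob = weighted_d_criterion(fim_awake, fim_asleep, 0.4, 0.6)
```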
Analytical study to define a helicopter stability derivative extraction method, volume 1
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1973-01-01
A method is developed for extracting six degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering flight test data with a digital filter followed by an extended Kalman filter, (2) identifying a derivative estimate with a least-squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.
NASA Astrophysics Data System (ADS)
Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara
2015-04-01
Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the available data. In our current work, the efficiency of using biophysical parameters such as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for land cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based and WOFOST-modelled biophysical products is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A., "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.
Prenatal air pollution exposure and ultrasound measures of fetal growth in Los Angeles, California.
Ritz, Beate; Qiu, Jiaheng; Lee, Pei-Chen; Lurmann, Fred; Penfold, Bryan; Erin Weiss, Robert; McConnell, Rob; Arora, Chander; Hobel, Calvin; Wilhelm, Michelle
2014-04-01
Few previous studies examined the impact of prenatal air pollution exposures on fetal development based on ultrasound measures during pregnancy. In a prospective birth cohort of more than 500 women followed during 1993-1996 in Los Angeles, California, we examined how air pollution impacts fetal growth during pregnancy. Exposure to traffic related air pollution was estimated using CALINE4 air dispersion modeling for nitrogen oxides (NOx) and a land use regression (LUR) model for nitrogen monoxide (NO), nitrogen dioxide (NO2) and NOx. Exposures to carbon monoxide (CO), NO2, ozone (O3) and particles <10μm in aerodynamic diameter (PM10) were estimated using government monitoring data. We employed a linear mixed effects model to estimate changes in fetal size at approximately 19, 29 and 37 weeks gestation based on ultrasound. Exposure to traffic-derived air pollution during 29 to 37 weeks was negatively associated with biparietal diameter at 37 weeks gestation. For each interquartile range (IQR) increase in LUR-based estimates of NO, NO2 and NOx, or freeway CALINE4 NOx we estimated a reduction in biparietal diameter of 0.2-0.3mm. For women residing within 5km of a monitoring station, we estimated biparietal diameter reductions of 0.9-1.0mm per IQR increase in CO and NO2. Effect estimates were robust to adjustment for a number of potential confounders. We did not observe consistent patterns for other growth endpoints we examined. Prenatal exposure to traffic-derived pollution was negatively associated with fetal head size measured as biparietal diameter in late pregnancy. Copyright © 2014 Elsevier Inc. All rights reserved.
Prenatal Air Pollution Exposure and Ultrasound Measures of Fetal Growth in Los Angeles, California
Ritz, Beate; Qiu, Jiaheng; Lee, Pei-Chen; Lurmann, Fred; Penfold, Bryan; Weiss, Robert Erin; McConnell, Rob; Arora, Chander; Hobel, Calvin; Wilhelm, Michelle
2014-01-01
Background Few previous studies examined the impact of prenatal air pollution exposures on fetal development based on ultrasound measures during pregnancy. Methods In a prospective birth cohort of more than 500 women followed during 1993-1996 in Los Angeles, California, we examined how air pollution impacts fetal growth during pregnancy. Exposure to traffic related air pollution was estimated using CALINE4 air dispersion modeling for nitrogen oxides (NOx) and a land use regression (LUR) model for nitrogen monoxide (NO), nitrogen dioxide (NO2) and NOx. Exposures to carbon monoxide (CO), NO2, ozone (O3) and particles <10 μm in aerodynamic diameter (PM10) were estimated using government monitoring data. We employed a linear mixed effects model to estimate changes in fetal size at approximately 19, 29 and 37 weeks gestation based on ultrasound. Results Exposure to traffic-derived air pollution during 29 to 37 weeks was negatively associated with biparietal diameter at 37 weeks gestation. For each interquartile range (IQR) increase in LUR-based estimates of NO, NO2 and NOx, or freeway CALINE4 NOx we estimated a reduction in biparietal diameter of 0.2-0.3 mm. For women residing within 5 km of a monitoring station, we estimated biparietal diameter reductions of 0.9-1.0 mm per IQR increase in CO and NO2. Effect estimates were robust to adjustment for a number of potential confounders. We did not observe consistent patterns for other growth endpoints we examined. Conclusions Prenatal exposure to traffic-derived pollution was negatively associated with fetal head size measured as biparietal diameter in late pregnancy. PMID:24517884
NASA Astrophysics Data System (ADS)
Matsuoka, A.; Hooker, S. B.; Bricaud, A.; Gentili, B.; Babin, M.
2013-02-01
A series of papers have suggested that freshwater discharge, including a large amount of dissolved organic matter (DOM), has increased since the middle of the 20th century. In this study, a semi-analytical algorithm for estimating light absorption coefficients of the colored fraction of DOM (CDOM) was developed for southern Beaufort Sea waters using remote sensing reflectance at six wavelengths in the visible spectral domain corresponding to MODIS ocean color sensor. This algorithm allows the separation of colored detrital matter (CDM) into CDOM and non-algal particles (NAP) through the determination of NAP absorption using an empirical relationship between NAP absorption and particle backscattering coefficients. Evaluation using independent datasets, which were not used for developing the algorithm, showed that CDOM absorption can be estimated accurately to within an uncertainty of 35% and 50% for oceanic and coastal waters, respectively. A previous paper (Matsuoka et al., 2012) showed that dissolved organic carbon (DOC) concentrations were tightly correlated with CDOM absorption in our study area (r2 = 0.97). By combining the CDOM absorption algorithm together with the DOC versus CDOM relationship, it is now possible to estimate DOC concentrations in the near-surface layer of the southern Beaufort Sea using satellite ocean color data. DOC concentrations in the surface waters were estimated using MODIS ocean color data, and the estimates showed reasonable values compared to in situ measurements. We propose a routine and near real-time method for deriving DOC concentrations from space, which may open the way to an estimate of DOC budgets for Arctic coastal waters.
Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations
NASA Astrophysics Data System (ADS)
Loseille, A.; Dervieux, A.; Alauzet, F.
2010-04-01
This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimation. The former is very well suited to the control of the interpolation error. It is generally interpreted as a local geometric error estimate. On the contrary, the latter is preferred when studying approximation errors for PDEs. It generally involves non-local error contributions. Consequently, a full and strong coupling between both is hard to achieve due to this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimation is derived. It involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional. Third, rewritten in the continuous mesh framework, the previous estimate is minimized on the set of continuous meshes thanks to a calculus of variations. The optimal continuous mesh is then derived analytically. Thus, it can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic, intrinsically anisotropic, and does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and to demonstrate its efficiency.
Kennedy, Jeffrey R.; Paretti, Nicholas V.
2014-01-01
Flooding in urban areas routinely causes severe damage to property and often results in loss of life. To investigate the effect of urbanization on the magnitude and frequency of flood peaks, a flood frequency analysis was carried out using data from urbanized streamgaging stations in Phoenix and Tucson, Arizona. Flood peaks at each station were predicted using the log-Pearson Type III distribution, fitted using the expected moments algorithm and the multiple Grubbs-Beck low outlier test. The station estimates were then compared to flood peaks estimated by rural-regression equations for Arizona, and to flood peaks adjusted for urbanization using a previously developed procedure for adjusting U.S. Geological Survey rural regression peak discharges in an urban setting. Only smaller, more common flood peaks at the 50-, 20-, 10-, and 4-percent annual exceedance probabilities (AEPs) demonstrate any increase in magnitude as a result of urbanization; the 1-, 0.5-, and 0.2-percent AEP flood estimates are predicted without bias by the rural-regression equations. Percent imperviousness was determined not to account for the difference in estimated flood peaks between stations, either when adjusting the rural-regression equations or when deriving urban-regression equations to predict flood peaks directly from basin characteristics. Comparison with urban adjustment equations indicates that flood peaks are systematically overestimated if the rural-regression-estimated flood peaks are adjusted upward to account for urbanization. At nearly every streamgaging station in the analysis, adjusted rural-regression estimates were greater than the estimates derived using station data. One likely reason for the lack of increase in flood peaks with urbanization is the presence of significant stormwater retention and detention structures within the watershed used in the study.
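For context, the sketch below fits a log-Pearson Type III distribution to annual peak flows by a plain method-of-moments fit on log10 flows and returns quantiles for the listed AEPs. The study itself used the expected moments algorithm with the multiple Grubbs-Beck low-outlier test, which handles censoring and low outliers more rigorously; the peak-flow series here is hypothetical.

```python
import numpy as np
from scipy import stats

def lp3_quantiles(peaks, aeps=(0.5, 0.2, 0.1, 0.04, 0.01, 0.005, 0.002)):
    """Fit a log-Pearson Type III distribution to annual peak flows using
    simple moments of the log10 flows and return flood quantiles for the
    given annual exceedance probabilities (AEPs)."""
    logq = np.log10(np.asarray(peaks, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)
    dist = stats.pearson3(skew, loc=mean, scale=std)
    return {aep: 10.0 ** dist.ppf(1.0 - aep) for aep in aeps}

# Hypothetical annual peak discharges (cubic feet per second).
peaks = [820, 150, 430, 2600, 95, 510, 1200, 330, 760, 1800, 240, 640]
print(lp3_quantiles(peaks))
```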
Kumar, Akhil; Srivastava, Gaurava; Negi, Arvind S; Sharma, Ashok
2018-01-19
BACE-1 and GSK-3β are both potential therapeutic drug targets for Alzheimer's disease. Recently, both of these targets received attention for the design of dual inhibitors. To date, only derivatives of two scaffolds (triazinone and curcumin) have been reported as BACE-1 and GSK-3β dual inhibitors. In our previous work, we reported a first-in-class dual inhibitor of BACE-1 and GSK-3β. In this study, we have explored other naphthofuran derivatives for their potential to inhibit BACE-1 and GSK-3β through docking, molecular dynamics, and binding energy (MM-PBSA) calculations. These computational methods were used to estimate the binding affinity of the naphthofuran derivatives towards BACE-1 and GSK-3β. In the docking results, two derivatives (NS7 and NS9) showed better binding affinity than previously reported inhibitors. Hydrogen bond occupancies of NS7 and NS9 generated from the MD trajectories showed good interaction with the flap residues Gln73 and Thr72 of BACE-1 and residues Arg141 and Thr138 of GSK-3β. MM-PBSA and per-residue energy decomposition revealed the different components of the binding energy and the relative importance of the amino acids involved in binding. The results showed that the binding of the inhibitors was governed mainly by hydrophobic interactions, suggesting that hydrophobic interactions might be the key to designing dual inhibitors for BACE-1 and GSK-3β. Distances between important pairs of amino acid residues indicated that BACE-1 and GSK-3β adopt a closed conformation and become inactive after ligand binding. The results suggest that naphthofuran derivatives might act as dual inhibitors against BACE-1 and GSK-3β.
Booth, D.B.
1986-01-01
An estimate of the sliding velocity and basal meltwater discharge of the Puget lobe of the Cordilleran ice sheet can be calculated from its reconstructed extent, altitude, and mass balance. Lobe dimensions and surface altitudes are inferred from ice limits and flow-direction indicators. Net annual mass balance and total ablation are calculated from relations empirically derived from modern maritime glaciers. An equilibrium-line altitude between 1200 and 1250 m is calculated for the maximum glacial advance (ca. 15,000 yr B.P.) during the Vashon Stade of the Fraser Glaciation. This estimate is in accord with geologic data and is insensitive to plausible variability in the parameters used in the reconstruction. Resultant sliding velocities are as much as 650 m/a at the equilibrium line, decreasing both up- and downglacier. Such velocities for an ice sheet of this size are consistent with nonsurging behavior. Average meltwater discharge increases monotonically downglacier to 3000 m³/s at the terminus and is of a comparable magnitude to ice discharge over much of the glacier's ablation area. Paleoclimatic inferences derived from this reconstruction are consistent with previous, independently derived studies of late Pleistocene temperature and precipitation in the Pacific Northwest. © 1986.
A fresh look into the interacting dark matter scenario
NASA Astrophysics Data System (ADS)
Escudero, Miguel; Lopez-Honorez, Laura; Mena, Olga; Palomares-Ruiz, Sergio; Villanueva-Domingo, Pablo
2018-06-01
The elastic scattering between dark matter particles and radiation represents an attractive possibility to solve a number of discrepancies between observations and standard cold dark matter predictions, as the induced collisional damping would imply a suppression of small-scale structures. We consider this scenario and confront it with measurements of the ionization history of the Universe at several redshifts and with recent estimates of the counts of Milky Way satellite galaxies. We derive a conservative upper bound on the dark matter-photon elastic scattering cross section of σ_γDM < 8 × 10⁻¹⁰ σ_T (m_DM/GeV) at 95% CL, about one order of magnitude tighter than previous constraints from satellite number counts. Due to the strong degeneracies with astrophysical parameters, the bound on the dark matter-photon scattering cross section derived here is driven by the estimate of the number of Milky Way satellite galaxies. Finally, we also argue that future 21 cm probes could help in disentangling among possible non-cold dark matter candidates, such as interacting and warm dark matter scenarios. Let us emphasize that bounds of similar magnitude to the ones obtained here could be also derived for models with dark matter-neutrino interactions and would be as constraining as the tightest limits on such scenarios.
Deriving stellar inclination of slow rotators using stellar activity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumusque, X., E-mail: xdumusque@cfa.harvard.edu
2014-12-01
Stellar inclination is an important parameter for many astrophysical studies. Although different techniques allow us to estimate stellar inclination for fast rotators, it becomes much more difficult when stars are rotating slower than ∼2-2.5 km s⁻¹. By using the new activity simulation SOAP 2.0, which can reproduce the photometric and spectroscopic variations induced by stellar activity, we are able to fit observations of solar-type stars and derive their inclination. For HD 189733, we estimate the stellar inclination to be i = 84 (+6/−20) deg, which implies a star-planet obliquity of ψ = 4 (+18/−4) deg considering previous measurements of the spin-orbit angle. For α Cen B, we derive an inclination of i = 45 (+9/−19) deg, which implies that the rotational spin of the star is not aligned with the orbital spin of the α Cen binary system. In addition, assuming that α Cen Bb is aligned with its host star, no transit would occur. The inclination of α Cen B can be measured using 40 radial-velocity measurements, which is remarkable given that the projected rotational velocity of the star is smaller than 1.15 km s⁻¹.
Support for viral persistence in bats from age-specific serology and models of maternal immunity.
Peel, Alison J; Baker, Kate S; Hayman, David T S; Broder, Christopher C; Cunningham, Andrew A; Fooks, Anthony R; Garnier, Romain; Wood, James L N; Restif, Olivier
2018-03-01
Spatiotemporally-localised prediction of virus emergence from wildlife requires focused studies on the ecology and immunology of reservoir hosts in their native habitat. Reliable predictions from mathematical models remain difficult in most systems due to a dearth of appropriate empirical data. Our goal was to study the circulation and immune dynamics of zoonotic viruses in bat populations and investigate the effects of maternally-derived and acquired immunity on viral persistence. Using rare age-specific serological data from wild-caught Eidolon helvum fruit bats as a case study, we estimated viral transmission parameters for a stochastic infection model. We estimated mean durations of around 6 months for maternally-derived immunity to Lagos bat virus and African henipavirus, whereas acquired immunity was long-lasting (Lagos bat virus: mean 12 years, henipavirus: mean 4 years). In the presence of a seasonal birth pulse, the effect of maternally-derived immunity on virus persistence within modelled bat populations was highly dependent on transmission characteristics. To explain previous reports of viral persistence within small natural and captive E. helvum populations, we hypothesise that some bats must experience prolonged infectious periods or within-host latency. By further elucidating plausible mechanisms of virus persistence in bat populations, we contribute to guidance of future field studies.
NASA Technical Reports Server (NTRS)
Yu, Hongbin; Chin, Mian; Remer, Lorraine A.; Kleidman, Richard G.; Bellouin, Nicolas; Bian, Huisheng; Diehl, Thomas
2009-01-01
In this study, we examine seasonal and geographical variability of marine aerosol fine-mode fraction (f_m) and its impacts on deriving the anthropogenic component of aerosol optical depth (τ_a) and direct radiative forcing from multispectral satellite measurements. A proxy of f_m, empirically derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5 data, shows large seasonal and geographical variations that are consistent with the Goddard Chemistry Aerosol Radiation Transport (GOCART) and Global Modeling Initiative (GMI) model simulations. The so-derived seasonally and spatially varying f_m is then implemented into a method of estimating τ_a and direct radiative forcing from the MODIS measurements. It is found that the use of a constant value for f_m as in previous studies would have overestimated τ_a by about 20% over global ocean, with the overestimation up to 45% in some regions and seasons. The 7-year (2001-2007) global ocean average τ_a is 0.035, with yearly average ranging from 0.031 to 0.039. Future improvement in measurements is needed to better separate anthropogenic aerosol from natural ones and to narrow down the wide range of aerosol direct radiative forcing.
Characterizing the Alpine Fault Strike Slip System Using a Novel Method for Analyzing GPS Data
NASA Astrophysics Data System (ADS)
Haines, A. J.; Dimitrova, L. L.; Wallace, L. M.; Williams, C. A.
2013-12-01
Plate motion across the South Island is dominated by right-lateral strike-slip (38-39 mm/yr total in the direction parallel to the Alpine Fault), with a small convergent component (8-10 mm/yr). The Alpine Fault is the most active fault in the region, taking up 27±5 mm/yr in right-lateral strike-slip and ~10 mm/yr in dip-slip. It fails in large ≥7 Mw earthquakes with a recurrence time of 200-400 years and last ruptured around 1717. A significant component of the plate motion budget must occur on faults other than the Alpine Fault, but this is not fully accounted for in catalogues of known active faults. In the central part of the South Island, low slip rate active faults are not well-expressed due to the rapid erosion of the Southern Alps and deposition of these sediments onto the Canterbury plains; the devastating 2010 Darfield earthquake sequence occurred on such previously unknown faults. We apply a novel inversion technique (Dimitrova et al. 2012, 2013) to dense campaign GPS velocities in the region to solve for the vertical derivatives of horizontal stress (VDoHS) rates, which are a substantially higher-resolution expression of subsurface sources of ongoing deformation than the GPS velocities or GPS-derived strain rates. Integrating the VDoHS rates gives us strain rates. Relationships between the VDoHS and strain rates allow us to calculate the variation in fault slip rate and locking depth for the identified faults; e.g., we estimate along-fault variations in locking depth and slip rate for the Alpine Fault in the South Island, in good agreement with previous estimates, and provide the first estimates of those properties for the smaller, previously uncharacterized faults, which account for as much as 50% of the plate motion depending on location. For the first time, we note that the area between the Alpine Fault and the Main Divide of the Southern Alps is undergoing extensional areal strain, potentially indicative of gravitational collapse of the Southern Alps. The Arthur's Pass section of the Alpine Fault exhibits no shear component in the spatial derivatives of the VDoHS rates, in marked contrast to the Alpine Fault segments just northeast and southwest, suggesting that post-seismic deformation related to the 1994 Arthur's Pass earthquake is masking the signal from the Alpine Fault beneath. We characterize in detail the transfer of slip further north into the Marlborough Fault System, where we find much of the slip on the Alpine Fault passes onto the Kelly and Hope Faults, in accord with previous geological studies.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
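A stripped-down version of the model-comparison step is sketched below: exponential (Poisson-process) and gamma (quasiperiodic) renewal models are fit to inter-event times by maximum likelihood and ranked with AIC. Unlike the study's estimators, this sketch ignores age-dating uncertainty and the open intervals before the first and after the last deposit; the intervals are hypothetical.

```python
import numpy as np
from scipy import stats

def compare_renewal_models(inter_event_times):
    """Fit exponential and gamma renewal models by maximum likelihood and
    rank them with AIC = 2k - 2 ln L (smaller is better)."""
    t = np.asarray(inter_event_times, dtype=float)
    results = {}

    loc, scale = stats.expon.fit(t, floc=0)          # MLE: scale = mean return time
    ll_exp = np.sum(stats.expon.logpdf(t, loc, scale))
    results["exponential"] = {"aic": 2 * 1 - 2 * ll_exp, "mean_return": scale}

    a, loc, scale = stats.gamma.fit(t, floc=0)       # shape a > 1 suggests quasiperiodicity
    ll_gam = np.sum(stats.gamma.logpdf(t, a, loc, scale))
    results["gamma"] = {"aic": 2 * 2 - 2 * ll_gam, "mean_return": a * scale}
    return results

intervals_kyr = [12.0, 35.0, 8.0, 41.0, 19.0, 27.0]   # hypothetical dated intervals
print(compare_renewal_models(intervals_kyr))
```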
Delgado, J; Liao, J C
1992-01-01
The methodology previously developed for determining the Flux Control Coefficients [Delgado & Liao (1992) Biochem. J. 282, 919-927] is extended to the calculation of metabolite Concentration Control Coefficients. It is shown that the transient metabolite concentrations are related by a few algebraic equations, attributed to mass balance, stoichiometric constraints, quasi-equilibrium or quasi-steady states, and kinetic regulations. The coefficients in these relations can be estimated using linear regression, and can be used to calculate the Control Coefficients. The theoretical basis and two examples are discussed. Although the methodology is derived based on the linear approximation of enzyme kinetics, it yields reasonably good estimates of the Control Coefficients for systems with non-linear kinetics. PMID:1497632
A toy model for the yield of a tamped fission bomb
NASA Astrophysics Data System (ADS)
Reed, B. Cameron
2018-02-01
A simple expression is developed for estimating the yield of a tamped fission bomb, that is, a basic nuclear weapon comprising a fissile core jacketed by a surrounding neutron-reflecting tamper. This expression is based on modeling the nuclear chain reaction as a geometric progression in combination with a previously published expression for the threshold-criticality condition for such a core. The derivation is especially straightforward, as it requires no knowledge of diffusion theory and should be accessible to students of both physics and policy. The calculation can be set up as a single page spreadsheet. Application to the Little Boy and Fat Man bombs of World War II gives results in reasonable accord with published yield estimates for these weapons.
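As a generic illustration of the geometric-progression bookkeeping (not Reed's specific tamped-core expression), the sketch below sums fissions over a fixed number of generations at a constant effective multiplication, caps the total at the number of fissile nuclei in the core, and converts the released energy to kilotons. All inputs are made-up round numbers.

```python
MEV_PER_FISSION = 180.0            # approximate prompt energy per fission, MeV
JOULES_PER_MEV = 1.602e-13
JOULES_PER_KT = 4.184e12           # 1 kiloton of TNT equivalent
AVOGADRO = 6.022e23

def toy_yield_kt(k_eff, generations, core_mass_kg, molar_mass_kg=0.235):
    """Total energy (kt) from a chain reaction modeled as a geometric
    progression with effective multiplication k_eff (> 1), capped at the
    number of fissile nuclei available in the core."""
    nuclei = core_mass_kg / molar_mass_kg * AVOGADRO
    fissions = (k_eff ** (generations + 1) - 1.0) / (k_eff - 1.0)
    fissions = min(fissions, nuclei)
    return fissions * MEV_PER_FISSION * JOULES_PER_MEV / JOULES_PER_KT

# Illustrative inputs only; roughly ~17 kt for these made-up values.
print(toy_yield_kt(k_eff=2.0, generations=80, core_mass_kg=6.0))
```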
NASA Technical Reports Server (NTRS)
Carver, Kyle L.; Saulsberry, Regor L.; Nichols, Charles T.; Spencer, Paul R.; Lucero, Ralph E.
2012-01-01
Eddy current testing (ET) was used to scan bare metallic liners used in the fabrication of composite overwrapped pressure vessels (COPVs) for flaws that could result in premature failure of the vessel. The main goal of the project was to make improvements in scan signal-to-noise ratio, sensitivity of flaw detection, and estimation of flaw dimensions. Scan settings were optimized, resulting in an increased signal-to-noise ratio. Previously undiscovered flaw indications were observed and investigated. Threshold criteria were determined for the system software's flaw reporting, and estimates of flaw dimensions were brought to an acceptable level of accuracy. Computer algorithms were written to import data for filtering, and a numerical derivative filtering algorithm was evaluated.
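As a hedged illustration of the kind of numerical derivative filtering mentioned above (the actual algorithms used in the project are not described here), the snippet below smooths a one-dimensional scan trace and then takes a central-difference derivative, which tends to accentuate abrupt flaw-like signal changes against a slowly varying liner background. The scan data are synthetic.

```python
import numpy as np

def derivative_filter(signal, window=5):
    """Smooth a 1-D scan trace with a moving average, then take a
    central-difference derivative to highlight abrupt changes."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(signal, kernel, mode="same")
    return np.gradient(smoothed)

# Synthetic scan line: slowly varying liner response plus a narrow flaw indication.
x = np.linspace(0.0, 1.0, 500)
scan = 0.2 * np.sin(2 * np.pi * x) + 0.05 * np.random.default_rng(0).normal(size=x.size)
scan[240:245] += 0.8  # hypothetical flaw signature

filtered = derivative_filter(scan)
print("Peak derivative response at sample:", int(np.argmax(np.abs(filtered))))
```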
Phase Distribution and Selection of Partially Correlated Persistent Scatterers
NASA Astrophysics Data System (ADS)
Lien, J.; Zebker, H. A.
2012-12-01
Interferometric synthetic aperture radar (InSAR) time-series methods can effectively estimate temporal surface changes induced by geophysical phenomena. However, such methods are susceptible to decorrelation due to spatial and temporal baselines (radar pass separation), changes in orbital geometries, atmosphere, and noise. These effects limit the number of interferograms that can be used for differential analysis and obscure the deformation signal. InSAR decorrelation effects may be ameliorated by exploiting pixels that exhibit phase stability across the stack of interferograms. These so-called persistent scatterer (PS) pixels are dominated by a single point-like scatterer that remains phase-stable over the spatial and temporal baseline. By identifying a network of PS pixels for use in phase unwrapping, reliable deformation measurements may be obtained even in areas of low correlation, where traditional InSAR techniques fail to produce useful observations. Many additional pixels can be added to the PS list if we are able to identify those in which a dominant scatterer exhibits partial, rather than complete, correlation across all radar scenes. In this work, we quantify and exploit the phase stability of partially correlated PS pixels. We present a new system model for producing interferometric pixel values from a complex surface backscatter function characterized by signal-to-clutter ratio (SCR). From this model, we derive the joint probabilistic distribution for PS pixel phases in a stack of interferograms as a function of SCR and spatial baselines. This PS phase distribution generalizes previous results that assume the clutter phase contribution is uncorrelated between radar passes. We verify the analytic distribution through a series of radar scattering simulations. We use the derived joint PS phase distribution with maximum-likelihood SCR estimation to analyze an area of the Hayward Fault Zone in the San Francisco Bay Area. We obtain a series of 38 interferometric images of the area from C-band ERS radar satellite passes between May 1995 and December 2000. We compare the estimated SCRs to those calculated with previously derived PS phase distributions. Finally, we examine the PS network density resulting from varying selection thresholds of SCR and compare to other PS identification techniques.
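The following sketch (not the authors' code) simulates the interferometric phase of a point-dominated pixel for a given signal-to-clutter ratio, assuming for simplicity that the clutter is circular complex Gaussian and uncorrelated between passes. The empirical phase histogram narrows as SCR increases, which is the behavior the derived distributions quantify analytically.

```python
import numpy as np

def simulate_ps_phases(scr, n_passes=38, n_trials=20000, seed=1):
    """Simulate interferometric phases of a PS-like pixel.
    scr: signal-to-clutter power ratio of the dominant point scatterer.
    Clutter is circular complex Gaussian, independent between passes."""
    rng = np.random.default_rng(seed)
    signal = np.sqrt(scr)  # deterministic point scatterer, unit clutter power
    clutter = (rng.normal(size=(n_trials, n_passes)) +
               1j * rng.normal(size=(n_trials, n_passes))) / np.sqrt(2.0)
    pixel = signal + clutter
    # Interferometric phase relative to the first pass.
    interferograms = pixel[:, 1:] * np.conj(pixel[:, :1])
    return np.angle(interferograms)

for scr in (0.5, 2.0, 10.0):
    phases = simulate_ps_phases(scr)
    print(f"SCR = {scr:5.1f}  phase standard deviation = {phases.std():.3f} rad")
```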
NASA Astrophysics Data System (ADS)
Tai, Amos P. K.; Val Martin, Maria
2017-11-01
Ozone air pollution and climate change pose major threats to global crop production, with ramifications for future food security. Previous studies of ozone and warming impacts on crops typically do not account for the strong ozone-temperature correlation when interpreting crop-ozone or crop-temperature relationships, or the spatial variability of crop sensitivity to ozone arising from varietal and environmental differences, leading to potential biases in their estimated crop losses. Here we develop an empirical model, called the partial derivative-linear regression (PDLR) model, to estimate the spatial variations in the sensitivities of wheat, maize and soybean yields to ozone exposures and temperature extremes in the US and Europe using a composite of multidecadal datasets, fully correcting for ozone-temperature covariation. We find generally larger and more spatially varying sensitivities of all three crops to ozone exposures than are implied by experimentally derived concentration-response functions used in most previous studies. Stronger ozone tolerance is found in regions with high ozone levels and high consumptive crop water use, reflecting the existence of spatial adaptation and the effect of water constraints. The spatially varying sensitivities to temperature extremes also indicate stronger heat tolerance in crops grown in warmer regions. The spatial adaptation of crops to ozone and temperature we find can serve as a surrogate for future adaptation. Using the PDLR-derived sensitivities and 2000-2050 ozone and temperature projections by the Community Earth System Model, we estimate that future warming and unmitigated ozone pollution can combine to cause average declines in US wheat, maize and soybean production of 13%, 43% and 28%, respectively, and smaller declines for European crops. Aggressive ozone regulation is shown to offset such declines to various extents, especially for wheat. Our findings demonstrate the importance of considering ozone regulation as well as ozone and climate change adaptation (e.g., selecting heat- and ozone-tolerant cultivars, irrigation) as possible strategies to enhance future food security in response to imminent environmental threats.
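The core statistical idea can be illustrated with a minimal sketch: regressing yield anomalies on ozone and temperature jointly yields partial sensitivities that are corrected for ozone-temperature covariation, whereas a single-variable regression conflates the two. The data and coefficients below are synthetic and purely illustrative; the actual PDLR model includes additional terms not shown here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Synthetic, correlated ozone and temperature anomalies (ozone rises with heat).
temp = rng.normal(0.0, 1.0, n)
ozone = 0.7 * temp + rng.normal(0.0, 0.7, n)

# "True" partial sensitivities used to generate synthetic yield anomalies.
beta_ozone, beta_temp = -0.5, -1.0
yield_anom = beta_ozone * ozone + beta_temp * temp + rng.normal(0.0, 0.5, n)

# Joint regression: partial derivatives with respect to ozone and temperature.
X = np.column_stack([ozone, temp, np.ones(n)])
coef_joint, *_ = np.linalg.lstsq(X, yield_anom, rcond=None)

# Naive single-variable regression on ozone alone (biased by covariation).
X1 = np.column_stack([ozone, np.ones(n)])
coef_naive, *_ = np.linalg.lstsq(X1, yield_anom, rcond=None)

print("joint fit (ozone, temp):", np.round(coef_joint[:2], 2))
print("ozone-only fit (biased):", np.round(coef_naive[0], 2))
```

The ozone-only coefficient absorbs part of the temperature effect because the two predictors covary, which is exactly the bias the joint fit removes.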
AN EMPIRICAL CALIBRATION TO ESTIMATE COOL DWARF FUNDAMENTAL PARAMETERS FROM H-BAND SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newton, Elisabeth R.; Charbonneau, David; Irwin, Jonathan
Interferometric radius measurements provide a direct probe of the fundamental parameters of M dwarfs. However, interferometry is within reach for only a limited sample of nearby, bright stars. We use interferometrically measured radii, bolometric luminosities, and effective temperatures to develop new empirical calibrations based on low-resolution, near-infrared spectra. We find that H-band Mg and Al spectral features are good tracers of stellar properties, and derive functions that relate effective temperature, radius, and log luminosity to these features. The standard deviations in the residuals of our best fits are, respectively, 73 K, 0.027 R⊙, and 0.049 dex (an 11% error on luminosity). Our calibrations are valid from mid K to mid M dwarf stars, roughly corresponding to temperatures between 3100 and 4800 K. We apply our H-band relationships to M dwarfs targeted by the MEarth transiting planet survey and to the cool Kepler Objects of Interest (KOIs). We present spectral measurements and estimated stellar parameters for these stars. Parallaxes are also available for many of the MEarth targets, allowing us to independently validate our calibrations by demonstrating a clear relationship between our inferred parameters and the stars' absolute K magnitudes. We identify objects with magnitudes that are too bright for their inferred luminosities as candidate multiple systems. We also use our estimated luminosities to address the applicability of near-infrared metallicity calibrations to mid and late M dwarfs. The temperatures we infer for the KOIs agree remarkably well with those from the literature; however, our stellar radii are systematically larger than those presented in previous works that derive radii from model isochrones. This results in a mean planet radius that is 15% larger than one would infer using the stellar properties from recent catalogs. Our results confirm the derived parameters from previous in-depth studies of KOIs 961 (Kepler-42), 254 (Kepler-45), and 571 (Kepler-186), the latter of which hosts a rocky planet orbiting in its star's habitable zone.
New radar-derived topography for the northern hemisphere of Mars
NASA Technical Reports Server (NTRS)
Downs, G. S.; Thompson, T. W.; Mouginis-Mark, P. J.; Zisk, S. H.
1982-01-01
Earth-based radar altimetry data for the northern equatorial belt of Mars (6 deg S-23 deg N) have recently been reduced to a common basis corresponding to the 6.1-mbar reference surface. A first look at these data indicates that the elevations of Tharsis, Elysium, and Lunae Planum are lower (by 2-5 km) than has been suggested by previous estimates. These differences show that the required amount of tectonic uplift (or constructional volcanism) for each area is less than has been previously envisioned. Atmospheric or surficial conditions are suggested which may explain the discrepancies between the radar topography and elevations measured by other techniques. The topographies of Chryse Planitia, Syrtis Major, and Valles Marineris are also described.
NASA Astrophysics Data System (ADS)
Prud'homme, Genevieve; Dobbin, Nina A.; Sun, Liu; Burnett, Richard T.; Martin, Randall V.; Davidson, Andrew; Cakmak, Sabit; Villeneuve, Paul J.; Lamsal, Lok N.; van Donkelaar, Aaron; Peters, Paul A.; Johnson, Markey
2013-12-01
Satellite remote sensing (RS) has emerged as a cutting edge approach for estimating ground level ambient air pollution. Previous studies have reported a high correlation between ground level PM2.5 and NO2 estimated by RS and measurements collected at regulatory monitoring sites. The current study examined associations between air pollution and adverse respiratory and allergic health outcomes using multi-year averages of NO2 and PM2.5 from RS and from regulatory monitoring. RS estimates were derived using satellite measurements from OMI, MODIS, and MISR instruments. Regulatory monitoring data were obtained from Canada's National Air Pollution Surveillance Network. Self-reported prevalence of doctor-diagnosed asthma, current asthma, allergies, and chronic bronchitis were obtained from the Canadian Community Health Survey (a national sample of individuals 12 years of age and older). Multi-year ambient pollutant averages were assigned to each study participant based on their six digit postal code at the time of health survey, and were used as a marker for long-term exposure to air pollution. RS derived estimates of NO2 and PM2.5 were associated with 6-10% increases in respiratory and allergic health outcomes per interquartile range (3.97 μg m-3 for PM2.5 and 1.03 ppb for NO2) among adults (aged 20-64) in the national study population. Risk estimates for air pollution and respiratory/allergic health outcomes based on RS were similar to risk estimates based on regulatory monitoring for areas where regulatory monitoring data were available (within 40 km of a regulatory monitoring station). RS derived estimates of air pollution were also associated with adverse health outcomes among participants residing outside the catchment area of the regulatory monitoring network (p < 0.05). The consistency between risk estimates based on RS and regulatory monitoring as well as the associations between air pollution and health among participants living outside the catchment area for regulatory monitoring suggest that RS can provide useful estimates of long-term ambient air pollution in epidemiologic studies. This is particularly important in rural communities and other areas where monitoring and modeled air pollution data are limited or unavailable.
Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.
2015-01-01
A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101
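The three-compartment structure described above can be written as a small system of first-order differential equations. The sketch below uses entirely made-up rate constants for illustration; the calibrated parameter values estimated from the miner data sets are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/day); illustrative only.
k_clear = 0.002         # clearance from the alveolar region
k_interstitial = 0.001  # translocation from alveolar to interstitial region
k_lymph = 0.00005       # slow, irreversible sequestration to lymph nodes
deposition = 1.0        # constant deposition into the alveolar region (mg/day)

def model(t, y):
    alveolar, interstitial, lymph = y
    d_alv = deposition - (k_clear + k_interstitial) * alveolar
    d_int = k_interstitial * alveolar - k_lymph * interstitial
    d_lym = k_lymph * interstitial
    return [d_alv, d_int, d_lym]

t_span = (0.0, 365.0 * 30)  # a 30-year working lifetime
sol = solve_ivp(model, t_span, [0.0, 0.0, 0.0])
alv, inter, lymph = sol.y[:, -1]
print(f"After 30 years: alveolar={alv:.0f}, interstitial={inter:.0f}, "
      f"lymph nodes={lymph:.1f} (mg, hypothetical units)")
```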
Hilley, George E; Porder, Stephen
2008-11-04
Global silicate weathering drives long-time-scale fluctuations in atmospheric CO2. While tectonics, climate, and rock-type influence silicate weathering, it is unclear how these factors combine to drive global rates. Here, we explore whether local erosion rates, GCM-derived dust fluxes, temperature, and water balance can capture global variation in silicate weathering. Our spatially explicit approach predicts 1.9-4.6 × 10^13 mol of Si weathered globally per year, within a factor of 4-10 of estimates of global silicate fluxes derived from riverine measurements. Similarly, our watershed-based estimates are within a factor of 4-18 (mean of 5.3) of the silica fluxes measured in the world's ten largest rivers. Eighty percent of total global silicate weathering product traveling as dissolved load occurs within a narrow range (0.01-0.5 mm/year) of erosion rates. Assuming each mol of Mg or Ca reacts with 1 mol of CO2, 1.5-3.3 × 10^8 tons/year of CO2 is consumed by silicate weathering, consistent with previously published estimates. Approximately 50% of this drawdown occurs in the world's active mountain belts, emphasizing the importance of tectonic regulation of global climate over geologic timescales.
Sheppard, Sean C; Hickling, Edward J; Earleywine, Mitch; Hoyt, Tim; Russo, Amanda R; Donati, Matthew R; Kip, Kevin E
2015-11-01
Stigma associated with disclosing military sexual trauma (MST) makes estimating an accurate base rate difficult. Anonymous assessment may help alleviate stigma. Although anonymous research has found higher rates of male MST, no study has evaluated whether providing anonymity sufficiently mitigates the impact of stigma on accurate reporting. This study used the unmatched count technique (UCT), a form of randomized response technique, to gain information about the accuracy of base rate estimates of male MST derived via anonymous assessment of Operation Enduring Freedom (OEF)/Operation Iraqi Freedom (OIF) combat veterans. A cross-sectional convenience sample of 180 OEF/OIF male combat veterans, recruited via online websites for military populations, provided data about history of MST via traditional anonymous self-report and the UCT. The UCT revealed a rate of male MST more than 15 times higher than the rate derived via traditional anonymous assessment (17.2% vs. 1.1%). These data suggest that anonymity does not adequately mitigate the impact of stigma on disclosure of male MST. Results, though preliminary, suggest that published rates of male MST may substantially underestimate the true rate of this problem. The UCT has significant potential to improve base rate estimation of sensitive behaviors in the military.
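For readers unfamiliar with the unmatched count technique, the sketch below shows the basic estimator: respondents are randomized to a control list of innocuous items or to the same list plus the sensitive item, report only how many items apply to them, and the base rate is estimated as the difference in mean counts between groups. The numbers are invented and do not reproduce the study data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical responses: counts of "items that apply to you".
control_counts = rng.binomial(4, 0.5, size=90)            # 4 innocuous items
true_rate = 0.17                                           # illustrative sensitive-item rate
treatment_counts = (rng.binomial(4, 0.5, size=90) +
                    rng.binomial(1, true_rate, size=90))   # 4 innocuous + sensitive item

# UCT estimator: difference in mean counts between the two groups.
estimated_rate = treatment_counts.mean() - control_counts.mean()
print(f"Estimated base rate of the sensitive behavior: {estimated_rate:.3f}")
```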
Weghorst, Jennifer A
2007-04-01
The main objective of this study was to estimate the population density and demographic structure of spider monkeys living in wet forest in the vicinity of Sirena Biological Station, Corcovado National Park, Costa Rica. Results of a 14-month line-transect survey showed that spider monkeys of Sirena have one of the highest population densities ever recorded for this genus. Density estimates varied, however, depending on the method chosen to estimate transect width. Data from behavioral monitoring were available to compare density estimates derived from the survey, providing a check of the survey's accuracy. A combination of factors has most probably contributed to the high density of Ateles, including habitat protection within a national park and high diversity of trees of the fig family, Moraceae. Although natural densities of spider monkeys at Sirena are substantially higher than those recorded at most other sites and in previous studies at this site, mean subgroup size and age ratios were similar to those determined in previous studies. Sex ratios were similar to those of other sites with high productivity. Although high densities of preferred fruit trees in the wet, productive forests of Sirena may support a dense population of spider monkeys, other demographic traits recorded at Sirena fall well within the range of values recorded elsewhere for the species.
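As a hedged aside on why the choice of transect width matters, the standard line-transect density estimator divides the number of individuals detected by the area effectively surveyed, so halving the assumed effective strip half-width doubles the density estimate. The values below are invented and are not those of the Sirena survey.

```python
# Simple line-transect density estimate: D = n / (2 * w * L)
n_detected = 120          # individuals detected along the transect (hypothetical)
transect_length_km = 50.0
for half_width_km in (0.015, 0.025, 0.040):  # alternative effective half-widths
    density = n_detected / (2.0 * half_width_km * transect_length_km)
    print(f"half-width {half_width_km * 1000:.0f} m -> {density:.0f} individuals/km^2")
```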
Heuveline, P
1998-03-01
Estimates of mortality in Cambodia during the Khmer Rouge regime (1975-79) range from 20,000 deaths according to former Khmer Rouge sources, to over three million victims according to Vietnamese government sources. This paper uses an unusual data source - the 1992 electoral lists registered by the United Nations - to estimate the population size after the Khmer Rouge regime and the extent of "excess" mortality in the 1970s. These data also provide the first breakdown of population by single year of age, which allows analysis of the age structure of "excess" mortality and inference of the relative importance of violence as a cause of death in that period. The estimates derived here are more comparable with the higher estimates made in the past. In addition, the analysis of likely causes of death that could have generated the age pattern of "excess" mortality clearly shows a larger contribution of direct or violent mortality than has been previously recognized.
Estimating a child's age from an image using whole body proportions.
Lucas, Teghan; Henneberg, Maciej
2017-09-01
The use and distribution of child pornography is an increasing problem. Forensic anthropologists are often asked to estimate a child's age from a photograph. Previous studies have attempted to estimate the age of children from photographs using ratios of the face. Here, we propose to include body measurement ratios into age estimates. A total of 1603 boys and 1833 girls aged 5-16 years were measured over a 10-year period. They are 'Cape Coloured' children from South Africa. Their age was regressed on ratios derived from anthropometric measurements of the head as well as the body. Multiple regression equations including four ratios for each sex (head height to shoulder and hip width, knee width, leg length and trunk length) have a standard error of 1.6-1.7 years. The error is of the same order as variation of differences between biological and chronological ages of the children. Thus, the error cannot be minimised any further as it is a direct reflection of a naturally occurring phenomenon.
Polar bears in the Beaufort Sea: A 30-year mark-recapture case history
Amstrup, Steven C.; McDonald, T.L.; Stirling, I.
2001-01-01
Knowledge of population size and trend is necessary to manage anthropogenic risks to polar bears (Ursus maritimus). Despite capturing over 1,025 females between 1967 and 1998, previously calculated estimates of the size of the southern Beaufort Sea (SBS) population have been unreliable. We improved estimates of numbers of polar bears by modeling heterogeneity in capture probability with covariates. Important covariates referred to the year of the study, age of the bear, capture effort, and geographic location. Our choice of best approximating model was based on the inverse relationship between variance in parameter estimates and likelihood of the fit and suggested a growth from ≈ 500 to over 1,000 females during this study. The mean coefficient of variation on estimates for the last decade of the study was 0.16—the smallest yet derived. A similar model selection approach is recommended for other projects where a best model is not identified by likelihood criteria alone.
Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon
2010-10-01
An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
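A minimal sketch of the intensity-based DOA estimator described above: the time-averaged product of pressure and particle velocity gives an intensity vector whose direction encodes the arrival azimuth. Here a noisy plane wave is simulated in two dimensions and the azimuth is recovered with atan2; the reverberation-induced bias that is the subject of the paper is not modeled, and all signal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f0, n = 16000, 500.0, 4096
t = np.arange(n) / fs
azimuth_true = np.deg2rad(40.0)

# Plane wave: pressure and co-phased particle velocity components, plus noise.
s = np.cos(2 * np.pi * f0 * t)
p = s + 0.1 * rng.normal(size=n)
vx = np.cos(azimuth_true) * s + 0.1 * rng.normal(size=n)
vy = np.sin(azimuth_true) * s + 0.1 * rng.normal(size=n)

# Time-averaged acoustic intensity vector and DOA estimate.
Ix, Iy = np.mean(p * vx), np.mean(p * vy)
azimuth_est = np.arctan2(Iy, Ix)
print(f"true azimuth {np.rad2deg(azimuth_true):.1f} deg, "
      f"estimated {np.rad2deg(azimuth_est):.1f} deg")
```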
The use of resighting data to estimate the rate of population growth of the snail kite in Florida
Dreitz, V.J.; Nichols, J.D.; Hines, J.E.; Bennetts, R.E.; Kitchens, W.M.; DeAngelis, D.L.
2002-01-01
The rate of population growth (lambda) is an important demographic parameter used to assess the viability of a population and to develop management and conservation agendas. We examined the use of resighting data to estimate lambda for the snail kite population in Florida from 1997-2000. The analyses consisted of (1) a robust design approach that derives an estimate of lambda from estimates of population size and (2) the Pradel (1996) temporal symmetry (TSM) approach that directly estimates lambda using an open-population capture-recapture model. Besides resighting data, both approaches required information on the number of unmarked individuals that were sighted during the sampling periods. The point estimates of lambda differed between the robust design and TSM approaches, but the 95% confidence intervals overlapped substantially. We believe the differences may be the result of sparse data and do not indicate the inappropriateness of either modelling technique. We focused on the results of the robust design because this approach provided estimates for all study years. Variation among these estimates was smaller than levels of variation among ad hoc estimates based on previously reported index statistics. We recommend that lambda of snail kites be estimated using capture-resighting methods rather than ad hoc counts.
Experimental Determination of the Permeability in the Lacunar-Canalicular Porosity of Bone
Gailani, Gaffar; Benalla, Mohammed; Mahamud, Rashal; Cowin, Stephen C.; Cardoso, Luis
2010-01-01
Permeability of the mineralized bone tissue is a critical element in understanding fluid flow occurring in the lacunar-canalicular porosity (PLC) compartment of bone and its role in bone nutrition and mechanotransduction. However, the estimation of bone permeability at the tissue level is affected by the influence of the vascular porosity (PV) in macroscopic samples containing several osteons. In this communication, both analytical and experimental approaches are proposed to estimate the lacunar-canalicular permeability in a single osteon. Data from an experimental stress-relaxation test in a single osteon is used to derive the PLC permeability by curve fitting to theoretical results from a compressible transverse isotropic poroelastic model of a porous annular disk under a ramp loading history (Cowin and Mehrabadi 2007; Gailani and Cowin 2008). The PLC tissue intrinsic permeability in the radial direction of the osteon was found to be dependent on the strain rate used and within the range of O(10^-24) to O(10^-25). The reported values of PLC permeability are in reasonable agreement with previously reported values derived using FEA and nanoindentation approaches. PMID:19831477
Organic carbon burial in global lakes and reservoirs
Mendonça, Raquel; Müller, Roger A.; Clow, David W.; Verpoorter, Charles; Raymond, Peter; Tranvik, Lars; Sobek, Sebastian
2017-01-01
Burial in sediments removes organic carbon (OC) from the short-term biosphere-atmosphere carbon (C) cycle, and therefore prevents greenhouse gas production in natural systems. Although OC burial in lakes and reservoirs is faster than in the ocean, the magnitude of inland water OC burial is not well constrained. Here we generate the first global-scale and regionally resolved estimate of modern OC burial in lakes and reservoirs, based on a comprehensive compilation of literature data. We coupled statistical models to inland water area inventories to estimate a yearly OC burial of 0.15 (range, 0.06–0.25) Pg C, of which ~40% is stored in reservoirs. Relatively higher OC burial rates are predicted for warm and dry regions. While we report lower burial than previously estimated, lake and reservoir OC burial corresponded to ~20% of their C emissions, making them an important C sink that is likely to increase with eutrophication and river damming.
V2676 Oph: Estimating Physical Parameters of a Moderately Fast Nova
NASA Astrophysics Data System (ADS)
Raj, A.; Pavana, M.; Kamath, U. S.; Anupama, G. C.; Walter, F. M.
2018-03-01
Using our previously reported observations, we derive some physical parameters of the moderately fast nova V2676 Oph 2012 #1. The best-fit Cloudy model of the nebular spectrum obtained on 2015 May 8 shows a hot white dwarf source with TBB ≈ 1.0×10^5 K having a luminosity of 1.0×10^38 erg/s. Our abundance analysis shows that the ejecta are significantly enhanced relative to solar, He/H=2.14, O/H=2.37, S/H=6.62 and Ar/H=3.25. The ejecta mass is estimated to be 1.42×10^-5 M⊙. The nova showed a pronounced dust formation phase after 90 d from discovery. The J-H and H-K colors were very large as compared to other molecule- and dust-forming novae in recent years. The dust temperature and mass at two epochs have been estimated from spectral energy distribution fits to infrared photometry.
James, Eric P.; Benjamin, Stanley G.; Marquis, Melinda
2016-10-28
A new gridded dataset for wind and solar resource estimation over the contiguous United States has been derived from hourly updated 1-h forecasts from the National Oceanic and Atmospheric Administration High-Resolution Rapid Refresh (HRRR) 3-km model composited over a three-year period (approximately 22 000 forecast model runs). The unique dataset features hourly data assimilation, and provides physically consistent wind and solar estimates for the renewable energy industry. The wind resource dataset shows strong similarity to that previously provided by a Department of Energy-funded study, and it includes estimates in southern Canada and northern Mexico. The solar resource dataset represents an initial step towards application-specific fields such as global horizontal and direct normal irradiance. This combined dataset will continue to be augmented with new forecast data from the advanced HRRR atmospheric/land-surface model.
Formation of moon induced gaps in dense planetary rings
NASA Astrophysics Data System (ADS)
Grätz, F.; Seiß, M.; Spahn, F.
2017-09-01
Recent works have shown that bodies embedded in planetary rings create S-shaped density modulations called propellers if their mass falls below a certain threshold, or cause a gap around the entire circumference of the disc if the embedded body's mass exceeds it. Two counteracting physical processes govern the dynamics and determine what structure is created: the gravitational disturber exerts a torque on nearby disc particles, sweeping them away from itself on both sides, thus depleting the disc's density and forming a gap. Diffusive spreading of the disc material due to collisions counteracts the gravitational scattering and tends to fill the gap. We develop a nonlinear diffusion model that accounts for these two counteracting processes and describes the azimuthally averaged surface density profile an embedded moon creates in planetary rings. The gap's width depends on the moon's mass, its radial position, and the ring's viscosity, allowing us to estimate the ring's viscosity in the vicinity of the Encke and Keeler gaps in Saturn's A ring and compare it to previous measurements. We show that for the Keeler gap the time derivative of the semi-major axis as derived by Goldreich and Tremaine (1980) is underestimated, yielding an underestimated viscosity for the ring. We therefore derive a corrected expression for this time derivative by fitting the solutions of Hill's equations for an ensemble of test particles. Furthermore, we estimate the masses of potentially unseen moonlets in the C ring and the Cassini division.
NASA Astrophysics Data System (ADS)
Lasslop, G.; Reichstein, M.; Papale, D.; Richardson, A. D.
2009-12-01
The FLUXNET database provides measurements of the net ecosystem exchange (NEE) of carbon across vegetation types and climate regions. To simplify the interpretation in terms of processes, the net exchange is frequently split into its two main components: gross primary production (GPP) and ecosystem respiration (Reco). A strong relation between these two fluxes derived from eddy covariance data has been found across temporal scales and is to be expected, as variation in recent photosynthesis is known to be correlated with root respiration; plants use energy from photosynthesis to drive their metabolism. At long time scales, substrate availability (constrained by past productivity) limits whole-ecosystem respiration. Previous studies exploring this relationship relied on GPP and Reco estimates derived from the same data, which may lead to spurious correlation that must not be interpreted ecologically. In this study we use two estimates derived from disjunct datasets, one based on daytime data and the other on nighttime data, and explore the reliability and robustness of this relationship. We find a distinct relationship between the two, varying between vegetation types but also across temporal and spatial scales. We also infer that the spatial and temporal variability of net ecosystem exchange is driven by GPP in many cases. Exceptions to this rule include, for example, disturbed sites. We advocate that for model calibration and evaluation not only the fluxes themselves but also robust patterns between fluxes that can be extracted from the database, for instance between the flux components, should be considered.
Poppenga, Sandra K.; Palaseanu-Lovejoy, Monica; Gesch, Dean B.; Danielson, Jeffrey J.; Tyler, Dean J.
2018-04-16
Satellite-derived near-shore bathymetry (SDB) is becoming an increasingly important method for assessing vulnerability to climate change and natural hazards in low-lying atolls of the northern tropical Pacific Ocean. Satellite imagery has become a cost-effective means for mapping near-shore bathymetry because ships cannot collect soundings safely while operating close to the shore. Also, green laser light detection and ranging (lidar) acquisitions are expensive in remote locations. Previous research has demonstrated that spectral band ratio-based techniques, commonly called the natural logarithm approach, may lead to more precise measurements and modeling of bathymetry because of the phenomenon that different substrates at the same depth have approximately equal ratio values. The goal of this research was to apply the band ratio technique to Landsat 8 at-sensor radiance imagery and WorldView-3 atmospherically corrected imagery in the coastal waters surrounding the Majuro Atoll, Republic of the Marshall Islands, to derive near-shore bathymetry that could be incorporated into a seamless topobathymetric digital elevation model of Majuro. Attenuation of light within the water column was characterized by measuring at-sensor radiance and reflectance at different depths and calculating an attenuation coefficient. Bathymetric lidar data, collected by the U.S. Naval Oceanographic Office in 2006, were used to calibrate the SDB results. The bathymetric lidar yielded a strong linear relation with water depths. The Landsat 8 SDB estimates derived from the blue/green band ratio exhibited a water attenuation extinction depth of 6 meters with a coefficient of determination R² = 0.9324; estimates derived from the coastal/red band ratio had an R² = 0.9597. At the same extinction depth, SDB estimates derived from WorldView-3 imagery exhibited an R² = 0.9574. Because highly dynamic coastal shorelines can be affected by erosion, wetland loss, hurricanes, sea-level rise, urban development, and population growth, consistent bathymetric data are needed to better understand sensitive coastal land/water interfaces in areas subject to coastal disasters.
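A sketch of the log band-ratio idea under synthetic data: the ratio of log-transformed radiances in two bands varies approximately linearly with depth, so a linear fit against lidar depths yields a calibrated depth predictor. The radiances, attenuation coefficients, and regression coefficients here are invented and are not the Majuro calibration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic calibration set: lidar depths (m) and two band radiances with
# depth-dependent attenuation (the "blue" band attenuates less than "green").
depth_lidar = rng.uniform(0.5, 6.0, 200)
blue = 800.0 * np.exp(-0.08 * depth_lidar) * rng.lognormal(0.0, 0.02, 200)
green = 700.0 * np.exp(-0.17 * depth_lidar) * rng.lognormal(0.0, 0.02, 200)

# Band-ratio predictor (natural-log approach), calibrated against lidar depths.
ratio = np.log(blue) / np.log(green)
A = np.column_stack([ratio, np.ones_like(ratio)])
(m1, m0), *_ = np.linalg.lstsq(A, depth_lidar, rcond=None)
depth_sdb = m1 * ratio + m0

r2 = 1.0 - (np.sum((depth_lidar - depth_sdb) ** 2) /
            np.sum((depth_lidar - depth_lidar.mean()) ** 2))
print(f"calibration: depth ≈ {m1:.2f} * ln(blue)/ln(green) + {m0:.2f},  R² = {r2:.3f}")
```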
Spatial Distribution of Io's Neutral Oxygen Cloud Observed by Hisaki
NASA Astrophysics Data System (ADS)
Koga, Ryoichi; Tsuchiya, Fuminori; Kagitani, Masato; Sakanoi, Takeshi; Yoneda, Mizuki; Yoshioka, Kazuo; Yoshikawa, Ichiro; Kimura, Tomoki; Murakami, Go; Yamazaki, Atsushi; Smith, H. Todd; Bagenal, Fran
2018-05-01
We report on the spatial distribution of a neutral oxygen cloud surrounding Jupiter's moon Io and along Io's orbit observed by the Hisaki satellite. Atomic oxygen and sulfur in Io's atmosphere escape from the exosphere mainly through atmospheric sputtering. Some of the neutral atoms escape from Io's gravitational sphere and form neutral clouds around Jupiter. The extreme ultraviolet spectrograph called EXCEED (Extreme Ultraviolet Spectroscope for Exospheric Dynamics) installed on the Japan Aerospace Exploration Agency's Hisaki satellite observed the Io plasma torus continuously in 2014-2015, and we derived the spatial distribution of atomic oxygen emissions at 130.4 nm. The results show that Io's oxygen cloud is composed of two regions, namely, a dense region near Io and a diffuse region with a longitudinally homogeneous distribution along Io's orbit. The dense region mainly extends on the leading side of Io and inside of Io's orbit. The emissions spread out to 7.6 Jupiter radii (RJ). Based on Hisaki observations, we estimated the radial distribution of the atomic oxygen number density and oxygen ion source rate. The peak atomic oxygen number density is 80 cm-3, which is spread 1.2 RJ in the north-south direction. We found more oxygen atoms inside Io's orbit than a previous study. We estimated the total oxygen ion source rate to be 410 kg/s, which is consistent with the value derived from a previous study that used a physical chemistry model based on Hisaki observations of ultraviolet emission ions in the Io plasma torus.
Dielectric properties of Asteroid Vesta's surface as constrained by Dawn VIR observations
NASA Astrophysics Data System (ADS)
Palmer, Elizabeth M.; Heggy, Essam; Capria, Maria T.; Tosi, Federico
2015-12-01
Earth- and orbital-based radar observations of asteroids provide a unique opportunity to characterize surface roughness and the dielectric properties of their surfaces, as well as potentially explore some of their shallow subsurface physical properties. If the dielectric and topographic properties of asteroid surfaces are defined, one can constrain their surface textural characteristics as well as potential subsurface volatile enrichment using the observed radar backscatter. To achieve this objective, we establish the first dielectric model of asteroid Vesta for the case of a dry, volatile-poor regolith, employing an analogy to the dielectric properties of lunar soil, adjusted for the surface densities and temperatures deduced from Dawn's Visible and InfraRed mapping spectrometer (VIR). Our model suggests that the real part of the dielectric constant at the surface of Vesta is relatively constant, ranging from 2.3 to 2.5 from the night side to the day side of Vesta, while the loss tangent shows slight variation as a function of diurnal temperature, ranging from 6 × 10^-3 to 8 × 10^-3. We estimate the surface porosity to be ∼55% in the upper meter of the regolith, as derived from VIR observations. This is ∼12% higher than previous estimates of porosity derived from Earth-based X- and S-band radar observations. We suggest that the radar backscattering properties of asteroid Vesta will be mainly driven by changes in surface roughness rather than by potential dielectric variations in the upper regolith in the X- and S-bands.
NASA Technical Reports Server (NTRS)
Blanchard, R. C.; Walberg, G. D.
1980-01-01
Results of an investigation to determine the full-scale drag coefficient in the high-speed, low-density regime of the Viking lander capsule 1 entry vehicle are presented. The principal flight data used in the study were from onboard pressure, mass spectrometer, and accelerometer instrumentation. The hypersonic continuum flow drag coefficient was unambiguously obtained from pressure and accelerometer data; the free molecule flow drag coefficient was indirectly estimated from accelerometer and mass spectrometer data; the slip flow drag coefficient variation was obtained from an appropriate scaling of existing experimental sphere data. Comparison of the flight-derived drag coefficients with ground test data showed good agreement in the hypersonic continuum flow regime, except for Reynolds numbers from 1000 to 100,000, for which an unaccountable difference of about 8% existed between flight and ground test data. The flight-derived drag coefficients in the free molecule flow regime were considerably larger than those previously calculated with classical theory. The general character of the previously determined temperature profile was not changed appreciably by the results of this investigation; however, a slightly more symmetrical temperature variation at the highest altitudes was obtained.
Hamaker constants of iron oxide nanoparticles.
Faure, Bertrand; Salazar-Alvarez, German; Bergström, Lennart
2011-07-19
The Hamaker constants for iron oxide nanoparticles in various media have been calculated using Lifshitz theory. Expressions for the dielectric responses of three iron oxide phases (magnetite, maghemite, and hematite) were derived from recently published optical data. The nonretarded Hamaker constants for the iron oxide nanoparticles interacting across water, A(1w1) = 33 - 39 zJ, correlate relatively well with previous reports, whereas the calculated values in nonpolar solvents (hexane and toluene), A(131) = 9 - 29 zJ, are much lower than the previous estimates, particularly for magnetite. The magnitude of van der Waals interactions varies significantly between the studied phases (magnetite < maghemite < hematite), which highlights the importance of a thorough characterization of the particles. The contribution of magnetic dispersion interactions for particle sizes in the superparamagnetic regime was found to be negligible. Previous conjectures related to colloidal stability and self-assembly have been revisited on the basis of the new Lifshitz values of the Hamaker constants.
NASA Technical Reports Server (NTRS)
Tomaine, R. L.
1976-01-01
Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.
Soybean Crop Area Estimation and Mapping in Mato Grosso State, Brazil
NASA Astrophysics Data System (ADS)
Gusso, A.; Ducati, J. R.
2012-07-01
Evaluation of the MODIS Crop Detection Algorithm (MCDA) procedure for estimating historical planted soybean crop areas was done on fields in Mato Grosso State, Brazil. MCDA is based on temporal profiles of EVI (Enhanced Vegetation Index) derived from satellite data of the MODIS (Moderate Resolution Imaging Spectroradiometer) imager, and was previously developed for soybean area estimation in Rio Grande do Sul State, Brazil. According to the MCDA approach, in Mato Grosso soybean area estimates can be provided in December (1st forecast), using images from the sowing period, and in February (2nd forecast), using images from sowing and maximum crop development period. The results obtained by the MCDA were compared with Brazilian Institute of Geography and Statistics (IBGE) official estimates of soybean area at municipal level. Coefficients of determination were between 0.93 and 0.98, indicating a good agreement, and also the suitability of MCDA to estimations performed in Mato Grosso State. On average, the MCDA results explained 96% of the variation of the data estimated by the IBGE. In this way, MCDA calibration was able to provide annual thematic soybean maps, forecasting the planted area in the State, with results which are comparable to the official agricultural statistics.
NASA Astrophysics Data System (ADS)
Bai, Heming; Gong, Cheng; Wang, Minghuai; Zhang, Zhibo; L'Ecuyer, Tristan
2018-02-01
Precipitation susceptibility to aerosol perturbation plays a key role in understanding aerosol-cloud interactions and constraining aerosol indirect effects. However, large discrepancies exist in the previous satellite estimates of precipitation susceptibility. In this paper, multi-sensor aerosol and cloud products, including those from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO), CloudSat, Moderate Resolution Imaging Spectroradiometer (MODIS), and Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) from June 2006 to April 2011 are analyzed to estimate precipitation frequency susceptibility S_POP, precipitation intensity susceptibility S_I, and precipitation rate susceptibility S_R in warm marine clouds. We find that S_POP strongly depends on atmospheric stability, with larger values under more stable environments. Our results show that precipitation susceptibility for drizzle (with a -15 dBZ rainfall threshold) is significantly different than that for rain (with a 0 dBZ rainfall threshold). Onset of drizzle is not as readily suppressed in warm clouds as rainfall, while precipitation intensity susceptibility is generally smaller for rain than for drizzle. We find that S_POP derived with respect to aerosol index (AI) is about one-third of S_POP derived with respect to cloud droplet number concentration (CDNC). Overall, S_POP demonstrates relatively robust features throughout independent liquid water path (LWP) products and diverse rain products. In contrast, the behaviors of S_I and S_R are sensitive to the LWP or rain products used to derive them. Recommendations are further made for how to better use these metrics to quantify aerosol-cloud-precipitation interactions in observations and models.
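As a hedged illustration of how such susceptibility metrics are commonly computed (following the widely used definition S = -d ln X / d ln N, not necessarily this paper's exact estimator), the sketch below recovers a susceptibility as the negative regression slope of a log-transformed precipitation metric against log CDNC, using synthetic data.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Synthetic cloud droplet number concentrations (cm^-3) and a precipitation
# metric that weakens with increasing CDNC (power law with scatter).
cdnc = rng.lognormal(mean=np.log(80.0), sigma=0.5, size=n)
true_susceptibility = 0.6
precip = 2.0 * cdnc ** (-true_susceptibility) * rng.lognormal(0.0, 0.3, size=n)

# Susceptibility S = -d ln(precip) / d ln(CDNC), estimated as a regression slope.
X = np.column_stack([np.log(cdnc), np.ones(n)])
slope, _ = np.linalg.lstsq(X, np.log(precip), rcond=None)[0]
print(f"estimated susceptibility: {-slope:.2f} (true value used: {true_susceptibility})")
```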
NASA Technical Reports Server (NTRS)
Haering, E. A., Jr.; Burcham, F. W., Jr.
1984-01-01
A simulation study was conducted to optimize minimum time and fuel consumption paths for an F-15 airplane powered by two F100 Engine Model Derivative (EMD) engines. The benefits of using variable stall margin (uptrim) to increase performance were also determined. This study supports the NASA Highly Integrated Digital Electronic Control (HIDEC) program. The basis for this comparison was minimum time and fuel used to reach Mach 2 at 13,716 m (45,000 ft) from the initial conditions of Mach 0.15 at 1524 m (5000 ft). Results were also compared to a pilot's estimated minimum time and fuel trajectory determined from the F-15 flight manual and previous experience. The minimum time trajectory took 15 percent less time than the pilot's estimate for the standard EMD engines, while the minimum fuel trajectory used 1 percent less fuel than the pilot's estimate for the minimum fuel trajectory. The F-15 airplane, with EMD engines and uptrim, was 23 percent faster than the pilot's estimate. The minimum fuel used was 5 percent less than the estimate.
Asteroid mass estimation with Markov-chain Monte Carlo
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2017-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem at minimum where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid by fitting their trajectories to their observed positions. The fitting has typically been carried out with linearized methods such as the least-squares method. These methods need to make certain assumptions regarding the shape of the probability distributions of the model parameters. This is problematic as these assumptions have not been validated. We have developed a new Markov-chain Monte Carlo method for mass estimation which does not require an assumption regarding the shape of the parameter distribution. Recently, we have implemented several upgrades to our MCMC method including improved schemes for handling observational errors and outlier data alongside the option to consider multiple perturbers and/or test asteroids simultaneously. These upgrades promise significantly improved results: based on two separate results for (19) Fortuna with different test asteroids we previously hypothesized that simultaneous use of both test asteroids would lead to an improved result similar to the average literature value for (19) Fortuna with substantially reduced uncertainties. Our upgraded algorithm indeed finds a result essentially equal to the literature value for this asteroid, confirming our previous hypothesis. Here we show these new results for (19) Fortuna and other example cases, and compare our results to previous estimates. Finally, we discuss our plans to improve our algorithm further, particularly in connection with Gaia.
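To illustrate the flavor of the MCMC approach without reproducing the full 13-dimensional orbit-fitting problem, the toy sketch below samples a single "mass" parameter with a random-walk Metropolis algorithm given a Gaussian likelihood of simulated observed-minus-computed residuals. Everything here (forward model, data, noise level) is invented for illustration and has no connection to the actual dynamical model.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy forward model: predicted perturbation signal grows linearly with mass.
true_mass = 8.0e-12                  # solar masses, illustrative only
times = np.linspace(0.0, 10.0, 25)

def predict(m):
    return 3.0e10 * m * times        # arbitrary linear response (arcsec)

sigma = 0.05
obs = predict(true_mass) + rng.normal(0.0, sigma, times.size)

def log_likelihood(m):
    return -0.5 * np.sum(((obs - predict(m)) / sigma) ** 2)

# Random-walk Metropolis sampling of the mass (flat prior over positive masses).
n_steps, step = 20000, 1.0e-12
chain = np.empty(n_steps)
m, logl = 5.0e-12, log_likelihood(5.0e-12)
for i in range(n_steps):
    prop = m + step * rng.normal()
    if prop > 0:
        logl_prop = log_likelihood(prop)
        if np.log(rng.random()) < logl_prop - logl:
            m, logl = prop, logl_prop
    chain[i] = m

burned = chain[5000:]
print(f"posterior mass: {burned.mean():.2e} +/- {burned.std():.1e} M_sun (toy example)")
```

The posterior is characterized directly from the retained samples, which is the sense in which the MCMC approach avoids assuming a particular shape for the parameter distribution.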
Hohenlohe, Paul A.; Day, Mitch D.; Amish, Stephen J.; Miller, Michael R.; Kamps-Hughes, Nick; Boyer, Matthew C.; Muhlfeld, Clint C.; Allendorf, Fred W.; Johnson, Eric A.; Luikart, Gordon
2013-01-01
Rapid and inexpensive methods for genomewide single nucleotide polymorphism (SNP) discovery and genotyping are urgently needed for population management and conservation. In hybridized populations, genomic techniques that can identify and genotype thousands of species-diagnostic markers would allow precise estimates of population- and individual-level admixture as well as identification of 'super invasive' alleles, which show elevated rates of introgression above the genomewide background (likely due to natural selection). Techniques like restriction-site-associated DNA (RAD) sequencing can discover and genotype large numbers of SNPs, but they have been limited by the length of continuous sequence data they produce with Illumina short-read sequencing. We present a novel approach, overlapping paired-end RAD sequencing, to generate RAD contigs of >300–400 bp. These contigs provide sufficient flanking sequence for design of high-throughput SNP genotyping arrays and strict filtering to identify duplicate paralogous loci. We applied this approach in five populations of native westslope cutthroat trout that previously showed varying (low) levels of admixture from introduced rainbow trout (RBT). We produced 77 141 RAD contigs and used these data to filter and genotype 3180 previously identified species-diagnostic SNP loci. Our population-level and individual-level estimates of admixture were generally consistent with previous microsatellite-based estimates from the same individuals. However, we observed slightly lower admixture estimates from genomewide markers, which might result from natural selection against certain genome regions, different genomic locations for microsatellites vs. RAD-derived SNPs and/or sampling error from the small number of microsatellite loci (n = 7). We also identified candidate adaptive super invasive alleles from RBT that had excessively high admixture proportions in hybridized cutthroat trout populations.
NASA Astrophysics Data System (ADS)
Bansal, Dipanshu; Aref, Amjad; Dargush, Gary; Delaire, Olivier
2016-09-01
Based on thermodynamic principles, we derive expressions quantifying the non-harmonic vibrational behavior of materials, which are rigorous yet easily evaluated from experimentally available data for the thermal expansion coefficient and the phonon density of states. These experimentally-derived quantities are valuable to benchmark first-principles theoretical predictions of harmonic and non-harmonic thermal behaviors using perturbation theory, ab initio molecular-dynamics, or Monte-Carlo simulations. We illustrate this analysis by computing the harmonic, dilational, and anharmonic contributions to the entropy, internal energy, and free energy of elemental aluminum and the ordered compound FeSi over a wide range of temperature. Results agree well with previous data in the literature and provide an efficient approach to estimate anharmonic effects in materials.
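As a concrete, hedged example of evaluating one such quantity from a phonon density of states, the snippet below computes the standard harmonic vibrational entropy per atom, S_h(T) = 3 kB ∫ g(ω) [(n+1) ln(n+1) − n ln n] dω with n the Bose-Einstein occupation, for a toy Debye-like DOS. The DOS is synthetic, not the measured aluminum or FeSi spectra, and the dilational and anharmonic terms derived in the paper are not shown.

```python
import numpy as np

kB = 8.617333262e-5      # Boltzmann constant, eV/K
hbar = 6.582119569e-16   # reduced Planck constant, eV*s

# Toy Debye-like phonon DOS, g(w) ~ w^2 up to a cutoff, normalized to unity.
w_max = 2.0 * np.pi * 9.0e12               # cutoff angular frequency (rad/s), invented
w = np.linspace(1.0e11, w_max, 4000)
dw = w[1] - w[0]
g = w ** 2
g /= np.sum(g) * dw                        # normalize so that integral of g(w) dw = 1

def harmonic_entropy(T):
    """Harmonic vibrational entropy per atom, in units of kB (3 modes per atom)."""
    n = 1.0 / np.expm1(hbar * w / (kB * T))  # Bose-Einstein occupation
    integrand = g * ((n + 1.0) * np.log1p(n) - n * np.log(n))
    return 3.0 * np.sum(integrand) * dw

for T in (100.0, 300.0, 900.0):
    print(f"T = {T:5.0f} K   S_harm ≈ {harmonic_entropy(T):.2f} kB per atom")
```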
Validation of engineering methods for predicting hypersonic vehicle controls forces and moments
NASA Technical Reports Server (NTRS)
Maughmer, M.; Straussfogel, D.; Long, L.; Ozoroski, L.
1991-01-01
This work examines the ability of the aerodynamic analysis methods contained in an industry standard conceptual design code, the Aerodynamic Preliminary Analysis System (APAS II), to estimate the forces and moments generated through control surface deflections from low subsonic to high hypersonic speeds. Predicted control forces and moments generated by various control effectors are compared with previously published wind-tunnel and flight-test data for three vehicles: the North American X-15, a hypersonic research airplane concept, and the Space Shuttle Orbiter. Qualitative summaries of the results are given for each force and moment coefficient and each control derivative in the various speed ranges. Results show that all predictions of longitudinal stability and control derivatives are acceptable for use at the conceptual design stage.
NASA Technical Reports Server (NTRS)
Vonderhaar, T. H.; Stephens, G. L.; Campbell, G. G.
1980-01-01
The annually and seasonally averaged Earth atmosphere radiation budgets derived from the most complete set of satellite observations available are presented. The budgets were derived from a composite of 48 monthly mean radiation budget maps. Annually and seasonally averaged radiation budgets are presented as global averages and zonal averages. The geographic distribution of the various radiation budget quantities is described. The annual cycle of the radiation budget was analyzed, and the annual variability of net flux was shown to be largely dominated by the regular semiannual and annual cycles forced by external Earth-Sun geometry variations. Radiative transfer calculations were compared to the observed budget quantities, and surface budgets were additionally computed with particular emphasis on discrepancies that exist between the present computations and previous surface budget estimates.
Integrating animal movement with habitat suitability for estimating dynamic landscape connectivity
van Toor, Mariëlle L.; Kranstauber, Bart; Newman, Scott H.; Prosser, Diann J.; Takekawa, John Y.; Technitis, Georgios; Weibel, Robert; Wikelski, Martin; Safi, Kamran
2018-01-01
Context: High-resolution animal movement data are becoming increasingly available, yet having a multitude of empirical trajectories alone does not allow us to easily predict animal movement. To answer ecological and evolutionary questions at a population level, quantitative estimates of a species’ potential to link patches or populations are of importance. Objectives: We introduce an approach that combines movement-informed simulated trajectories with an environment-informed estimate of the trajectories’ plausibility to derive connectivity. Using the example of bar-headed geese we estimated migratory connectivity at a landscape level throughout the annual cycle in their native range. Methods: We used tracking data of bar-headed geese to develop a multi-state movement model and to estimate temporally explicit habitat suitability within the species’ range. We simulated migratory movements between range fragments, and calculated a measure we called route viability. The results are compared to expectations derived from published literature. Results: Simulated migrations matched empirical trajectories in key characteristics such as stopover duration. The viability of the simulated trajectories was similar to that of the empirical trajectories. We found that, overall, the migratory connectivity was higher within the breeding than in wintering areas, corroborating previous findings for this species. Conclusions: We show how empirical tracking data and environmental information can be fused for meaningful predictions of animal movements throughout the year and even outside the spatial range of the available data. Beyond predicting migratory connectivity, our framework will prove useful for modelling ecological processes facilitated by animal movement, such as seed dispersal or disease ecology.
Estimating Velocities of Glaciers Using Sentinel-1 SAR Imagery
NASA Astrophysics Data System (ADS)
Gens, R.; Arnoult, K., Jr.; Friedl, P.; Vijay, S.; Braun, M.; Meyer, F. J.; Gracheva, V.; Hogenson, K.
2017-12-01
In an international collaborative effort, software has been developed to estimate the velocities of glaciers by using Sentinel-1 Synthetic Aperture Radar (SAR) imagery. The technique, initially designed by the University of Erlangen-Nuremberg (FAU), has been previously used to quantify spatial and temporal variabilities in the velocities of surging glaciers in the Pakistan Karakoram. The software estimates surface velocities by first co-registering image pairs to sub-pixel precision and then by estimating local offsets based on cross-correlation. The Alaska Satellite Facility (ASF) at the University of Alaska Fairbanks (UAF) has modified the software to make it more robust and also capable of migration into the Amazon Cloud. Additionally, ASF has implemented a prototype that offers the glacier tracking processing flow as a subscription service as part of its Hybrid Pluggable Processing Pipeline (HyP3). Since the software is co-located with ASF's cloud-based Sentinel-1 archive, processing of large data volumes is now more efficient and cost effective. Velocity maps are estimated from Single Look Complex (SLC) SAR image pairs and a digital elevation model (DEM) of the local topography. A time series of these velocity maps then allows the long-term monitoring of these glaciers. Due to the all-weather capabilities and the dense coverage of Sentinel-1 data, the results are complementary to optically generated ones. Together with the products from the Global Land Ice Velocity Extraction project (GoLIVE) derived from Landsat 8 data, glacier speeds can be monitored more comprehensively. Examples from Sentinel-1 SAR-derived results are presented along with optical results for the same glaciers.
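A hedged sketch of the cross-correlation offset-tracking core (not the actual FAU/ASF implementation): an image patch from the first acquisition is cross-correlated with the corresponding patch from the second, and the correlation peak gives the local displacement in pixels. Sub-pixel refinement, co-registration, and geocoding are omitted, and the images are synthetic.

```python
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(2)

# Synthetic "amplitude" reference patch and a copy shifted by a known offset.
ref = rng.normal(size=(64, 64)) + 5.0 * np.exp(
    -((np.arange(64)[:, None] - 30) ** 2 + (np.arange(64)[None, :] - 25) ** 2) / 40.0)
true_shift = (3, -2)                      # (rows, cols) displacement in pixels
sec = np.roll(ref, true_shift, axis=(0, 1)) + 0.2 * rng.normal(size=ref.shape)

# Cross-correlate zero-mean patches; the correlation peak gives the offset.
xcorr = correlate(sec - sec.mean(), ref - ref.mean(), mode="full")
peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
offset = (peak[0] - (ref.shape[0] - 1), peak[1] - (ref.shape[1] - 1))
print(f"true shift {true_shift}, estimated offset {offset}")
```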
NASA Astrophysics Data System (ADS)
Izett, Jonathan G.; Fennel, Katja
2018-02-01
Rivers deliver large amounts of fresh water, nutrients, and other terrestrially derived materials to the coastal ocean. Where inputs accumulate on the shelf, harmful effects such as hypoxia and eutrophication can result. In contrast, where export to the open ocean is efficient, riverine inputs contribute to global biogeochemical budgets. Assessing the fate of riverine inputs is difficult on a global scale. Global ocean models are generally too coarse to resolve the relatively small-scale features of river plumes. High-resolution regional models have been developed for individual river plume systems, but it is impractical to apply this approach globally to all rivers. Recently, generalized parameterizations have been proposed to estimate the export of riverine fresh water to the open ocean (Izett & Fennel, 2018, https://doi.org/10.1002/2017GB005667; Sharples et al., 2017, https://doi.org/10.1002/2016GB005483). Here the relationships of Izett and Fennel (2018, https://doi.org/10.1002/2017GB005667) are used to derive global estimates of open-ocean export of fresh water and dissolved inorganic silicate, dissolved organic carbon, and dissolved organic and inorganic phosphorus and nitrogen. We estimate that only 15-53% of riverine fresh water reaches the open ocean directly in river plumes; nutrient export is even less efficient because of processing on continental shelves. Due to geographic differences in riverine nutrient delivery, dissolved silicate is the most efficiently exported to the open ocean (7-56.7%), while dissolved inorganic nitrogen is the least efficiently exported (2.8-44.3%). These results are consistent with previous estimates and provide a simple way to parameterize export to the open ocean in global models.
Villela, D A M; Bastos, L S; DE Carvalho, L M; Cruz, O G; Gomes, M F C; Durovni, B; Lemos, M C; Saraceni, V; Coelho, F C; Codeço, C T
2017-06-01
Zika virus infection was declared a public health emergency of international concern in February 2016 in response to the outbreak in Brazil and its suspected link with congenital anomalies. In this study, we use notification data and disease natural history parameters to estimate the basic reproduction number (R0) of Zika in Rio de Janeiro, Brazil. We also obtain estimates of the R0 of dengue from time series of dengue cases in the outbreaks registered in 2002 and 2012 in the city, when the DENV-3 and DENV-4 serotypes, respectively, had just emerged. Our estimates of the basic reproduction number for Zika in Rio de Janeiro based on surveillance notifications (R0 = 2·33, 95% CI: 1·97-2·97) were higher than those obtained for dengue in the city (year 2002: R0 = 1·70 [1·50-2·02]; year 2012: R0 = 1·25 [1·18-1·36]). Given the role of Aedes aegypti as vector of both the Zika and dengue viruses, we also derive the R0 of Zika as a function of the dengue reproduction number and of entomological and epidemiological parameters for dengue and Zika. Using the dengue outbreaks from previous years allowed us to estimate the potential R0 of Zika. These estimates were in close agreement with our first estimate of Zika's R0 from notification data. Hence, these results validate deriving the potential risk of Zika transmission in areas with recurring dengue outbreaks. Whether transmission routes other than vector-based transmission can sustain a Zika epidemic still deserves attention, but our results suggest that the Zika outbreak in Rio de Janeiro emerged due to population susceptibility and the ubiquitous presence of Ae. aegypti.
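The abstract does not give the estimator used for R0, so the sketch below illustrates only one common, simplified way such estimates are obtained from notification data: fit the early exponential growth rate to case counts and map it to R0 through an assumed SEIR-type generation interval. The weekly counts and the latent and infectious periods are hypothetical and are not the Rio de Janeiro data or the authors' vector-borne formulation.

```python
import numpy as np

# Hypothetical weekly case notifications during the early, roughly
# exponential phase of an outbreak (illustrative numbers only).
weeks = np.arange(8)
cases = np.array([12, 18, 29, 44, 70, 110, 168, 260])

# Growth rate r (per week) from a log-linear fit, converted to per day.
r_week = np.polyfit(weeks, np.log(cases), 1)[0]
r_day = r_week / 7.0

# Simplified host-only SEIR approximation: R0 ~ (1 + r*TE) * (1 + r*TI),
# with assumed latent (TE) and infectious (TI) periods in days.
TE, TI = 5.9, 5.0   # assumed values, not from the paper
R0 = (1 + r_day * TE) * (1 + r_day * TI)
print(f"growth rate = {r_day:.3f}/day, R0 estimate = {R0:.2f}")
```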
Effects of time-shifted data on flight determined stability and control derivatives
NASA Technical Reports Server (NTRS)
Steers, S. T.; Iliff, K. W.
1975-01-01
Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.
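A minimal numerical sketch of the effect studied in the report: a first-order response is simulated, the recorded control input is shifted by a few samples, and the model parameters are re-estimated. Ordinary least squares stands in for the maximum likelihood estimator, and the system, input shape, and shift values are assumptions; the point is only that the degradation grows consistently with the shift.

```python
import numpy as np

# Simulate a first-order response x' = a*x + b*u to a doublet-like input, then
# re-estimate (a, b) by least squares after shifting the recorded control by a
# few samples. This illustrates the consistent (non-random) bias introduced by
# time skew between the control record and the response record.
dt, n = 0.02, 500
a_true, b_true = -1.5, 2.0
u = np.zeros(n); u[50:150] = 1.0; u[150:250] = -1.0
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

def fit(shift):
    us = np.roll(u, shift)                    # time-shifted control record
    xdot = np.gradient(x, dt)                 # numerical derivative of the response
    A = np.column_stack([x, us])
    a_hat, b_hat = np.linalg.lstsq(A, xdot, rcond=None)[0]
    return a_hat, b_hat

for shift in (0, 2, 5, 10):                   # shift in samples (0 to 0.2 s)
    print(shift, fit(shift))                  # estimates drift steadily from (-1.5, 2.0)
```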
Smith, Eric G.
2015-01-01
Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions: To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
Dae-Kwan Kim; Daniel M. Spotts; Donald F. Holecek
1998-01-01
This paper compares estimates of pleasure trip volume and expenditures derived from a regional telephone survey to those derived from the TravelScope mail panel survey. Significantly different estimates emerged, suggesting that survey-based estimates of pleasure trip volume and expenditures, at least in the case of the two surveys examined, appear to be affected by...
Evaluation of earthquake potential in China
NASA Astrophysics Data System (ADS)
Rong, Yufang
I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative " a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the Special catalog, and assume local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By assuming the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.
Lyons, Ronan A.; Kendrick, Denise; Towner, Elizabeth M.; Christie, Nicola; Macey, Steven; Coupland, Carol; Gabbe, Belinda J.
2011-01-01
Background Current methods of measuring the population burden of injuries rely on many assumptions and limited data available to the global burden of diseases (GBD) studies. The aim of this study was to compare the population burden of injuries using different approaches from the UK Burden of Injury (UKBOI) and GBD studies. Methods and Findings The UKBOI was a prospective cohort of 1,517 injured individuals that collected patient-reported outcomes. Extrapolated outcome data were combined with multiple sources of morbidity and mortality data to derive population metrics of the burden of injury in the UK. Participants were injured patients recruited from hospitals in four UK cities and towns: Swansea, Nottingham, Bristol, and Guildford, between September 2005 and April 2007. Patient-reported changes in quality of life using the EQ-5D at baseline, 1, 4, and 12 months after injury provided disability weights used to calculate the years lived with disability (YLDs) component of disability adjusted life years (DALYs). DALYs were calculated for the UK and extrapolated to global estimates using both UKBOI and GBD disability weights. Estimated numbers (and rates per 100,000) for UK population extrapolations were 750,999 (1,240) for hospital admissions, 7,982,947 (13,339) for emergency department (ED) attendances, and 22,185 (36.8) for injury-related deaths in 2005. Nonadmitted ED-treated injuries accounted for 67% of YLDs. Estimates for UK DALYs amounted to 1,771,486 (82% due to YLDs), compared with 669,822 (52% due to YLDs) using the GBD approach. Extrapolating patient-derived disability weights to GBD estimates would increase injury-related DALYs 2.6-fold. Conclusions The use of disability weights derived from patient experiences combined with additional morbidity data on ED-treated patients and inpatients suggests that the absolute burden of injury is higher than previously estimated. These findings have substantial implications for improving measurement of the national and global burden of injury. Please see later in the article for the Editors' Summary PMID:22162954
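The burden calculation described above combines years of life lost (YLL) with years lived with disability (YLD) weighted by patient-derived disability weights. The sketch below shows that arithmetic with made-up numbers; the counts, weights and durations are placeholders, not the UKBOI inputs.

```python
# Schematic DALY arithmetic: DALYs = YLL + YLD, with YLD weighted by
# patient-derived disability weights. All numbers below are hypothetical.
def dalys(deaths, mean_yrs_life_lost, cases_by_severity):
    yll = deaths * mean_yrs_life_lost
    yld = sum(n * dw * duration_yrs for n, dw, duration_yrs in cases_by_severity)
    return yll + yld, yll, yld

# (cases, disability weight, average duration in years) per injury group
groups = [(750_000, 0.20, 0.5),     # admitted injuries
          (8_000_000, 0.05, 0.1)]   # ED-treated, non-admitted injuries
total, yll, yld = dalys(deaths=22_000, mean_yrs_life_lost=30, cases_by_severity=groups)
print(total, yll, yld, f"YLD share = {yld / total:.0%}")
```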
Validation of ET maps derived from MODIS imagery
NASA Astrophysics Data System (ADS)
Hong, S.; Hendrickx, J. M.; Borchers, B.
2005-12-01
In previous work we have used the New Mexico Tech implementation of the Surface Energy Balance Algorithm for Land (SEBAL-NMT) for the generation of ET maps from LandSat imagery. Comparison of these SEBAL ET estimates versus ET ground measurements using eddy covariance showed satisfactory agreement between the two methods in the heterogeneous arid landscape of the Middle Rio Grande Basin. The objective of this study is to validate SEBAL ET estimates obtained from MODIS imagery. The use of MODIS imagery is attractive since MODIS images are available at a much higher frequency than LandSat images at no cost to the user. MODIS images have a pixel size in the thermal band of 1000x1000 m, which is much coarser than the 60x60 m pixel size of LandSat 7. This large pixel size precludes the use of eddy covariance measurements for validation of ET maps derived from MODIS imagery since the eddy covariance measurement is not representative of a 1000x1000 m MODIS pixel. In our experience, a typical footprint of an ET rate measured by eddy covariance on a clear day in New Mexico around 11 am is less than ten thousand square meters, or two orders of magnitude smaller than a MODIS thermal pixel. Therefore, we have validated ET maps derived from MODIS imagery by comparison with up-scaled ET maps derived from LandSat imagery. The results of our study demonstrate: (1) There is good agreement between ET maps derived from LandSat and MODIS images; (2) Up-scaling of LandSat ET maps over the Middle Rio Grande Basin produces ET maps that are very similar to ET maps directly derived from MODIS images; (3) ET maps derived from free MODIS imagery using SEBAL-NMT can provide reliable regional ET information for water resource managers.
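The up-scaling step used for validation can be sketched as simple block averaging of the fine-resolution ET map onto coarser cells. The code below assumes an integer aggregation factor and a synthetic ET field; the real Landsat-to-MODIS resampling (roughly 60 m to 1000 m) and the SEBAL-NMT processing are not reproduced here.

```python
import numpy as np

def block_mean(et_fine, factor):
    """Aggregate a fine-resolution ET map to a coarser grid by block averaging.
    For simplicity the scale factor is assumed to be an integer divisor of the
    array shape; the true Landsat-to-MODIS ratio is not an integer."""
    r, c = et_fine.shape
    r2, c2 = (r // factor) * factor, (c // factor) * factor
    trimmed = et_fine[:r2, :c2]
    return trimmed.reshape(r2 // factor, factor, c2 // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(1)
et_landsat = 5.0 + rng.normal(0, 0.8, size=(600, 600))   # mm/day, synthetic map
et_upscaled = block_mean(et_landsat, factor=15)          # coarse "MODIS-like" cells
print(et_landsat.mean(), et_upscaled.mean(), et_upscaled.shape)
```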
Lam, Phoebe J.; Lohan, Maeve C.; Kwon, Eun Young; Hatje, Vanessa; Shiller, Alan M.; Cutter, Gregory A.; Thomas, Alex; Milne, Angela; Thomas, Helmuth; Andersson, Per S.; Porcelli, Don; Tanaka, Takahiro; Geibert, Walter; Dehairs, Frank; Garcia-Orellana, Jordi
2016-01-01
Continental shelves and shelf seas play a central role in the global carbon cycle. However, their importance with respect to trace element and isotope (TEI) inputs to ocean basins is less well understood. Here, we present major findings on shelf TEI biogeochemistry from the GEOTRACES programme as well as a proof of concept for a new method to estimate shelf TEI fluxes. The case studies focus on advances in our understanding of TEI cycling in the Arctic, transformations within a major river estuary (Amazon), shelf sediment micronutrient fluxes and basin-scale estimates of submarine groundwater discharge. The proposed shelf flux tracer is 228-radium (T1/2 = 5.75 yr), which is continuously supplied to the shelf from coastal aquifers, sediment porewater exchange and rivers. Model-derived shelf 228Ra fluxes are combined with TEI/ 228Ra ratios to quantify ocean TEI fluxes from the western North Atlantic margin. The results from this new approach agree well with previous estimates for shelf Co, Fe, Mn and Zn inputs and exceed published estimates of atmospheric deposition by factors of approximately 3–23. Lastly, recommendations are made for additional GEOTRACES process studies and coastal margin-focused section cruises that will help refine the model and provide better insight on the mechanisms driving shelf-derived TEI fluxes to the ocean. This article is part of the themed issue ‘Biological and climatic impacts of ocean trace element chemistry’. PMID:29035267
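The flux calculation at the core of the proposed method is a multiplication of a model-derived 228Ra flux by an observed TEI/228Ra ratio. The short sketch below shows that arithmetic with placeholder values; neither number is a GEOTRACES result.

```python
# Schematic version of the flux calculation described above:
# shelf-to-ocean TEI flux = (228Ra flux) x (TEI / 228Ra ratio in shelf water).
ra228_flux = 2.0e23        # atoms of 228Ra per year exported from the margin (assumed)
tei_to_ra228 = 5.0e-15     # mol TEI per atom 228Ra in shelf-influenced water (assumed)
tei_flux = ra228_flux * tei_to_ra228
print(f"TEI flux ~ {tei_flux:.2e} mol/yr")
```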
We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector-to-r...
Estimation of body temperature rhythm based on heart activity parameters in daily life.
Sooyoung Sim; Heenam Yoon; Hosuk Ryou; Kwangsuk Park
2014-01-01
Body temperature contains valuable health-related information such as circadian rhythm and menstruation cycle. Also, previous studies have found that body temperature rhythm in daily life is related to sleep disorders and cognitive performance. However, monitoring body temperature with existing devices during daily life is not easy because they are invasive, intrusive, or expensive. Therefore, a technology that can accurately and nonintrusively monitor body temperature is required. In this study, we developed a body temperature estimation model based on heart rate and heart rate variability parameters. Although this work was inspired by previous research, we show for the first time that the model can be applied to body temperature monitoring in daily life. We also found that normalized mean heart rate (nMHR) and frequency-domain parameters of heart rate variability showed better performance than other parameters. Although the model should be validated with a larger number of subjects, and additional algorithms are needed to decrease the accumulated estimation error, we were able to verify the usefulness of this approach. Based on this study, we expect to be able to monitor core body temperature and circadian rhythm from a simple heart rate monitor and thereby obtain various health-related information derived from the daily body temperature rhythm.
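A minimal regression sketch in the spirit of the study: predict core body temperature from nMHR and a frequency-domain HRV feature using ordinary least squares. The linear model form, the synthetic data and the noise level are assumptions for illustration, not the authors' fitted model or their measured accuracy.

```python
import numpy as np

# Predict core body temperature from normalized mean heart rate (nMHR) and an
# HRV frequency-domain feature (here an LF/HF ratio). Synthetic data only.
rng = np.random.default_rng(2)
n = 200
nmhr = rng.normal(1.0, 0.1, n)             # heart rate normalized by its daily mean
lf_hf = rng.normal(1.5, 0.4, n)            # LF/HF ratio of heart rate variability
temp = 36.8 + 0.9 * (nmhr - 1.0) - 0.05 * (lf_hf - 1.5) + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), nmhr, lf_hf])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - temp) ** 2))
print(coef, f"RMSE = {rmse:.3f} degC")
```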
The Impact of Alzheimer's Disease on the Chinese Economy.
Keogh-Brown, Marcus R; Jensen, Henning Tarp; Arrighi, H Michael; Smith, Richard D
2016-02-01
Recent increases in life expectancy may greatly expand future Alzheimer's Disease (AD) burdens. China's demographic profile, aging workforce and predicted increasing burden of AD-related care make its economy vulnerable to AD impacts. Previous economic estimates of AD predominantly focus on health system burdens and omit wider whole-economy effects, potentially underestimating the full economic benefit of effective treatment. AD-related prevalence, morbidity and mortality for 2011-2050 were simulated and were, together with associated caregiver time and costs, imposed on a dynamic Computable General Equilibrium model of the Chinese economy. Both economic and non-economic outcomes were analyzed. Simulated Chinese AD prevalence quadrupled during 2011-50 from 6-28 million. The cumulative discounted value of eliminating AD equates to China's 2012 GDP (US$8 trillion), and the annual predicted real value approaches US AD cost-of-illness (COI) estimates, exceeding US$1 trillion by 2050 (2011-prices). Lost labor contributes 62% of macroeconomic impacts. Only 10% derives from informal care, challenging previous COI-estimates of 56%. Health and macroeconomic models predict an unfolding 2011-2050 Chinese AD epidemic with serious macroeconomic consequences. Significant investment in research and development (medical and non-medical) is warranted and international researchers and national authorities should therefore target development of effective AD treatment and prevention strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moerk, Anna-Karin, E-mail: anna-karin.mork@ki.s; Jonsson, Fredrik; Pharsight, a Certara company, St. Louis, MO
2009-11-01
The aim of this study was to derive improved estimates of population variability and uncertainty of physiologically based pharmacokinetic (PBPK) model parameters, especially of those related to the washin-washout behavior of polar volatile substances. This was done by optimizing a previously published washin-washout PBPK model for acetone in a Bayesian framework using Markov chain Monte Carlo simulation. The sensitivity of the model parameters was investigated by creating four different prior sets, where the uncertainty surrounding the population variability of the physiological model parameters was given values corresponding to coefficients of variation of 1%, 25%, 50%, and 100%, respectively. The PBPK model was calibrated to toxicokinetic data from 2 previous studies where 18 volunteers were exposed to 250-550 ppm of acetone at various levels of workload. The updated PBPK model provided a good description of the concentrations in arterial, venous, and exhaled air. The precision of most of the model parameter estimates was improved. New information was particularly gained on the population distribution of the parameters governing the washin-washout effect. The results presented herein provide a good starting point to estimate the target dose of acetone in the working and general populations for risk assessment purposes.
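Bayesian calibration of the kind described above can be illustrated in miniature with a Metropolis sampler updating a single elimination-rate parameter of a one-compartment model against noisy concentration data. This is only a stand-in for the multi-parameter washin-washout PBPK model and its hierarchical priors; the model, prior, data and proposal width below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(k, t, c0=10.0):
    """One-compartment elimination: concentration c0 * exp(-k*t)."""
    return c0 * np.exp(-k * t)

# Synthetic "toxicokinetic" observations with known elimination rate k_true.
t = np.linspace(0.5, 6, 12)
k_true, sigma = 0.35, 0.4
y = model(k_true, t) + rng.normal(0, sigma, t.size)

def log_post(k):
    """Gaussian likelihood plus a lognormal prior on the elimination rate k."""
    if k <= 0:
        return -np.inf
    log_lik = -0.5 * np.sum((y - model(k, t)) ** 2) / sigma ** 2
    log_prior = -0.5 * ((np.log(k) - np.log(0.3)) / 0.5) ** 2
    return log_lik + log_prior

# Random-walk Metropolis sampler.
k, chain = 0.3, []
lp = log_post(k)
for _ in range(20000):
    k_prop = k + rng.normal(0, 0.03)
    lp_prop = log_post(k_prop)
    if np.log(rng.random()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    chain.append(k)

post = np.array(chain[5000:])   # discard burn-in
print(f"posterior k: {post.mean():.3f} +/- {post.std():.3f} (true {k_true})")
```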
An estimate of periodontal treatment needs in the U.S. based on epidemiologic data.
Oliver, R C; Brown, L J; Löe, H
1989-07-01
It has generally been assumed, based on previous epidemiologic and utilization studies as well as the increasing elderly population, that there would be an increasing need for periodontal treatment. Analysis of a more recent household epidemiologic survey conducted in 1981 indicates that the need for treatment of periodontitis is less than previous estimates. These epidemiologic data have been translated into treatment needs through a series of conversion rules derived from previous studies and current patterns of treatment, and applied to the 1985 U.S. population. The total periodontal services needed for scaling, surgery, and prophylaxes would require 120 to 133 million hours and $5 to $6 billion annually if the total population were treated for periodontitis over a 4-year period. Only 11% of the total hours needed would be for scaling and surgery whereas 89% would be needed for prophylaxes. Expenditures for periodontal treatment total approximately 10% of the amount being spent on dental care in 1985. On the basis of these data, it seems unlikely that there will be a substantial increase in the need for periodontal treatment in a growing and aging U.S. population. These figures represent the upper limits of treatment need and are reduced by factoring in current utilization of periodontal treatment.
An economic analysis of pregnancy resolution in Virginia: specific as to race and residence.
Liu, G G
1995-01-01
This study analyses an economic model of pregnancy resolution; that is, a model of the choice by a pregnant woman to abort her fetus or carry it to term. This analysis, using an analytical model derived from the household utility framework, adds to previous research by presenting race- and residence-specific estimates of how individual characteristics, history of abortion, and the community-based factors determine women's choices of giving birth vs. aborting. The main data for estimating the model were drawn from the 1984 vital statistics of all induced abortions and live births in the Commonwealth of Virginia. The major findings indicate that low parental education, high maternal age, previous early abortions, and the availability of abortion providers all significantly reduce the probability of choosing the live birth option. Married status and the availability of family planning clinics significantly increase the probability of the live birth option. The findings also suggest that women's choices between abortion and live birth vary substantially with race (White vs. Black) and residential (urban vs. rural) location.
The global potential of bioenergy on abandoned agriculture lands.
Campbell, J Elliott; Lobell, David B; Genova, Robert C; Field, Christopher B
2008-08-01
Converting forest lands into bioenergy agriculture could accelerate climate change by emitting carbon stored in forests, while converting food agriculture lands into bioenergy agriculture could threaten food security. Both problems are potentially avoided by using abandoned agriculture lands for bioenergy agriculture. Here we show the global potential for bioenergy on abandoned agriculture lands to be less than 8% of current primary energy demand, based on historical land use data, satellite-derived land cover data, and global ecosystem modeling. The estimated global area of abandoned agriculture is 385-472 million hectares, or 66-110% of the areas reported in previous preliminary assessments. The area-weighted mean production of above-ground biomass is 4.3 tons ha(-1) y(-1), in contrast to estimates of up to 10 tons ha(-1) y(-1) in previous assessments. The energy content of potential biomass grown on 100% of abandoned agriculture lands is less than 10% of primary energy demand for most nations in North America, Europe, and Asia, but it represents many times the energy demand in some African nations where grasslands are relatively productive and current energy demand is low.
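The headline percentage can be reproduced approximately from the numbers in the abstract with back-of-envelope arithmetic: area times biomass yield times energy density, compared with global primary energy demand. The energy density (about 18 GJ per dry tonne) and the roughly 500 EJ/yr demand figure are assumptions added here; the area range and yield come from the abstract.

```python
# Back-of-envelope check of the headline number: area x biomass yield x energy
# density, compared with global primary energy demand.
area_ha = (385e6, 472e6)          # abandoned agricultural land, hectares (from the abstract)
yield_t_per_ha = 4.3              # above-ground biomass, t/ha/yr (from the abstract)
energy_GJ_per_t = 18.0            # assumed energy content of dry biomass
demand_EJ = 500.0                 # assumed global primary energy demand, EJ/yr

for a in area_ha:
    supply_EJ = a * yield_t_per_ha * energy_GJ_per_t / 1e9   # GJ -> EJ
    print(f"area {a / 1e6:.0f} Mha -> {supply_EJ:.0f} EJ/yr "
          f"({100 * supply_EJ / demand_EJ:.0f}% of demand)")
```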
Photometric Lunar Surface Reconstruction
NASA Technical Reports Server (NTRS)
Nefian, Ara V.; Alexandrov, Oleg; Morattlo, Zachary; Kim, Taemin; Beyer, Ross A.
2013-01-01
Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.
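A toy version of the joint estimation problem is sketched below: per-pixel albedo and per-image exposure are fitted by least squares to synthetic observations generated with a McEwen-style Lunar-Lambert reflectance function. The angles, the Lunar-Lambert weight, the noise, and the way the albedo-exposure scale ambiguity is pinned are assumptions; the actual pipeline also uses the Apollo-era topography and a more complete photometric parameterization.

```python
import numpy as np
from scipy.optimize import least_squares

def lunar_lambert(mu0, mu, L=0.6):
    """McEwen-style Lunar-Lambert reflectance from cosines of incidence (mu0)
    and emission (mu) angles; L is an assumed weighting parameter."""
    return 2.0 * L * mu0 / (mu0 + mu) + (1.0 - L) * mu0

rng = np.random.default_rng(4)
n_pix, n_img = 50, 4
albedo_true = rng.uniform(0.1, 0.3, n_pix)
expo_true = rng.uniform(0.8, 1.2, n_img)
mu0 = rng.uniform(0.3, 0.9, (n_img, n_pix))     # cos(incidence) per image/pixel
mu = rng.uniform(0.3, 0.9, (n_img, n_pix))      # cos(emission) per image/pixel
obs = expo_true[:, None] * albedo_true[None, :] * lunar_lambert(mu0, mu)
obs = obs + rng.normal(0, 0.002, obs.shape)

def residuals(p):
    alb, expo = p[:n_pix], p[n_pix:]
    model = expo[:, None] * alb[None, :] * lunar_lambert(mu0, mu)
    # pin the first exposure near 1 to break the albedo/exposure scale ambiguity
    return np.concatenate([(model - obs).ravel(), [10.0 * (expo[0] - 1.0)]])

p0 = np.concatenate([np.full(n_pix, 0.2), np.ones(n_img)])
sol = least_squares(residuals, p0)
alb_hat = sol.x[:n_pix]
print(np.corrcoef(alb_hat, albedo_true)[0, 1])   # close to 1
```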
Estimating the extent of impervious surfaces and turf grass across large regions
Claggett, Peter; Irani, Frederick M.; Thompson, Renee L.
2013-01-01
The ability of researchers to accurately assess the extent of impervious and pervious developed surfaces, e.g., turf grass, using land-cover data derived from Landsat satellite imagery in the Chesapeake Bay watershed is limited due to the resolution of the data and systematic discrepancies between developed land-cover classes, surface mines, forests, and farmlands. Estimates of impervious surface and turf grass area in the Mid-Atlantic, United States that were based on 2006 Landsat-derived land-cover data were substantially lower than estimates based on more authoritative and independent sources. New estimates of impervious surfaces and turf grass area derived using land-cover data combined with ancillary information on roads, housing units, surface mines, and sampled estimates of road width and residential impervious area were up to 57 and 45% higher than estimates based strictly on land-cover data. These new estimates closely approximate estimates derived from authoritative and independent sources in developed counties.
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posterior estimation is outlined.
NASA Astrophysics Data System (ADS)
Chanard, K.; Fleitout, L.; Calais, E.; Barbot, S.; Avouac, J. P.
2016-12-01
Elastic deformation of the Earth induced by seasonal variations in hydrology is now well established. We compute the vertical and horizontal deformation induced by large variations of continental water storage at a set of 195 globally distributed continuous Global Positioning System (cGPS) stations. Seasonal loading is derived from the Gravity Recovery and Climate Experiment (GRACE) equivalent water height data, where we first account for non-observable degree-1 components using previous results (Swenson et al., 2010). While the vertical displacements are well predicted by the model, the horizontal components are systematically underpredicted and out of phase with the observations. This global result confirms previous difficulties in predicting horizontal seasonal site positions at a regional scale. We discuss possible contributions to this misfit (thermal expansion, draconitic effects, etc.) and show a dramatic improvement when we derive degree-one deformation plus reference-frame differences between model and observations. The fit in phase and amplitude of the seasonal deformation model to the horizontal GPS measurements is improved and the fit to the vertical component is not affected. However, the amplitude of global seasonal horizontal displacement remains slightly underpredicted. We explore several hypotheses, including the validity of a purely elastic model derived from seismic estimates at an annual time scale. We show that mantle volume variations due to mineral phase transitions may play a role in the seasonal deformation and, as a by-product, use this seasonal deformation to provide a lower bound on the transient asthenospheric viscosity. Our study aims to provide an accurate model for horizontal and vertical seasonal deformation of the Earth induced by variations in surface hydrology derived from GRACE.
Derivation and Validation of a Renal Risk Score for People With Type 2 Diabetes
Elley, C. Raina; Robinson, Tom; Moyes, Simon A.; Kenealy, Tim; Collins, John; Robinson, Elizabeth; Orr-Walker, Brandon; Drury, Paul L.
2013-01-01
OBJECTIVE Diabetes has become the leading cause of end-stage renal disease (ESRD). Renal risk stratification could assist in earlier identification and targeted prevention. This study aimed to derive risk models to predict ESRD events in type 2 diabetes in primary care. RESEARCH DESIGN AND METHODS The nationwide derivation cohort included adults with type 2 diabetes from the New Zealand Diabetes Cohort Study initially assessed during 2000–2006 and followed until December 2010, excluding those with pre-existing ESRD. The outcome was fatal or nonfatal ESRD event (peritoneal dialysis or hemodialysis for ESRD, renal transplantation, or death from ESRD). Risk models were developed using Cox proportional hazards models, and their performance was assessed in a separate validation cohort. RESULTS The derivation cohort included 25,736 individuals followed for up to 11 years (180,497 person-years; 86% followed for ≥5 years). At baseline, mean age was 62 years, median diabetes duration 5 years, and median HbA1c 7.2% (55 mmol/mol); 37% had albuminuria; and median estimated glomerular filtration rate (eGFR) was 77 mL/min/1.73 m2. There were 637 ESRD events (2.5%) during follow-up. Models that included sex, ethnicity, age, diabetes duration, albuminuria, serum creatinine, systolic blood pressure, HbA1c, smoking status, and previous cardiovascular disease status performed well with good discrimination and calibration in the derivation cohort and the validation cohort (n = 5,877) (C-statistics 0.89–0.92), improving predictive performance compared with previous models. CONCLUSIONS These 5-year renal risk models performed very well in two large primary care populations with type 2 diabetes. More accurate risk stratification could facilitate earlier intervention than using eGFR and/or albuminuria alone. PMID:23801726
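Applying a Cox model as a risk score typically means converting the linear predictor into an absolute 5-year risk via the baseline survival, risk = 1 - S0(5)^exp(lp). The sketch below shows that step with invented coefficients, covariate means and baseline survival; it is not the published New Zealand model.

```python
import math

# Schematic 5-year risk from a Cox model: risk = 1 - S0(5)^exp(lp), where lp is
# the linear predictor centered on cohort means. All values below are made up.
baseline_surv_5yr = 0.995                     # S0(5 years), assumed
coefs = {"albuminuria": 1.1, "log_creatinine": 1.8, "hba1c": 0.08, "sbp_per10": 0.05}
means = {"albuminuria": 0.37, "log_creatinine": math.log(90), "hba1c": 7.2, "sbp_per10": 13.5}

def five_year_risk(patient):
    lp = sum(coefs[k] * (patient[k] - means[k]) for k in coefs)
    return 1.0 - baseline_surv_5yr ** math.exp(lp)

patient = {"albuminuria": 1, "log_creatinine": math.log(130), "hba1c": 8.5, "sbp_per10": 15.0}
print(f"5-year ESRD risk ~ {100 * five_year_risk(patient):.2f}%")
```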
Comparison of methods used to estimate numbers of walruses on sea ice
Udevitz, Mark S.; Gilbert, James R.; Fedoseev, Gennadii A.
2001-01-01
The US and former USSR conducted joint surveys of Pacific walruses on sea ice and at land haul-outs in 1975, 1980, 1985, and 1990. One of the difficulties in interpreting results of these surveys has been that, except for the 1990 survey, the Americans and Soviets used different methods for estimating population size from their respective portions of the sea ice data. We used data exchanged between Soviet and American scientists to compare and evaluate the two estimation procedures and to derive a set of alternative estimates from the 1975, 1980, and 1985 surveys based on a single consistent procedure. Estimation method had only a small effect on total population estimates because most walruses were found at land haul-outs. However, the Soviet method is subject to bias that depends on the distribution of the population on the sea ice and this has important implications for interpreting the ice portions of previously reported surveys for walruses and other pinniped species. We recommend that the American method be used in future surveys. Future research on survey methods for walruses should focus on other potential sources of bias and variation.
Online Kinematic and Dynamic-State Estimation for Constrained Multibody Systems Based on IMUs
Torres-Moreno, José Luis; Blanco-Claraco, José Luis; Giménez-Fernández, Antonio; Sanjurjo, Emilio; Naya, Miguel Ángel
2016-01-01
This article addresses the problems of online estimations of kinematic and dynamic states of a mechanism from a sequence of noisy measurements. In particular, we focus on a planar four-bar linkage equipped with inertial measurement units (IMUs). Firstly, we describe how the position, velocity, and acceleration of all parts of the mechanism can be derived from IMU signals by means of multibody kinematics. Next, we propose the novel idea of integrating the generic multibody dynamic equations into two variants of Kalman filtering, i.e., the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), in a way that enables us to handle closed-loop, constrained mechanisms, whose state space variables are not independent and would normally prevent the direct use of such estimators. The proposal in this work is to apply those estimators over the manifolds of allowed positions and velocities, by means of estimating a subset of independent coordinates only. The proposed techniques are experimentally validated on a testbed equipped with encoders as a means of establishing the ground-truth. Estimators are run online in real-time, a feature not matched by any previous procedure of those reported in the literature on multibody dynamics. PMID:26959027
A global estimate of the full oceanic 13C Suess effect since the preindustrial
NASA Astrophysics Data System (ADS)
Eide, Marie; Olsen, Are; Ninnemann, Ulysses S.; Eldevik, Tor
2017-03-01
We present the first estimate of the full global ocean 13C Suess effect since preindustrial times, based on observations. This has been derived by first using the method of Olsen and Ninnemann (2010) to calculate 13C Suess effect estimates on sections spanning the world ocean, which were next mapped on a global 1° × 1° grid. We find a strong 13C Suess effect in the upper 1000 m of all basins, with strongest decrease in the subtropical gyres of the Northern Hemisphere, where δ13C of dissolved inorganic carbon has decreased by more than 0.8‰ since the industrial revolution. At greater depths, a significant 13C Suess effect can only be detected in the northern parts of the North Atlantic Ocean. The relationship between the 13C Suess effect and the concentration of anthropogenic carbon varies strongly between water masses, reflecting the degree to which source waters are equilibrated with the atmospheric 13C Suess effect before sinking. Finally, we estimate a global ocean inventory of anthropogenic CO2 of 92 ± 46 Gt C. This provides an estimate that is almost independent of and consistent, within the uncertainties, with previous estimates.
Estimation of canopy carotenoid content of winter wheat using multi-angle hyperspectral data
NASA Astrophysics Data System (ADS)
Kong, Weiping; Huang, Wenjiang; Liu, Jiangui; Chen, Pengfei; Qin, Qiming; Ye, Huichun; Peng, Dailiang; Dong, Yingying; Mortimer, A. Hugh
2017-11-01
Precise estimation of carotenoid (Car) content in crops using remote sensing data could be helpful for agricultural resources management. Conventional methods for Car content estimation were mostly based on reflectance data acquired from the nadir direction. However, reflectance acquired in this direction is highly influenced by canopy structure and soil background reflectance. Off-nadir observation is less impacted, and multi-angle viewing data are proven to contain additional information rarely exploited for crop Car content estimation. The objective of this study was to explore the potential of multi-angle observation data for winter wheat canopy Car content estimation. Canopy spectral reflectance was measured from nadir as well as from a series of off-nadir directions during different growing stages of winter wheat, with concurrent canopy Car content measurements. Correlation analyses were performed between Car content and the original and continuum-removed spectral reflectance. Spectral features and previously published indices were derived from data obtained at different viewing angles and were tested for Car content estimation. Results showed that spectral features and indices obtained from backscattering directions between 20° and 40° view zenith angle had a stronger correlation with Car content than those from the nadir direction, and the strongest correlation was observed at about the 30° backscattering direction. The spectral absorption depth at 500 nm derived from spectral data obtained in the 30° backscattering direction was found to greatly reduce the difference induced by plant cultivars. It was the most suitable for winter wheat canopy Car estimation, with a coefficient of determination of 0.79 and a root mean square error of 19.03 mg/m2. This work indicates the importance of taking viewing geometry effects into account when using spectral features/indices and provides new insight into the application of multi-angle remote sensing for the estimation of crop physiology.
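The 500 nm absorption-depth feature rests on continuum removal: a straight-line continuum is drawn between two shoulder wavelengths and the depth is 1 - R/R_continuum at the band center. The sketch below illustrates that computation; the shoulder positions (460 and 550 nm) and the synthetic reflectance spectrum are assumptions, not the study's band definitions.

```python
import numpy as np

# Continuum removal and absorption depth at 500 nm on a synthetic spectrum.
wl = np.arange(400, 801, 1.0)                            # wavelength, nm
refl = 0.05 + 0.0004 * (wl - 400)                        # synthetic canopy reflectance
refl -= 0.02 * np.exp(-0.5 * ((wl - 500) / 25.0) ** 2)   # carotenoid-like absorption dip

def band_depth(wl, refl, left=460.0, right=550.0, center=500.0):
    rl = np.interp(left, wl, refl)
    rr = np.interp(right, wl, refl)
    rc = rl + (rr - rl) * (center - left) / (right - left)   # continuum at band center
    return 1.0 - np.interp(center, wl, refl) / rc

print(f"absorption depth at 500 nm = {band_depth(wl, refl):.3f}")
```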
Land, B R; Harris, W V; Salpeter, E E; Salpeter, M M
1984-01-01
In previous papers we studied the rising phase of a miniature endplate current (MEPC) to derive diffusion and forward rate constants controlling acetylcholine (AcCho) in the intact neuromuscular junction. The present study derives similar values (but with smaller error ranges) for these constants by including experimental results from the falling phase of the MEPC. We find diffusion to be 4 × 10⁻⁶ cm² s⁻¹, slightly slower than free diffusion, forward binding to be 3.3 × 10⁷ M⁻¹ s⁻¹, and the distance from an average release site to the nearest exit from the cleft to be 1.6 μm. We also estimate the back reaction rates. From our values we can accurately describe the shape of MEPCs under different conditions of receptor and esterase concentration. Since we suggest that unbinding is slower than isomerization, we further predict that there should be several short "closing flickers" during the total open time for an AcCho-ligated receptor channel. PMID:6584895
Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks.
Yang, Shuai; Yu, Juan; Hu, Cheng; Jiang, Haijun
2018-08-01
In this paper, without separating the complex-valued neural networks into two real-valued systems, the quasi-projective synchronization of fractional-order complex-valued neural networks is investigated. First, two new fractional-order inequalities are established by using the theory of complex functions, Laplace transform and Mittag-Leffler functions, which generalize traditional inequalities with the first-order derivative in the real domain. Additionally, different from hybrid control schemes given in the previous work concerning the projective synchronization, a simple and linear control strategy is designed in this paper and several criteria are derived to ensure quasi-projective synchronization of the complex-valued neural networks with fractional-order based on the established fractional-order inequalities and the theory of complex functions. Moreover, the error bounds of quasi-projective synchronization are estimated. Especially, some conditions are also presented for the Mittag-Leffler synchronization of the addressed neural networks. Finally, some numerical examples with simulations are provided to show the effectiveness of the derived theoretical results.
Kammers, Kai; Taub, Margaret A.; Ruczinski, Ingo; Martin, Joshua; Yanek, Lisa R.; Frazee, Alyssa; Gao, Yongxing; Hoyle, Dixie; Faraday, Nauder; Becker, Diane M.; Cheng, Linzhao; Wang, Zack Z.; Leek, Jeff T.; Becker, Lewis C.; Mathias, Rasika A.
2017-01-01
Previously, we have described our feeder-free, xeno-free approach to generate megakaryocytes (MKs) in culture from human induced pluripotent stem cells (iPSCs). Here, we focus specifically on the integrity of these MKs using: (1) genotype discordance between parent cell DNA to iPSC cell DNA and onward to the differentiated MK DNA; (2) genomic structural integrity using copy number variation (CNV); and (3) transcriptomic signatures of the derived MK lines compared to the iPSC lines. We detected a very low rate of genotype discordance; estimates were 0.0001%-0.01%, well below the genotyping error rate for our assay (0.37%). No CNVs were generated in the iPSCs that were subsequently passed on to the MKs. Finally, we observed highly biologically relevant gene sets as being upregulated in MKs relative to the iPSCs: platelet activation, blood coagulation, megakaryocyte development, platelet formation, platelet degranulation, and platelet aggregation. These data strongly support the integrity of the derived MK lines. PMID:28107356
Estimating Dynamical Systems: Derivative Estimation Hints from Sir Ronald A. Fisher
ERIC Educational Resources Information Center
Deboeck, Pascal R.
2010-01-01
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common…
Near Real-time GNSS-based Ionospheric Model using Expanded Kriging in the East Asia Region
NASA Astrophysics Data System (ADS)
Choi, P. H.; Bang, E.; Lee, J.
2016-12-01
Many applications that utilize radio waves (e.g. navigation, communications, and radio sciences) are influenced by the ionosphere. The technology to provide global ionospheric maps (GIM) showing ionospheric Total Electron Content (TEC) has progressed through the processing of GNSS data. However, the GIMs have limited spatial resolution (e.g. 2.5° in latitude and 5° in longitude), because they are generated using globally distributed and thus relatively sparse GNSS reference station networks. This study presents a near real-time, high-spatial-resolution TEC model over East Asia that uses ionospheric observables from both International GNSS Service (IGS) and local GNSS networks together with an expanded kriging method. New signals from multiple constellations (e.g., GPS L5, Galileo E5) were also used to generate high-precision TEC estimates. The newly proposed estimation method is based on the universal kriging interpolation technique, but integrates TEC data from previous epochs with those from the current epoch to improve the TEC estimation performance by increasing ionospheric observability. To propagate previous measurements to the current epoch, we implemented a Kalman filter whose dynamic model was derived from a first-order Gauss-Markov process that characterizes temporal ionospheric changes under nominal ionospheric conditions. Along with the TEC estimates at grid points, the method generates confidence bounds on the estimates using the resulting estimation covariance. We also suggest classifying the confidence bounds into several categories to allow users to recognize the quality levels of the TEC estimates according to the requirements of their applications. This paper examines the performance of the proposed method by obtaining estimation results for both nominal and disturbed ionospheric conditions, and compares these results to those provided by the GIM of the NASA Jet Propulsion Laboratory. In addition, the estimation results based on the expanded kriging method are compared to the results from the universal kriging method for both nominal and disturbed ionospheric conditions.
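The temporal part of the expanded kriging scheme can be sketched for a single grid point: a TEC deviation from a background value is propagated with a first-order Gauss-Markov model and then updated with a current-epoch observation, Kalman-filter style. The correlation time, noise variances and observation below are placeholders, not the paper's tuned values, and the spatial kriging step is omitted.

```python
import numpy as np

# Scalar Gauss-Markov prediction and measurement update for the TEC deviation
# (in TECU) from a background value at one grid point. All numbers are assumed.
tau = 1800.0          # Gauss-Markov correlation time, s
dt = 300.0            # time between epochs, s
q = 4.0               # process noise variance accumulated over dt, TECU^2
r = 1.0               # observation noise variance, TECU^2

x, P = 3.0, 9.0       # previous-epoch estimate of the TEC deviation and its variance

# Predict: propagate the previous estimate to the current epoch
phi = np.exp(-dt / tau)
x_pred = phi * x
P_pred = phi ** 2 * P + q

# Update with a current-epoch observation of the TEC deviation at the grid point
z = 4.5
K = P_pred / (P_pred + r)
x_new = x_pred + K * (z - x_pred)
P_new = (1 - K) * P_pred
print(f"predicted {x_pred:.2f} +/- {np.sqrt(P_pred):.2f}, "
      f"updated {x_new:.2f} +/- {np.sqrt(P_new):.2f} TECU")
```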
Parish, William J; Aldridge, Arnie; Allaire, Benjamin; Ekwueme, Donatus U; Poehler, Diana; Guy, Gery P; Thomas, Cheryll C; Trogdon, Justin G
2017-11-01
To assess the burden of excessive alcohol use, researchers routinely estimate alcohol-attributable fractions (AAFs). However, under-reporting in survey data can bias these estimates. We present an approach that adjusts for under-reporting in the estimation of AAFs, particularly within subgroups. This framework is a refinement of a previous method developed by Rehm et al. We use a measurement error model to derive the 'true' alcohol distribution from a 'reported' alcohol distribution. The 'true' distribution leverages per-capita sales data to identify the distribution average and then identifies the shape of the distribution with self-reported survey data. Data are from the National Alcohol Survey (NAS), the National Household Survey on Drug Abuse (NHSDA) and the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). We compared our approach with previous approaches by estimating the AAF of female breast cancer cases. Compared with Rehm et al.'s approach, our refinement performs similarly under a gamma assumption. For example, among females aged 18-25 years, the two approaches produce estimates from NHSDA that are within a percentage point. However, relaxing the gamma assumption generally produces more conservative evidence. For example, among females aged 18-25 years, estimates from NHSDA based on the best-fitting distribution attribute only 19.33% of breast cancer cases, a much smaller proportion than the gamma-based estimates of approximately 28%. A refinement of Rehm et al.'s approach to adjusting for under-reporting in the estimation of alcohol-attributable fractions provides more flexibility. This flexibility can avoid biases associated with failing to account for underlying differences in alcohol consumption patterns across study populations. Comparisons of our refinement with Rehm et al.'s approach show that results are similar when a gamma distribution is assumed; however, results are appreciably lower when the best-fitting distribution is chosen instead of the gamma.
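A gamma-based AAF calculation of the general kind discussed above can be sketched as follows: keep the shape implied by the survey data, rescale the distribution so its mean matches per-capita sales, and compute AAF = E[RR-1]/(E[RR-1]+1). The relative-risk function and all numeric inputs below are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

# Sales-adjusted gamma consumption distribution and a schematic AAF calculation.
survey_mean, survey_sd = 9.0, 12.0       # g/day, self-reported (assumed)
sales_mean = 18.0                        # g/day implied by per-capita sales (assumed)

shape = (survey_mean / survey_sd) ** 2   # gamma shape from the survey coefficient of variation
scale = sales_mean / shape               # rescale so the mean matches sales

def rr(x):
    """Assumed log-linear relative-risk function for illustration only."""
    return np.exp(0.01 * x)

excess, _ = quad(lambda x: gamma.pdf(x, shape, scale=scale) * (rr(x) - 1.0), 0, 300)
aaf = excess / (excess + 1.0)
print(f"alcohol-attributable fraction ~ {100 * aaf:.1f}%")
```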
NASA Astrophysics Data System (ADS)
Le, Nam Q.
2018-05-01
We obtain the Hölder regularity of the time derivative of solutions to the dual semigeostrophic equations in two dimensions when the initial potential density is bounded away from zero and infinity. Our main tool is an interior Hölder estimate in two dimensions for an inhomogeneous linearized Monge-Ampère equation whose right-hand side is the divergence of a bounded vector field. As a further application of our Hölder estimate, we prove the Hölder regularity of the polar factorization for time-dependent maps in two dimensions with densities bounded away from zero and infinity. Our applications improve previous work by G. Loeper, who considered the case of densities sufficiently close to a positive constant.
Interpretation of the Lempel-Ziv complexity measure in the context of biomedical signal analysis.
Aboy, Mateo; Hornero, Roberto; Abásolo, Daniel; Alvarez, Daniel
2006-11-01
Lempel-Ziv complexity (LZ) and derived LZ algorithms have been extensively used to solve information theoretic problems such as coding and lossless data compression. In recent years, LZ has been widely used in biomedical applications to estimate the complexity of discrete-time signals. Despite its popularity as a complexity measure for biosignal analysis, the question of LZ interpretability and its relationship to other signal parameters and to other metrics has not been previously addressed. We have carried out an investigation aimed at gaining a better understanding of the LZ complexity itself, especially regarding its interpretability as a biomedical signal analysis technique. Our results indicate that LZ is particularly useful as a scalar metric to estimate the bandwidth of random processes and the harmonic variability in quasi-periodic signals.
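For reference, the quantity in question is usually computed with the Lempel-Ziv (1976) parsing of a binarized signal, counting the number of new phrases. A compact implementation in the Kaspar-Schuster style is sketched below; the median-threshold binarization and the synthetic test signal are conventional choices, not specific to the paper.

```python
import numpy as np

def lz_complexity(binary_seq):
    """Count the number of phrases parsed by the Lempel-Ziv (1976) scheme,
    the quantity commonly applied to binarized biomedical signals."""
    s = "".join(map(str, binary_seq))
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # extend the current phrase until it no longer appears in the preceding text
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

# Example: binarize a noisy quasi-periodic signal at its median, then compute LZ.
rng = np.random.default_rng(5)
x = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.3 * rng.standard_normal(1000)
b = (x > np.median(x)).astype(int)
print(lz_complexity(b))
```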
Global and local Joule heating effects seen by DE 2
NASA Technical Reports Server (NTRS)
Heelis, R. A.; Coley, W. R.
1988-01-01
In the altitude region between 350 and 550 km, variations in the ion temperature principally reflect similar variations in the local frictional heating produced by a velocity difference between the ions and the neutrals. Here, the distribution of the ion temperature in this altitude region is shown, and its attributes in relation to previous work on local Joule heating rates are discussed. In addition to the ion temperature, instrumentation on the DE 2 satellite also provides a measure of the ion velocity vector representative of the total electric field. From this information, the local Joule heating rate is derived. From an estimate of the height-integrated Pedersen conductivity it is also possible to estimate the global (height-integrated) Joule heating rate. Here, the differences and relationships between these various parameters are described.
Star clusters: age, metallicity and extinction from integrated spectra
NASA Astrophysics Data System (ADS)
González Delgado, Rosa M.; Cid Fernandes, Roberto
2010-01-01
Integrated optical spectra of star clusters in the Magellanic Clouds and a few Galactic globular clusters are fitted using high-resolution spectral models for single stellar populations. The goal is to estimate the age, metallicity and extinction of the clusters, and to evaluate the degeneracies among these parameters. Several sets of evolutionary models, computed with recent high-spectral-resolution stellar libraries (MILES, GRANADA, STELIB), are used as inputs to the starlight code to perform the fits. The comparison of the results derived from this method with previous estimates available in the literature allows us to evaluate the pros and cons of each set of models for determining star cluster properties. In addition, we quantify the uncertainties associated with the age, metallicity and extinction determinations resulting from variance in the ingredients for the analysis.
Lexical decision as an endophenotype for reading comprehension: An exploration of an association
NAPLES, ADAM; KATZ, LEN; GRIGORENKO, ELENA L.
2012-01-01
Based on numerous suggestions in the literature, we evaluated lexical decision (LD) as a putative endophenotype for reading comprehension by investigating heritability estimates and segregation analyses parameter estimates for both of these phenotypes. Specifically, in a segregation analysis of a large sample of families, we established that there is little to no overlap between genes contributing to LD and reading comprehension and that the genetic mechanism behind LD derived from this analysis appears to be more complex than that for reading comprehension. We conclude that in our sample, LD is not a good candidate as an endophenotype for reading comprehension, despite previous suggestions from the literature. Based on this conclusion, we discuss the role and benefit of the endophenotype approach in studies of complex human cognitive functions. PMID:23062302
Precision Orbit Derived Atmospheric Density: Development and Performance
NASA Astrophysics Data System (ADS)
McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.
2012-09-01
Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
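The physical link exploited by both accelerometer- and POE-derived densities is the drag equation, a_d = 0.5 * rho * (Cd*A/m) * v^2, inverted for density. The numbers below are merely representative of a CHAMP-like orbit and are not mission values.

```python
# Invert the drag equation for density: rho = 2 * a_d / (B * v^2),
# with ballistic coefficient B = Cd*A/m. Representative values only.
a_drag = 3.0e-7        # m/s^2, magnitude of the drag acceleration (assumed)
B = 0.004              # m^2/kg, ballistic coefficient Cd*A/m (assumed)
v = 7600.0             # m/s, speed relative to the co-rotating atmosphere (assumed)
rho = 2.0 * a_drag / (B * v ** 2)
print(f"derived density ~ {rho:.2e} kg/m^3")
```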
An investigation of 11 previously unstudied open star clusters
NASA Astrophysics Data System (ADS)
Tadross, A. L.
2009-02-01
The main astrophysical properties of 11 previously unstudied open star clusters are probed with JHK near-IR (2MASS) photometry of Cutri et al. [Cutri, R., et al., 2003. The IRSA 2MASS All-Sky Point Source Catalog, NASA/IPAC Infrared Science Archive] and proper-motion (NOMAD) astrometry of Zacharias et al. [Zacharias, N., Monet, D., Levine, S., Urban, S., Gaume, R., Wycoff, G., 2004. American Astro. Soc. Meeting 36, 1418]. The fundamental parameters have been derived for IC (1434, 2156); King (17, 18, 20, 23, 26); and Dias (2, 3, 4, 7, 8), for which no prior parameters are available in the literature. The clusters' center coordinates and angular diameters are re-determined, while ages, distances, and color excesses for these clusters are estimated here for the first time.
Connecting the shadows: probing inner disk geometries using shadows in transitional disks
NASA Astrophysics Data System (ADS)
Min, M.; Stolker, T.; Dominik, C.; Benisty, M.
2017-08-01
Aims: Shadows in transitional disks are generally interpreted as signs of a misaligned inner disk. This disk is usually beyond the reach of current-day high contrast imaging facilities. However, the location and morphology of the shadow features allow us to reconstruct the inner disk geometry. Methods: We derive analytic equations for the locations of the shadow features as a function of the orientation of the inner and outer disk and the height of the outer disk wall. In contrast to previous claims in the literature, we show that the position angle of the line connecting the shadows cannot be directly related to the position angle of the inner disk. Results: We show how the analytic framework derived here can be applied to transitional disks with shadow features. We use estimates of the outer disk height to put constraints on the inner disk orientation. In contrast with the results from Long et al. (2017, ApJ, 838, 62), we derive that for the disk surrounding HD 100453 the analytic estimates and interferometric observations result in a consistent picture of the orientation of the inner disk. Conclusions: The elegant consistency in our analytic framework between observation and theory strongly supports both the interpretation of the shadow features as coming from a misaligned inner disk and the diagnostic value of near-infrared interferometry for inner disk geometry.
Green, Christopher T.; Böhlke, John Karl; Bekins, Barbara A.; Phillips, Steven P.
2010-01-01
Gradients in contaminant concentrations and isotopic compositions commonly are used to derive reaction parameters for natural attenuation in aquifers. Differences between field‐scale (apparent) estimated reaction rates and isotopic fractionations and local‐scale (intrinsic) effects are poorly understood for complex natural systems. For a heterogeneous alluvial fan aquifer, numerical models and field observations were used to study the effects of physical heterogeneity on reaction parameter estimates. Field measurements included major ions, age tracers, stable isotopes, and dissolved gases. Parameters were estimated for the O2 reduction rate, denitrification rate, O2 threshold for denitrification, and stable N isotope fractionation during denitrification. For multiple geostatistical realizations of the aquifer, inverse modeling was used to establish reactive transport simulations that were consistent with field observations and served as a basis for numerical experiments to compare sample‐based estimates of “apparent” parameters with “true” (intrinsic) values. For this aquifer, non‐Gaussian dispersion reduced the magnitudes of apparent reaction rates and isotope fractionations to a greater extent than Gaussian mixing alone. Apparent and true rate constants and fractionation parameters can differ by an order of magnitude or more, especially for samples subject to slow transport, long travel times, or rapid reactions. The effect of mixing on apparent N isotope fractionation potentially explains differences between previous laboratory and field estimates. Similarly, predicted effects on apparent O2 threshold values for denitrification are consistent with previous reports of higher values in aquifers than in the laboratory. These results show that hydrogeological complexity substantially influences the interpretation and prediction of reactive transport.
NASA Technical Reports Server (NTRS)
Ranaudo, R. J.; Batterson, J. G.; Reehorst, A. L.; Bond, T. H.; Omara, T. M.
1989-01-01
A flight test was performed with the NASA Lewis Research Center's DH-6 icing research aircraft. The purpose was to employ a flight test procedure and data analysis method to determine the accuracy with which the effects of ice on aircraft stability and control could be measured. For simplicity, flight testing was restricted to the short period longitudinal mode. Two flights were flown in a clean (baseline) configuration, and two flights were flown with simulated horizontal tail ice. Forty-five repeat doublet maneuvers were performed in each of four test configurations, at a given trim speed, to determine the ensemble variation of the estimated stability and control derivatives. Additional maneuvers were also performed in each configuration to determine the variation in the longitudinal derivative estimates over a wide range of trim speeds. Stability and control derivatives were estimated by a Modified Stepwise Regression (MSR) technique. A measure of the confidence in the derivative estimates was obtained by comparing the standard error for the ensemble of repeat maneuvers to the average of the estimated standard errors predicted by the MSR program. A multiplicative relationship was determined between the ensemble standard error and the averaged program standard errors. In addition, a 95 percent confidence interval analysis was performed for the elevator effectiveness estimates, C_m_δe. This analysis identified the speed range where changes in C_m_δe could be attributed to icing effects. The magnitude of icing effects on the derivative estimates was strongly dependent on flight speed and aircraft wing flap configuration. With wing flaps up, the estimated derivatives were degraded most at the lower speeds corresponding to that configuration. With wing flaps extended to 10 degrees, the estimated derivatives were degraded most at the higher corresponding speeds. The effects of icing on the changes in longitudinal stability and control derivatives were adequately determined by the flight test procedure and the MSR analysis method discussed herein.
NASA Astrophysics Data System (ADS)
Thiem, Christina; Sun, Liya; Müller, Benjamin; Bernhardt, Matthias; Schulz, Karsten
2014-05-01
Despite the importance of evapotranspiration for meteorology, hydrology, and agronomy, obtaining area-averaged evapotranspiration estimates is both costly and maintenance intensive: such estimates are usually obtained from distributed sensor networks or remotely sensed with a scintillometer. A low-cost alternative is satellite imagery, much of which is freely available. This approach has been proven worthwhile above homogeneous terrain, and typically evapotranspiration data obtained with scintillometry are used for validation. We will extend this approach to heterogeneous terrain: evapotranspiration estimates from ASTER 2013 images will be compared to scintillometer-derived evapotranspiration estimates. The goodness of the correlation will be presented as well as an uncertainty estimation for both the ASTER-derived and the scintillometer-derived evapotranspiration.
Optimal estimation for global ground-level fine particulate matter concentrations
NASA Astrophysics Data System (ADS)
Donkelaar, Aaron; Martin, Randall V.; Spurr, Robert J. D.; Drury, Easan; Remer, Lorraine A.; Levy, Robert C.; Wang, Jun
2013-06-01
We develop an optimal estimation (OE) algorithm based on top-of-atmosphere reflectances observed by the MODIS satellite instrument to retrieve near-surface fine particulate matter (PM2.5). The GEOS-Chem chemical transport model is used to provide prior information for the Aerosol Optical Depth (AOD) retrieval and to relate total column AOD to PM2.5. We adjust the shape of the GEOS-Chem relative vertical extinction profiles by comparison with lidar retrievals from the CALIOP satellite instrument. Surface reflectance relationships used in the OE algorithm are indexed by land type. Error quantities needed for this OE algorithm are inferred by comparison with AOD observations taken by a worldwide network of sun photometers (AERONET) and extended globally based upon aerosol speciation and cross correlation for simulated values, and upon land type for observational values. Significant agreement in PM2.5 is found over North America for 2005 (slope = 0.89; r = 0.82; 1-σ error = 1 µg/m3 + 27%), with improved coverage and correlation relative to previous work for the same region and time period, although certain subregions, such as the San Joaquin Valley of California, are better represented by previous estimates. Independently derived error estimates of the OE PM2.5 values at in situ locations over North America (±(2.5 µg/m3 + 31%)) and Europe (±(3.5 µg/m3 + 30%)) are corroborated by comparison with in situ observations, although the global error estimate (±(3.0 µg/m3 + 35%)) may be underestimated. Global population-weighted PM2.5 at 50% relative humidity is estimated as 27.8 µg/m3 at 0.1° × 0.1° resolution.
Probabilistic reanalysis of twentieth-century sea-level rise.
Hay, Carling C; Morrow, Eric; Kopp, Robert E; Mitrovica, Jerry X
2015-01-22
Estimating and accounting for twentieth-century global mean sea level (GMSL) rise is critical to characterizing current and future human-induced sea-level change. Several previous analyses of tide gauge records--employing different methods to accommodate the spatial sparsity and temporal incompleteness of the data and to constrain the geometry of long-term sea-level change--have concluded that GMSL rose over the twentieth century at a mean rate of 1.6 to 1.9 millimetres per year. Efforts to account for this rate by summing estimates of individual contributions from glacier and ice-sheet mass loss, ocean thermal expansion, and changes in land water storage fall significantly short in the period before 1990. The failure to close the budget of GMSL during this period has led to suggestions that several contributions may have been systematically underestimated. However, the extent to which the limitations of tide gauge analyses have affected estimates of the GMSL rate of change is unclear. Here we revisit estimates of twentieth-century GMSL rise using probabilistic techniques and find a rate of GMSL rise from 1901 to 1990 of 1.2 ± 0.2 millimetres per year (90% confidence interval). Based on individual contributions tabulated in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, this estimate closes the twentieth-century sea-level budget. Our analysis, which combines tide gauge records with physics-based and model-derived geometries of the various contributing signals, also indicates that GMSL rose at a rate of 3.0 ± 0.7 millimetres per year between 1993 and 2010, consistent with prior estimates from tide gauge records. The increase in rate relative to the 1901-90 trend is accordingly larger than previously thought; this revision may affect some projections of future sea-level rise.
Slattery, Richard N.; Asquith, William H.; Gordon, John D.
2017-02-15
In 2016, the U.S. Geological Survey (USGS), in cooperation with the San Antonio Water System, began a study to refine previously derived estimates of groundwater outflows from Medina and Diversion Lakes in south-central Texas near San Antonio. When full, Medina and Diversion Lakes (hereinafter referred to as the Medina/Diversion Lake system) (fig. 1) impound approximately 255,000 acre-feet and 2,555 acre-feet of water, respectively. Most recharge to the Edwards aquifer occurs as seepage from streams as they cross the outcrop (recharge zone) of the aquifer (Slattery and Miller, 2017). Groundwater outflows from the Medina/Diversion Lake system have also long been recognized as a potentially important additional source of recharge. Puente (1978) published methods for producing monthly and annual estimates of the potential recharge to the Edwards aquifer from the Medina/Diversion Lake system. During October 1995–September 1996, the USGS conducted a study to better define short-term rates of recharge and to reduce the error and uncertainty associated with estimates of monthly recharge from the Medina/Diversion Lake system (Lambert and others, 2000). As a follow-up to that study, Slattery and Miller (2017) published estimates of groundwater outflows from detailed water budgets for the Medina/Diversion Lake system during 1955–1964, 1995–1996, and 2001–2002. The water budgets were compiled for selected periods during which the water-budget components were inferred to be relatively stable and the influence of precipitation, stormwater runoff, and changes in storage were presumably minimal. Linear regression analysis techniques were used by Slattery and Miller (2017) to assess the relation between the stage in Medina Lake and groundwater outflows from the Medina/Diversion Lake system.
Comparison of several methods for estimating low speed stability derivatives
NASA Technical Reports Server (NTRS)
Fletcher, H. S.
1971-01-01
Methods presented in five different publications have been used to estimate the low-speed stability derivatives of two unpowered airplane configurations. One configuration had unswept lifting surfaces; the other was the D-558-II swept-wing research airplane. The results of the computations were compared with each other, with existing wind-tunnel data, and with flight-test data for the D-558-II configuration to assess the relative merits of the methods for estimating derivatives. The results of the study indicated that, in general, for low subsonic speeds, no single publication's methods appeared consistently better for estimating all derivatives.
Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes
NASA Astrophysics Data System (ADS)
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
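A minimal sketch of the integrated-Gaussian-process idea described above, assuming a squared-exponential covariance on the rate and ignoring the errors-in-variables time uncertainty and GIA correction; all kernel parameters, observation times, and values are placeholders, not the study's data or fitted model.

```python
# Sea level S(t) is modeled as the integral of a rate process r(u) with a GP
# prior; the posterior mean of r at a chosen time follows from standard GP
# regression with numerically integrated covariances.
import numpy as np

def k_rate(u, v, amp=1.0, ell=300.0):
    """Squared-exponential covariance of the rate process (mm/yr, yr)."""
    return amp**2 * np.exp(-0.5 * ((u[:, None] - v[None, :]) / ell) ** 2)

def integration_operator(t_pts, grid):
    """Quadrature weights approximating S(t) = int_0^t r(u) du on a grid."""
    w = np.gradient(grid)
    return (grid[None, :] <= t_pts[:, None]) * w[None, :]

grid = np.linspace(0.0, 2000.0, 400)                  # years CE
t_obs = np.array([200., 600., 1000., 1400., 1800., 1950., 2000.])
y = np.array([0., 10., 25., 45., 80., 140., 180.])    # relative sea level, mm (toy)
sigma = 5.0                                           # observation noise, mm

A = integration_operator(t_obs, grid)
K = k_rate(grid, grid)
K_SS = A @ K @ A.T + sigma**2 * np.eye(t_obs.size)    # covariance of observed S

t_star = np.array([1900.0])
K_rS = k_rate(t_star, grid) @ A.T                     # cov(rate at t*, observed S)
rate_posterior_mean = K_rS @ np.linalg.solve(K_SS, y) # instantaneous rate, mm/yr
print(rate_posterior_mean)
```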
Hand surface area estimation formula using 3D anthropometry.
Hsu, Yao-Wen; Yu, Chi-Yuang
2010-11-01
Hand surface area is an important reference in occupational hygiene and many other applications. This study derives a formula for the palm surface area (PSA) and hand surface area (HSA) based on three-dimensional (3D) scan data. Two hundred seventy subjects, 135 males and 135 females, were recruited for this study. The hand was measured using a high-resolution 3D hand scanner. Precision and accuracy of the scanner are within 0.67%. Both the PSA and HSA were computed using the triangular mesh summation method. A comparison between this study and previous textbook values (such as in the U.K. teaching text and the Lund and Browder chart discussed in the article) was performed first to show that previous textbooks overestimated the PSA by 12.0% and the HSA by 8.7% (for males, PSA 8.5% and HSA 4.7%; for females, PSA 16.2% and HSA 13.4%). Six 1D measurements were then extracted semiautomatically for use as candidate estimators for the PSA and HSA estimation formulas. Stepwise regressions on these six 1D measurements and a variable dependency test were performed. Results show that a pair of measurements (hand length and hand breadth) was able to account for 96% of the HSA variance and up to 98% of the PSA variance. A test of the gender-specific formula indicated that gender is not a significant factor in either the PSA or HSA estimation.
Reassessment of the predatory effects of rainbow smelt on ciscoes in Lake Superior
Myers, Jared T.; Jones, Michael L.; Stockwell, Jason D.; Yule, Daniel L.
2009-01-01
Evidence from small lakes suggests that predation on larval ciscoes Coregonus artedi by nonnative rainbow smelt Osmerus mordax can lead to cisco suppression or extirpation. However, evidence from larger lakes has led to equivocal conclusions. In this study, we examine the potential predation effects of rainbow smelt in two adjacent but contrasting embayments in Lake Superior (Thunder and Black bays, Ontario). During May 2006, we sampled the ichthyoplankton, pelagic fish communities, and diet composition of rainbow smelt in both bays. Using acoustics and midwater trawling, we estimated rainbow smelt densities to be 476 ± 34/ha (mean ± SE) in Thunder Bay and 3,435 ± 460/ha in Black Bay. We used a bioenergetics model to estimate the proportion of cisco larvae consumed by rainbow smelt. Our results suggest that predation by rainbow smelt accounts for 15–52% and 37–100% of the mortality of larval ciscoes in Thunder and Black bays, respectively, depending on the predator feeding rate and the scale of predator–prey overlap. We also examined the sensitivity of past conclusions (based on 1974 field collections) to assumptions of temporal overlap between rainbow smelt and larval ciscoes and estimates of rainbow smelt abundance derived from bottom trawl samples. After adjusting these parameters to reflect current understanding, we found that the previous predation estimates may have been conservative. We conclude that rainbow smelt may have been a more important contributor to the demise and slow recovery of ciscoes in Lake Superior than previously thought.
Brownian motion with adaptive drift for remaining useful life prediction: Revisited
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by using Kalman filtering once a new measurement was available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption used in the state space modelling was that in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which caused a contradiction with the predicted drift coefficient evolution driven by an additive Gaussian process noise. In this paper, to alleviate such an underlying assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion that theoretically explains the main reasons why the constructed state space model can result in high remaining useful life prediction accuracies is provided. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
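A minimal sketch of the state-space idea summarized above: the drift coefficient is treated as a random-walk state and updated by a scalar Kalman filter from observed degradation increments. The measurement model, noise values, and simulated path are illustrative assumptions, not the paper's exact formulation.

```python
# Kalman-filter update of an adaptive drift coefficient in a linear
# Brownian-motion degradation model: x_k = x_{k-1} + lam_k*dt + sigB*dW_k.
import numpy as np

def update_drift(lam_prev, P_prev, dx, dt, Q=1e-4, sigB=0.05):
    """One predict/update step for the drift state lam given increment dx."""
    # predict: drift evolves as a random walk driven by process noise Q
    lam_pred, P_pred = lam_prev, P_prev + Q
    # update: measurement model dx = lam*dt + Brownian noise (variance sigB^2*dt)
    S = dt**2 * P_pred + sigB**2 * dt
    K = P_pred * dt / S                       # Kalman gain
    lam_post = lam_pred + K * (dx - lam_pred * dt)
    P_post = (1.0 - K * dt) * P_pred
    return lam_post, P_post

# simulate a degradation path with slowly varying true drift, then filter it
rng = np.random.default_rng(0)
dt, n = 1.0, 200
true_lam = 0.02 + 0.01 * np.sin(np.linspace(0, 3, n))
dx = true_lam * dt + 0.05 * np.sqrt(dt) * rng.standard_normal(n)
lam, P, est = 0.0, 1.0, []
for d in dx:
    lam, P = update_drift(lam, P, d, dt)
    est.append(lam)
print(est[-1], true_lam[-1])                  # filtered vs. true drift at the end
```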
Graphic comparison of reserve-growth models for conventional oil and gas accumulations
Klett, T.R.
2003-01-01
The U.S. Geological Survey (USGS) periodically assesses crude oil, natural gas, and natural gas liquids resources of the world. The assessment procedure requires estimated recoverable oil and natural gas volumes (field size, cumulative production plus remaining reserves) in discovered fields. Because initial reserves are typically conservative, subsequent estimates increase through time as these fields are developed and produced. The USGS assessment of petroleum resources makes estimates, or forecasts, of the potential additions to reserves in discovered oil and gas fields resulting from field development, and it also estimates the potential fully developed sizes of undiscovered fields. The term "reserve growth" refers to the commonly observed upward adjustment of reserve estimates. Because such additions are related to increases in the total size of a field, the USGS uses field sizes to model reserve growth. Future reserve growth in existing fields is a major component of remaining U.S. oil and natural gas resources and has therefore become a necessary element of U.S. petroleum resource assessments. Past and currently proposed reserve-growth models compared herein aid in the selection of a suitable set of forecast functions to provide an estimate of potential additions to reserves from reserve growth in the ongoing National Oil and Gas Assessment Project (NOGA). Reserve growth is modeled by construction of a curve that represents annual fractional changes of recoverable oil and natural gas volumes (for fields and reservoirs), which provides growth factors. Growth factors are used to calculate forecast functions, which are sets of field- or reservoir-size multipliers. Comparisons of forecast functions were made based on datasets used to construct the models, field type, modeling method, and length of forecast span. Comparisons were also made between forecast functions based on field-level and reservoir-level growth, and between forecast functions based on older and newer data. The reserve-growth model used in the 1995 USGS National Assessment and the model currently used in the NOGA project provide forecast functions that yield similar estimates of potential additions to reserves. Both models are based on the Oil and Gas Integrated Field File from the Energy Information Administration (EIA), but different vintages of data (from 1977 through 1991 and 1977 through 1996, respectively). The model based on newer data can be used in place of the previous model, providing similar estimates of potential additions to reserves. Forecast functions for oil fields vary little from those for gas fields in these models; therefore, a single function may be used for both oil and gas fields, like that used in the USGS World Petroleum Assessment 2000. Forecast functions based on the field-level reserve growth model derived from the NRG Associates databases (from 1982 through 1998) differ from those derived from EIA databases (from 1977 through 1996). However, the difference may not be enough to preclude the use of the forecast functions derived from NRG data in place of the forecast functions derived from EIA data. Should the model derived from NRG data be used, separate forecast functions for oil fields and gas fields must be employed. The forecast function for oil fields from the model derived from NRG data varies significantly from that for gas fields, and a single function for both oil and gas fields may not be appropriate.
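To make the growth-factor/forecast-function relationship concrete, the short sketch below turns a series of assumed annual fractional changes into cumulative field-size multipliers; the numbers are invented for illustration and are not USGS or EIA data.

```python
# Annual fractional changes (growth factors) compounded into a forecast
# function of cumulative field-size multipliers.
import numpy as np

annual_fraction = np.array([0.10, 0.07, 0.05, 0.04, 0.03])   # assumed year-over-year growth
forecast_function = np.cumprod(1.0 + annual_fraction)        # size multipliers by year
print(forecast_function)
# e.g., a field booked at 10 MMbbl would be forecast to grow to
# 10 * forecast_function[-1] MMbbl after five years of development.
```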
Bansal, Dipanshu; Aref, Amjad; Dargush, Gary; ...
2016-07-20
Based on thermodynamic principles, we derive expressions quantifying the non-harmonic vibrational behavior of materials, which are rigorous yet easily evaluated from experimentally available data for the thermal expansion coefficient and the phonon density of states. These experimentally-derived quantities are valuable to benchmark first-principles theoretical predictions of harmonic and non-harmonic thermal behaviors using perturbation theory, ab initio molecular-dynamics, or Monte-Carlo simulations. In this study, we illustrate this analysis by computing the harmonic, dilational, and anharmonic contributions to the entropy, internal energy, and free energy of elemental aluminum and the ordered compound FeSi over a wide range of temperature. Our results agree well with previous data in the literature and provide an efficient approach to estimate anharmonic effects in materials.
NASA Astrophysics Data System (ADS)
Johannesson, K. H.; Chevis, D.; Burdige, D. J.; Cable, J. E.; Martin, J. B.; Roy, M.
2008-12-01
Johannesson and Burdige [2007, EPSL 253, 129] suggested that submarine groundwater discharge (SGD) represents a substantial, unrecognized source of Nd to the oceans. Based on a globally averaged terrestrial SGD flux equal to 6 percent of the global river discharge and mean groundwater Nd concentrations obtained from the literature, we estimated a global SGD Nd flux that was within a factor of 2 of the previously proposed missing global Nd flux. To test our hypothesis that SGD is an important source of Nd to the oceans, rare earth element (REE) concentrations were measured in SGD samples collected beneath a coastal lagoon on the Florida Atlantic coast (Indian River Lagoon). Shale (PAAS)-normalized REE patterns for all SGD samples exhibit substantial enrichments in the heavy REEs (HREE) compared to the light REEs (LREE) as shown by their PAAS-normalized Yb/Nd ratios, which range from 5 to 73 (mean = 16). SGD from piezometers located 10 m and 22.5 m from shore exhibit PAAS-normalized REE plots that are most similar to the patterns of the overlying lagoon (surface) water. For example, mean PAAS-normalized Yb/Nd ratios for groundwaters sampled from the 10 m and 22.5 m piezometers are 6.7 and 8.3, which compare well with the PAAS- normalized Yb/Nd ratio of water column samples (8.7). In contrast, the mean PAAS-normalized Yb/Nd ratio of terrestrial-derived groundwater from the piezometer at the shoreline is 41. Neodymium concentrations of the SGD samples range from 230 to 2400 pmol/kg (mean = 507 pmol/kg), and thus are substantially higher than reported for open ocean seawater (typical Nd = 20 pmol/kg). Based on SGD fluxes previously determined with seepage meters, porewater Cl concentrations, and Rn-222 deficiencies of porewaters [Martin et al., 2007, Water Resour. Res. 43, W0544, doi: 10.1029/2006WR005266], we estimate daily inputs of Nd to the Indian River Lagoon of 50 to 2100 umoles for the terrestrial-derived component of SGD, and 171 mmoles for the marine component of SGD (81 to 3400 times greater). Residence times of Nd in the portion of the lagoon studied are estimated to range from 6 to more than 250 years based on the terrestrial-derived SGD flux of Nd, compared to 26 days using the marine-derived SGD flux of Nd. The substantially shorter residence time determined using the marine-derived SGD component compares well with the estimated flushing time for this portion of the estuary (~3 weeks). The similarity between SGD and lagoon water Nd concentrations and PAAS-normalized REE patterns, in conjunction with the larger, marine-derived SGD flux of Nd, strongly suggests that recirculation of lagoon water and subsequent SGD exerts the principal control on Nd concentrations in the lagoon. The elevated Nd concentration for deep groundwater (186 cmbsf) located 22.5 m from shore also agrees well with another study that reported recirculated, marine SGD as a source of REEs to coastal waters [Duncan and Shaw, 2003, Aquatic Geochem. 9, 233]. Thus, our observations demonstrate the importance of recirculated, marine SGD to these lagoon surface waters, and further support our hypothesis that SGD contributes substantial fluxes of Nd to the coastal oceans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frey, K.A.; Hichwa, R.D.; Ehrenkaufer, R.L.
1985-10-01
A tracer kinetic method is developed for the in vivo estimation of high-affinity radioligand binding to central nervous system receptors. Ligand is considered to exist in three brain pools corresponding to free, nonspecifically bound, and specifically bound tracer. These environments, in addition to that of intravascular tracer, are interrelated by a compartmental model of in vivo ligand distribution. A mathematical description of the model is derived, which allows determination of regional blood-brain barrier permeability, nonspecific binding, the rate of receptor-ligand association, and the rate of dissociation of bound ligand, from the time courses of arterial blood and tissue tracer concentrations. The term "free receptor density" is introduced to describe the receptor population measured by this method. The technique is applied to the in vivo determination of regional muscarinic acetylcholine receptors in the rat, with the use of [3H]scopolamine. Kinetic estimates of free muscarinic receptor density are in general agreement with binding capacities obtained from previous in vivo and in vitro equilibrium binding studies. In the striatum, however, kinetic estimates of free receptor density are less than those in the neocortex--a reversal of the rank ordering of these regions derived from equilibrium determinations. A simplified model is presented that is applicable to tracers that do not readily dissociate from specific binding sites during the experimental period.
Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi
2015-01-01
Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. The standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIF(NS)) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF(1S)). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIF(NS)-, and EIF(1S)-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIF(NS) was highly correlated with those derived from AIF and EIF(1S). A preliminary comparison between AIF and EIF(NS) in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIF(NS) method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947
NASA Astrophysics Data System (ADS)
Zeng, Chen; Rosengard, Sarah Z.; Burt, William; Peña, M. Angelica; Nemcek, Nina; Zeng, Tao; Arrigo, Kevin R.; Tortell, Philippe D.
2018-06-01
We evaluate several algorithms for the estimation of phytoplankton size class (PSC) and functional type (PFT) biomass from ship-based optical measurements in the Subarctic Northeast Pacific Ocean. Using underway measurements of particulate absorption and backscatter in surface waters, we derived estimates of PSC/PFT based on chlorophyll-a concentrations (Chl-a), particulate absorption spectra and the wavelength dependence of particulate backscatter. Optically-derived [Chl-a] and phytoplankton absorption measurements were validated against discrete calibration samples, while the derived PSC/PFT estimates were validated using size-fractionated Chl-a measurements and HPLC analysis of diagnostic photosynthetic pigments (DPA). Our results show that PSC/PFT algorithms based on [Chl-a] and particulate absorption spectra performed significantly better than the backscatter slope approach. These two more successful algorithms yielded estimates of phytoplankton size classes that agreed well with HPLC-derived DPA estimates (RMSE = 12.9% and 16.6%, respectively) across a range of hydrographic and productivity regimes. Moreover, the [Chl-a] algorithm produced PSC estimates that agreed well with size-fractionated [Chl-a] measurements, and estimates of the biomass of specific phytoplankton groups that were consistent with values derived from HPLC. Based on these results, we suggest that simple [Chl-a] measurements should be more fully exploited to improve the classification of phytoplankton assemblages in the Northeast Pacific Ocean.
Webb, Elisabeth B.; Fowler, Drew N.; Woodall, Brendan A.; Vrtiska, Mark P.
2018-01-01
Assessing nutrient stores in avian species is important for understanding the extent to which body condition influences success or failure in life‐history events. We evaluated predictive models using morphometric characteristics to estimate total body lipids (TBL) and total body protein (TBP), based on traditional proximate analyses, in spring migrating lesser snow geese (Anser caerulescens caerulescens) and Ross's geese (A. rossii). We also compared performance of our lipid model with a previously derived predictive equation for TBL developed for nesting lesser snow geese. We used external and internal measurements on 612 lesser snow and 125 Ross's geese collected during spring migration in 2015 and 2016 within the Central and Mississippi flyways to derive and evaluate predictive models. Using a validation data set, our best performing lipid model for snow geese better predicted TBL (root mean square error [RMSE] of 23.56) compared with a model derived from nesting individuals (RMSE = 48.60), suggesting the importance of season‐specific models for accurate lipid estimation. Models that included body mass and abdominal fat deposit best predicted TBL determined by proximate analysis in both species (lesser snow goose, R2 = 0.87, RMSE = 23.56: Ross's geese, R2 = 0.89, RMSE = 13.75). Models incorporating a combination of external structural measurements in addition to internal muscle and body mass best predicted protein values (R2 = 0.85, RMSE = 19.39 and R2 = 0.85, RMSE = 7.65, lesser snow and Ross's geese, respectively), but protein models including only body mass and body size were also competitive and provided extended utility to our equations for field applications. Therefore, our models indicated the importance of specimen dissection and measurement of the abdominal fat pad to provide the most accurate lipid estimates and provide alternative dissection‐free methods for estimating protein.
Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M
2017-04-01
Parameter uncertainty in the value sets of multiattribute utility-based instruments (MAUIs) has received little attention previously; ignoring it yields falsely precise utilities and leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty in MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults (n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effect model (i.e., incorporating parameter uncertainty in the value set) with the standard error of the estimated mean utilities based on multiple imputation and the standard error using the conventional approach of using the MAUI (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effect model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring uncertainty in the predicted health utilities derived from MAUIs could lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
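A schematic sketch of the multiple-imputation step described above: score the sample under M draws of an uncertain value set and pool the mean utility with Rubin's rules. The additive perturbation standing in for posterior draws of the value set, and all numerical values, are placeholders rather than the study's fitted Bayesian model.

```python
# Multiple imputation over an uncertain scoring algorithm, pooled with
# Rubin's rules (within- plus between-imputation variance).
import numpy as np

rng = np.random.default_rng(1)
M, n = 20, 3958
base_utility = rng.uniform(0.3, 1.0, size=n)     # stand-in "point-estimate" utilities
value_set_sd = 0.05                              # assumed uncertainty in the value set

means, variances = [], []
for m in range(M):
    shift = rng.normal(0.0, value_set_sd)        # one simulated draw of the value set
    u_m = np.clip(base_utility + shift, -0.59, 1.0)
    means.append(u_m.mean())
    variances.append(u_m.var(ddof=1) / n)        # within-imputation variance of the mean

qbar = np.mean(means)                            # pooled mean utility
W = np.mean(variances)                           # average within-imputation variance
B = np.var(means, ddof=1)                        # between-imputation variance
T = W + (1 + 1.0 / M) * B                        # Rubin's total variance
print(qbar, np.sqrt(T))                          # pooled mean and its standard error
```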
Estimating Source Duration for Moderate and Large Earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei
2017-04-01
Constructing a relationship between seismic moment (M0) and source duration (t) is important for seismic hazard assessment in Taiwan, where earthquakes are quite active. In this study, we used a proposed inversion process using teleseismic P-waves to derive the M0-t relationship in the Taiwan region for the first time. Fifteen earthquakes with Mw 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process could simultaneously determine source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, by which M0 and fault plane solutions were estimated. Results showed that the estimated t, ranging from 2.7 to 24.9 sec, varied with the one-third power of M0. That is, M0 is proportional to t^3, and the relationship between them is M0 = 0.76 x 10^23 t^3, where M0 is in dyne-cm and t in seconds. The M0-t relationship derived from this study is very close to those determined from global moderate to large earthquakes. To further check the validity of the derived relationship, we used it to infer the source duration of the 1999 Chi-Chi (Taiwan) earthquake, with M0 = 2-5 x 10^27 dyne-cm (corresponding to Mw = 7.5-7.7), to be approximately 29-40 sec, in agreement with many previous studies of source duration (28-42 sec).
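A quick numerical check of the quoted scaling, applied to the Chi-Chi moment range given above (the variable names are arbitrary):

```python
# Invert M0 = 0.76e23 * t**3 (M0 in dyne-cm, t in seconds) for the duration.
for M0 in (2e27, 5e27):
    t = (M0 / 0.76e23) ** (1.0 / 3.0)
    print(f"M0 = {M0:.1e} dyne-cm -> t ~ {t:.1f} s")
# -> roughly 30 s and 40 s, matching the 29-40 s range quoted in the abstract.
```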
Verdin, Andrew; Funk, Christopher C.; Rajagopalan, Balaji; Kleiber, William
2016-01-01
Robust estimates of precipitation in space and time are important for efficient natural resource management and for mitigating natural hazards. This is particularly true in regions with developing infrastructure and regions that are frequently exposed to extreme events. Gauge observations of rainfall are sparse but capture the precipitation process with high fidelity. Due to its high resolution and complete spatial coverage, satellite-derived rainfall data are an attractive alternative in data-sparse regions and are often used to support hydrometeorological early warning systems. Satellite-derived precipitation data, however, tend to underrepresent extreme precipitation events. Thus, it is often desirable to blend spatially extensive satellite-derived rainfall estimates with high-fidelity rain gauge observations to obtain more accurate precipitation estimates. In this research, we use two different methods, namely, ordinary kriging and κ-nearest neighbor local polynomials, to blend rain gauge observations with the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates in data-sparse Central America and Colombia. The utility of these methods in producing blended precipitation estimates at pentadal (five-day) and monthly time scales is demonstrated. We find that these blending methods significantly improve the satellite-derived estimates and are competitive in their ability to capture extreme precipitation.
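As a sketch of the residual-blending idea (satellite field plus interpolated gauge-minus-satellite residuals), the code below implements a small ordinary-kriging step with an assumed exponential covariance; the coordinates, variogram parameters, and rainfall values are invented for illustration and do not reproduce the study's fitted models.

```python
# Blend a satellite precipitation estimate with gauge residuals interpolated
# by ordinary kriging (exponential covariance, assumed parameters).
import numpy as np

def ok_weights(xy_obs, xy_tgt, sill=1.0, rng_km=150.0, nugget=0.05):
    """Ordinary-kriging weights for one target point."""
    d_oo = np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=-1)
    d_ot = np.linalg.norm(xy_obs - xy_tgt, axis=-1)
    cov = lambda d: sill * np.exp(-d / rng_km) + nugget * (d == 0)
    n = len(xy_obs)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d_oo); A[-1, -1] = 0.0
    b = np.append(cov(d_ot), 1.0)            # Lagrange row enforces sum(w) = 1
    return np.linalg.solve(A, b)[:n]

# gauge locations (km), gauge rainfall, and collocated satellite estimates (mm)
xy_g = np.array([[0., 0.], [50., 10.], [20., 80.], [90., 60.]])
gauge = np.array([12., 20., 5., 16.])
sat_at_gauge = np.array([9., 15., 8., 12.])
resid = gauge - sat_at_gauge                  # gauge-minus-satellite residuals

xy_t = np.array([40., 40.])                   # one satellite grid cell
sat_t = 11.0
w = ok_weights(xy_g, xy_t)
blended = sat_t + w @ resid                   # blended precipitation estimate
print(blended)
```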
Revised techniques for estimating peak discharges from channel width in Montana
Parrett, Charles; Hull, J.A.; Omang, R.J.
1987-01-01
This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr, 5-yr, and 10-yr floods, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) the equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) the measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; and (3) the reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances and averaged. The weighted average estimate has a variance less than either individual estimate.
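A small numerical illustration of the inverse-variance weighting described at the end of this abstract; the discharge values and variances below are made up for the example.

```python
# Combine a channel-width-based peak-discharge estimate with an independent
# basin-characteristics estimate by inverse-variance weighting.
Q_width, V_width = 850.0, 120.0**2      # estimate (ft^3/s) and its variance
Q_basin, V_basin = 1020.0, 200.0**2

w1, w2 = 1.0 / V_width, 1.0 / V_basin
Q_weighted = (w1 * Q_width + w2 * Q_basin) / (w1 + w2)
V_weighted = 1.0 / (w1 + w2)            # smaller than either input variance
print(Q_weighted, V_weighted**0.5)
```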
Dependence of muscle moment arms on in-vivo three-dimensional kinematics of the knee
Navacchia, Alessandro; Kefala, Vasiliki; Shelburne, Kevin B.
2016-01-01
Quantification of muscle moment arms is important for clinical evaluation of muscle pathology and treatment, and for estimating muscle and joint forces in musculoskeletal models. Moment arms estimated with musculoskeletal models often assume a default motion of the knee derived from measurements of passive cadaveric flexion. However, knee kinematics are unique to each person and activity. The objective of this study was to estimate moment arms of the knee muscles with in vivo subject- and activity-specific kinematics from seven healthy subjects performing seated knee extension and single-leg lunge to show changes between subjects and activities. 3D knee motion was measured with a high-speed stereo-radiography system. Moment arms of ten muscles were estimated in OpenSim by replacing the default knee motion with in vivo measurements. Estimated inter-subject moment arm variability was similar to previously reported in vitro measurements. RMS deviations up to 9.0 mm (35.2% of peak value) were observed between moment arms estimated with subject-specific knee extension and passive cadaveric motion. The degrees of freedom that most impacted inter-activity differences were superior/inferior and anterior/posterior translations. Musculoskeletal simulations used to estimate in vivo muscle forces and joint loads may provide significantly different results when subject- and activity-specific kinematics are implemented. PMID:27620064
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, J; Zhang, L; Samei, E
Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities which are weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body contour, lateral lung contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation further did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.) potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51-pixel ROIs with and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true positive and true negative rates of 92.7% and 96.9%, respectively. Conclusion: Updated segmentation and centerline estimation methods in addition to new gradient-based hardware detection software provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.
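A simplified sketch of the FCM step described above: intensities are clustered into two classes and the lung membership is weighted by a prior probability map. The synthetic intensities, the stand-in prior map, and the threshold are assumptions, and the neighborhood term and boundary constraints of the full pipeline are omitted.

```python
# Two-cluster fuzzy c-means on pixel intensities, weighted by a prior lung map.
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """Standard FCM membership update for 1-D intensities x and given centers."""
    d = np.abs(x[:, None] - centers[None, :]) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
img = rng.normal([0.3, 0.7], 0.08, size=(5000, 2)).ravel()   # mix of dark (lung) and bright pixels
prior_lung = rng.uniform(0.0, 1.0, size=img.size)            # stand-in clinically derived map

centers = np.array([0.3, 0.7])                # initial lung (dark) / non-lung (bright) centers
for _ in range(10):                           # alternate membership and center updates
    U = fcm_memberships(img, centers)
    centers = (U.T ** 2 @ img) / (U.T ** 2).sum(axis=1)
U = fcm_memberships(img, centers)             # memberships at the converged centers

lung_prob = U[:, 0] * prior_lung              # weight FCM output by the prior map
lung_mask = lung_prob > 0.25                  # threshold chosen arbitrarily here
print(centers, lung_mask.mean())
```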
Antarctic ice shelf thickness from CryoSat-2 radar altimetry
NASA Astrophysics Data System (ADS)
Chuter, Stephen; Bamber, Jonathan
2016-04-01
The Antarctic ice shelves provide buttressing to the inland grounded ice sheet, and therefore play a controlling role in regulating ice dynamics and mass imbalance. Accurate knowledge of ice shelf thickness is essential for input-output method mass balance calculations, sub-ice shelf ocean models and buttressing parameterisations in ice sheet models. Ice shelf thickness has previously been inferred from satellite altimetry elevation measurements using the assumption of hydrostatic equilibrium, as direct measurements of ice thickness do not provide the spatial coverage necessary for these applications. The sensor limitations of previous radar altimeters have led to poor data coverage and a lack of accuracy, particularly in the grounding zone where a break in slope exists. We present a new ice shelf thickness dataset using four years (2011-2014) of CryoSat-2 elevation measurements, with its SARIn dual-antenna mode of operation alleviating the issues affecting previous sensors. These improvements and the dense across-track spacing of the satellite have resulted in ~92% coverage of the ice shelves, with substantial improvements, for example, of over 50% across the Venable and Totten Ice Shelves in comparison to the previous dataset. Significant improvements in coverage and accuracy are also seen south of 81.5° for the Ross and Filchner-Ronne Ice Shelves. Validation of the surface elevation measurements, used to derive ice thickness, against NASA ICESat laser altimetry data shows a mean bias of less than 1 m (equivalent to less than 9 m in ice thickness) and a fourfold decrease in standard deviation in comparison to the previous continental dataset. Importantly, the most substantial improvements are found in the grounding zone. Validation of the derived thickness data has been carried out using multiple Radio Echo Sounding (RES) campaigns across the continent. Over the Amery Ice Shelf, where extensive RES measurements exist, the mean difference between the datasets is 3.3% and 4.7% across the whole shelf and within 10 km of the grounding line, respectively. These represent a two- to threefold improvement in accuracy when compared to the previous data product. The impact of these improvements on input-output estimates of mass balance is illustrated for the Abbot Ice Shelf. Our new product shows a mean reduction of 29% in thickness at the grounding line when compared to the previous dataset, as well as the elimination of non-physical 'data spikes' that were prevalent in the previous product in areas of complex terrain. The reduction in grounding line thickness equates to a change in mass balance for the area from -14±9 Gt yr-1 to -4±9 Gt yr-1. We show examples from other sectors including the Getz and George VI Ice Shelves. The updated estimate is more consistent with the positive surface elevation rate in this region obtained from satellite altimetry. The new thickness dataset will greatly reduce the uncertainty in input-output estimates of mass balance for the ~30% of the grounding line of Antarctica where direct ice thickness measurements do not exist.
GPD+ wet tropospheric corrections for eight altimetric missions for the Sea Level ECV generation
NASA Astrophysics Data System (ADS)
Fernandes, Joana; Lázaro, Clara; Benveniste, Jérôme
2016-04-01
Due to its large spatio-temporal variability, the delay induced by the water vapour and liquid water content of the atmosphere in the altimeter signal, or wet tropospheric correction (WTC), is still one of the largest sources of uncertainty in satellite altimetry. In the scope of the Sea Level (SL) Climate Change Initiative (cci) project, the University of Porto (UPorto) has been developing methods to improve the WTC (Fernandes et al., 2015). Started as a coastal algorithm to remove land effects in the microwave radiometers (MWR) on board altimeter missions, the GNSS-derived Path Delay (GPD) methodology evolved to cover the open ocean, including high latitudes, correcting for invalid observations due to land, ice and rain contamination, and instrument malfunction. The most recent version of the algorithm, GPD Plus (GPD+), computes wet path delays based on: i) WTC from the on-board MWR measurements, whenever they exist and are valid; ii) new WTC values estimated through space-time objective analysis of all available data sources, whenever the previous are considered invalid. In the estimation of the new WTC values, the following data sets are used: valid measurements from the on-board MWR, water vapour products derived from a set of 17 scanning imaging radiometers (SI-MWR) on board various remote sensing satellites, and tropospheric delays derived from Global Navigation Satellite Systems (GNSS) coastal and island stations. In the estimation process, the WTC derived from an atmospheric model such as the European Centre for Medium-Range Weather Forecasts (ECMWF) ReAnalysis (ERA) Interim or the operational (Op) model is used as first guess, which is the adopted value in the absence of measurements. The corrections are provided for all missions used to generate the SL Essential Climate Variable (ECV): TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-1, ERS-2, Envisat, CryoSat-2 and SARAL/AltiKa. To ensure consistency and long-term stability of the WTC datasets, the radiometers used in the GPD+ estimations have been inter-calibrated against the stable and independently-calibrated Special Sensor Microwave Imager (SSM/I) and SSM/I Sounder (SSM/IS) sensors on board the Defense Meteorological Satellite Program satellite series (F10, F11, F13, F14, F16 and F17). The new products reduce the sea level anomaly variance, both along-track and at crossovers, with respect to previous non-calibrated versions and to other WTC data sets such as the AVISO Composite (Comp) correction and atmospheric models. Improvements are particularly significant for T/P and all ESA missions, especially in coastal regions and at high latitudes. In comparison with previous GPD versions, the main impacts are on the sea level trends at decadal time scales and on regional sea level trends. For CryoSat-2, the GPD+ WTC improves the SL ECV when compared to the baseline correction from the ECMWF Op model. With a view to obtaining the best WTC for use in version 2 of the SL_cci ECV, new products are under development, based on recently released on-board MWR WTC for missions such as Jason-1, Envisat and SARAL. Fernandes, M.J., Lázaro, C., Ablain, M., Pires, N., Improved wet path delays for all ESA and reference altimetric missions, Remote Sensing of Environment, Volume 169, November 2015, Pages 50-74, ISSN 0034-4257, http://dx.doi.org/10.1016/j.rse.2015.07.023
Stern, Alan H
2005-02-01
In 2001, the U.S. Environmental Protection Agency (EPA) adopted a revised reference dose (RfD) for methyl mercury (MeHg) of 0.1 microg/kg/day. The RfD is based on neurologic developmental effects measured in children associated with exposure in utero to MeHg from the maternal diet. The RfD derivation proceeded from a point of departure based on measured concentration of mercury in fetal cord blood (micrograms per liter). The RfD, however, is a maternal dose (micrograms per kilogram per day). Reconstruction of the maternal dose corresponding to this cord blood concentration, including the variability around this estimate, is a critical step in the RfD derivation. The dose reconstruction employed by the U.S. EPA using the one-compartment pharmacokinetic model contains two areas of significant uncertainty: It does not directly account for the influence of the ratio of cord blood: maternal blood Hg concentration, and it does not resolve uncertainty regarding the most appropriate central tendency estimates for pregnancy and third-trimester-specific model parameters. A probabilistic reassessment of this dose reconstruction was undertaken to address these areas of uncertainty and generally to reconsider the specification of model input parameters. On the basis of a thorough review of the literature and recalculation of the one-compartment model including sensitivity analyses, I estimated that the 95th and 99th percentiles (i.e., the lower 5th and 1st percentiles) of the maternal intake dose corresponding to a fetal cord blood Hg concentration of 58 microg/L are 0.3 and 0.2 microg/kg/day, respectively. For the 99th percentile, this is half the value previously estimated by the U.S. EPA.
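The dose reconstruction in question rests on the standard one-compartment relation dose = (C_blood × b × V)/(A × f × bw), with the maternal blood concentration obtained from the cord blood value through the cord:maternal ratio. The Monte Carlo sketch below shows how lower-tail percentiles of the reconstructed dose can be propagated from parameter uncertainty; every distribution in it is an illustrative placeholder, not the values derived in the reassessment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# One-compartment steady-state model: dose = (C_maternal * b * V) / (A * f * bw).
# All parameter distributions below are illustrative assumptions for this sketch.
cord_blood = 58.0                                            # ug/L, point of departure (given)
R  = rng.lognormal(mean=np.log(1.7), sigma=0.15, size=n)     # cord:maternal blood Hg ratio
b  = rng.normal(0.014, 0.002, n)                             # elimination constant (1/day)
V  = rng.normal(5.0, 0.5, n)                                 # blood volume (L)
A  = 0.95                                                    # fraction of dose absorbed
f  = rng.normal(0.05, 0.01, n)                               # fraction of absorbed dose in blood
bw = rng.normal(65.0, 10.0, n)                               # body weight (kg)

dose = (cord_blood / R) * b * V / (A * f * bw)               # ug/kg/day
p5, p1 = np.percentile(dose, [5, 1])                         # lower-tail (protective) percentiles
print(f"lower 5th pct {p5:.2f}, lower 1st pct {p1:.2f} ug/kg/day")
```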
NASA Astrophysics Data System (ADS)
Molnar, S. M.; Broadhurst, T.
2017-05-01
The colliding cluster, CIZA J2242.8+5301, displays a spectacular, almost 2 Mpc long shock front with a radio-based Mach number M ≃ 5, which is puzzlingly large compared to the X-ray estimate of M ≃ 2.5. The extent to which the X-ray temperature jump is diluted by cooler unshocked gas projected through the cluster currently lacks quantification. Here we apply our self-consistent N-body/hydrodynamical code (based on FLASH) to model this binary cluster encounter. We can account for the location of the shock front and also the elongated X-ray emission by tidal stretching of the gas and dark matter between the two cluster centers. The required total mass is 8.9 × 10¹⁴ M⊙ with a 1.3:1 mass ratio favoring the southern cluster component. The initial relative velocity we derive between the two main cluster components is ≃2500 km s⁻¹, with an impact parameter of 120 kpc. This solution implies that the shock temperature jump derived from the low angular resolution X-ray satellite Suzaku is underestimated by a factor of two, due to cool gas in projection, bringing the observed X-ray and radio estimates into agreement. Finally, we use our model to generate Compton-y maps to estimate the thermal Sunyaev-Zel'dovich (SZ) effect. At 30 GHz, this amounts to ΔS_n = -0.072 mJy/arcmin² and ΔS_s = -0.075 mJy/arcmin² at the locations of the northern and southern shock fronts, respectively. Our model estimate agrees with previous empirical estimates that have inferred that the measured radio spectra of the radio relics can be significantly affected by the SZ effect, with implications for charged-particle acceleration models.
Validating SWE reconstruction using Airborne Snow Observatory measurements in the Sierra Nevada
NASA Astrophysics Data System (ADS)
Bair, N.; Rittger, K.; Davis, R. E.; Dozier, J.
2015-12-01
The Airborne Snow Observatory (ASO) program offers high-resolution estimates of snow water equivalent (SWE) in several small basins across California during the melt season. Primarily, water managers use this information to model snowmelt runoff into reservoirs. Another, and potentially more impactful, use of ASO SWE measurements is in validating and improving satellite-based SWE estimates, which can be used in austere regions with no ground-based snow or water measurements, such as Afghanistan's Hindu Kush. Using the entire ASO dataset to date (2013-2015), which is mostly from the Upper Tuolumne basin but also includes measurements from 2015 in the Kings, Rush Creek, Merced, and Mammoth Lakes basins, we compare ASO measurements to those from a SWE reconstruction method. Briefly, SWE reconstruction involves downscaling energy balance forcings to compute potential melt energy, then using satellite-derived estimates of fractional snow-covered area (fSCA) to estimate snowmelt from potential melt. The snowpack can then be built in reverse, given a remotely sensed date of snow disappearance (fSCA = 0). Our model has improvements over previous iterations in that it: uses the full energy balance (rather than a modified degree-day) approach, models bulk and surface snow temperatures, accounts for ephemeral snow, and uses a remotely sensed snow albedo adjusted for impurities. To check that ASO provides accurate snow measurements, we compare fSCA derived from ASO snow depth at 3 m resolution with fSCA from a spectral unmixing algorithm for Landsat at 30 m, and with binary SCA estimates at 0.5 m from supervised classification of GeoEye imagery. To conclude, we document how our reconstruction model has evolved over the years and provide specific examples where improvements have been made using ASO and other verification sources.
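The reverse-accumulation step described above can be written in a few lines: daily snowmelt is the product of fSCA and potential melt, and SWE on any day is the melt still to come before the observed disappearance date. A minimal sketch with made-up daily values:

```python
import numpy as np

# Reconstruct SWE backward in time from the remotely sensed disappearance date.
# Daily melt = fSCA * potential melt; SWE at the start of a day is all melt yet
# to occur. Values are illustrative (mm w.e. per day), not ASO or model output.
potential_melt = np.array([0., 2., 5., 8., 12., 15., 18., 20., 22., 0.])
fsca           = np.array([1., 1., 1., 0.9, 0.8, 0.6, 0.4, 0.2, 0.05, 0.])

melt = fsca * potential_melt                       # snowmelt produced each day
swe_start_of_day = np.cumsum(melt[::-1])[::-1]     # reconstructed SWE time series
# swe_start_of_day[0] is the reconstructed peak (pre-melt) SWE; it reaches zero
# on the day fSCA first drops to zero.
```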
NASA Astrophysics Data System (ADS)
Marik, Thomas; Levin, Ingeborg
1996-09-01
Methane emission from livestock and agricultural wastes contributes globally more than 30% to the anthropogenic atmospheric methane source. Estimates of this number have been derived from respiration chamber experiments. We determined methane emission rates from a tracer experiment in a modern cow shed hosting 43 dairy cows in their accustomed environment. During a 24-hour period the concentrations of CH4, CO2, and SF6, a trace gas released at a constant rate into the stable air, were measured. The ratio between the SF6 release rate and the measured SF6 concentration was then used to estimate the ventilation rate of the stable air during the course of the experiment. The respective ratio between CH4 or CO2 and SF6 concentration, together with the known SF6 release rate, allows us to calculate the CH4 (and CO2) emissions in the stable. From our experiment we derive a total daily mean CH4 emission of 441 LSTP per cow (9 cows nonlactating), which is about 15% higher than previous estimates for German cows with comparable milk production obtained during respiration chamber experiments. The higher emission in our stable experiment is attributed to the contribution of CH4 release from about 50 m3 of liquid manure present in the cow shed in underground channels. Also, considering measurements we made directly on a liquid manure tank, we obtained an estimate of the total CH4 production from manure: the normalized contribution of methane from manure amounts to 12-30% of the direct methane release of a dairy cow during rumination. The total CH4 release per dairy cow, including manure, is 521-530 LSTP CH4 per day.
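The tracer-ratio arithmetic is compact: the SF6 release rate divided by the excess SF6 concentration gives the ventilation rate, and scaling by the excess CH4 concentration gives the CH4 emission. A sketch with illustrative numbers, not the measured values from the experiment:

```python
# Tracer-ratio estimate of barn ventilation and CH4 emission (volume basis).
q_sf6  = 0.05        # SF6 release rate into the barn air (L/h), illustrative
c_sf6  = 2.0e-8      # measured SF6 mixing ratio above background (L SF6 / L air)
c_ch4  = 3.2e-4      # measured CH4 mixing ratio above background (L CH4 / L air)
n_cows = 43

ventilation = q_sf6 / c_sf6                # barn air exchange (L air / h)
e_ch4_total = q_sf6 * c_ch4 / c_sf6        # CH4 emission (L/h) = ventilation * c_ch4
e_ch4_daily_per_cow = e_ch4_total * 24 / n_cows
print(f"{e_ch4_daily_per_cow:.0f} L CH4 per cow per day")
```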
Prognosis of hepatitis C virus-infected Canadian post-transfusion compensation claimant cohort.
Thein, H-H; Yi, Q; Heathcote, E J; Krahn, M D
2009-11-01
Accurate prognostic estimates were required to ensure the sufficiency of the $1.1 billion compensation fund established in 1998 to compensate Canadians who acquired hepatitis C virus (HCV) infection through blood transfusion between 1986 and 1990. This article reports the application of Markov modelling and epidemiological methods to estimate the prognosis of individuals who have claimed compensation. Clinical characteristics of the claimant cohort (n = 5004) were used to define the starting distribution. Annual stage-specific transition probabilities (F0-->F1, . . ., F3-->F4) were derived from the claimants, using the Markov maximum likelihood estimation method. HCV treatment efficacy was derived from the literature and practice patterns were estimated from a national survey. The estimated stage-specific transition probabilities of the cohort between F0-->F1, F1-->F2, F2-->F3 and F3-->F4 were 0.032, 0.137, 0.150 and 0.097 respectively. At 20 years after the index transfusion, approximately 10% of all living claimants (n = 3773) had cirrhosis and 0.5% developed hepatocellular carcinoma (HCC). For nonhaemophilic patients, the predicted 20-year (2030) risk of HCV-related cirrhosis was 23%, and the risk of HCC and liver-related death was 7% and 11% respectively. Haemophilic patients who are younger and are frequently co-infected with human immunodeficiency virus would have higher 20-year risks of cirrhosis (37%), HCC (12%) and liver-related death (19%). Our results indicate that rates of progression to advanced liver disease in post-transfusion cohorts may be lower than previously reported. The Canadian post-transfusion cohort offers new and relevant prognostic information for post-transfusion HCV patients in Canada and is an invaluable resource to study the natural history and resource utilization of HCV-infected individuals in future studies.
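With the reported annual transition probabilities, the stage structure of a cohort can be projected with a simple Markov chain. The sketch below builds the transition matrix from those four probabilities only; death, HCC, treatment effects and the claimants' actual starting distribution are omitted, so it is a structural illustration rather than the fitted model.

```python
import numpy as np

# Annual fibrosis-stage transition matrix from the reported probabilities:
# F0->F1 0.032, F1->F2 0.137, F2->F3 0.150, F3->F4 0.097.
p = [0.032, 0.137, 0.150, 0.097]
P = np.eye(5)
for i, pi in enumerate(p):
    P[i, i] = 1.0 - pi        # stay in current stage
    P[i, i + 1] = pi          # progress one stage per year

start = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # illustrative: everyone starts in F0
dist_20y = start @ np.linalg.matrix_power(P, 20)
print(f"share in F4 (cirrhosis) after 20 years: {dist_20y[-1]:.1%}")
```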
NASA Astrophysics Data System (ADS)
Salvucci, G.; Rigden, A. J.
2015-12-01
Daily time series of evapotranspiration and surface conductance to water vapor were estimated using the ETRHEQ method (Evapotranspiration from Relative Humidity at Equilibrium). ETRHEQ has been previously compared with AmeriFlux site-level measurements of ET at daily and seasonal time scales, with watershed water balance estimates, and with various benchmark ET data sets. The ETRHEQ method uses meteorological data collected at common weather stations and estimates the surface conductance by minimizing the vertical variance of the calculated relative humidity profile averaged over the day. The key advantage of the ETRHEQ method is that it does not require knowledge of the surface state (soil moisture, stomatal conductance, leaf area index, etc.) or site-specific calibration. The daily estimates of conductance from 229 weather stations for 53 years were analyzed for dependence on environmental variables known to impact stomatal conductance and soil diffusivity: surface temperature, surface vapor pressure deficit, solar radiation, antecedent precipitation (as a surrogate for soil moisture), and a seasonal vegetation greenness index. At each site the summertime (JJAS) conductance values estimated from ETRHEQ were fitted to a multiplicative Jarvis-type stress model. Functional dependence was not prescribed, but instead fitted using flexible piecewise-linear splines. The resulting stress functions reproduce the time series of conductance across a wide range of ecosystems and climates. The VPD stress term resembles that proposed by Oren (i.e., 1 - m*log(VPD)), with VPD measured in kilopascals. The equivalent value of m derived from our spline fits at each station varied over a remarkably small range of 0.58 to 0.62, in agreement with Oren's original analysis based on leaf and tree-level measurements.
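The Oren-type term mentioned at the end is easy to state in code. The sketch below evaluates f(VPD) = 1 - m·ln(VPD) with m = 0.6 and scales a reference conductance by it; the reference value and the omission of the other Jarvis terms are illustrative simplifications.

```python
import numpy as np

# Oren-type vapour pressure deficit stress term, f(VPD) = 1 - m*ln(VPD),
# with VPD in kPa and m near 0.6 as found for the fitted splines.
def vpd_stress(vpd_kpa, m=0.6):
    f = 1.0 - m * np.log(vpd_kpa)
    return np.clip(f, 0.0, None)          # conductance cannot go negative

# Surface conductance as a reference conductance scaled by the stress term
# (radiation, temperature and soil-moisture terms of the Jarvis model omitted).
g_ref = 10.0                              # mm/s at VPD = 1 kPa (illustrative)
for vpd in (0.5, 1.0, 2.0, 4.0):
    print(f"VPD = {vpd} kPa -> g = {g_ref * vpd_stress(vpd):.1f} mm/s")
```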
Vavougios, George D; Doskas, Triantafyllos; Konstantopoulos, Kostas
2018-05-01
Dysarthrophonia is a predominant symptom in many neurological diseases, affecting the quality of life of the patients. In this study, we produced a discriminant function equation that can differentiate MS patients from healthy controls, using electroglottographic variables not analyzed in a previous study. We applied stepwise linear discriminant function analysis in order to produce a function and score derived from electroglottographic variables extracted from a previous study. The derived discriminant function's statistical significance was determined via the Wilks' λ test (and the associated p value). Finally, a 2 × 2 confusion matrix was used to determine the function's predictive accuracy, whereas the cross-validated predictive accuracy was estimated via the "leave-one-out" classification process. Discriminant function analysis (DFA) was used to create a linear function of continuous predictors. DFA produced the following model (Wilks' λ = 0.043, χ2 = 388.588, p < 0.0001, Tables 3 and 4): D (MS vs controls) = 0.728*DQx1 mean monologue + 0.325*CQx monologue + 0.298*DFx1 90% range monologue + 0.443*DQx1 90% range reading - 1.490*DQx1 90% range monologue. The derived discriminant score (S1) was used subsequently to form the coordinates of a ROC curve. Thus, a cutoff score of -0.788 for S1 corresponded to a perfect classification (100% sensitivity and 100% specificity, p = 1.67e-22). Consistent with previous findings, electroglottographic evaluation represents an easy-to-implement and potentially important assessment in MS patients, achieving adequate classification accuracy. Further evaluation is needed to determine its use as a biomarker.
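The reported function and cutoff translate directly into a scoring routine. In the sketch below the coefficient names mirror the electroglottographic measures listed above, the input values are placeholders, and the assumption that scores above the -0.788 cutoff indicate the MS group (rather than below it) is mine, not stated in the abstract.

```python
# Discriminant score S1 from the reported coefficients; classification against
# the reported cutoff of -0.788. Input measurements are placeholders.
COEF = {
    "DQx1_mean_monologue":      0.728,
    "CQx_monologue":            0.325,
    "DFx1_90_range_monologue":  0.298,
    "DQx1_90_range_reading":    0.443,
    "DQx1_90_range_monologue": -1.490,
}

def discriminant_score(measures: dict) -> float:
    """Linear combination of the electroglottographic measures."""
    return sum(COEF[name] * measures[name] for name in COEF)

subject = {name: 1.0 for name in COEF}      # placeholder measurements
s1 = discriminant_score(subject)
label = "MS-like" if s1 > -0.788 else "control-like"   # assumed direction
print(label, round(s1, 3))
```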
Benchmark Dose for Urinary Cadmium based on a Marker of Renal Dysfunction: A Meta-Analysis
Woo, Hae Dong; Chiu, Weihsueh A.; Jo, Seongil; Kim, Jeongseon
2015-01-01
Background Low doses of cadmium can cause adverse health effects. Benchmark dose (BMD) and the one-sided 95% lower confidence limit of the BMD (BMDL) to derive points of departure for urinary cadmium exposure have been estimated in several previous studies, but the methods to derive the BMD and the estimated BMDs differ. Objectives We aimed to find the factors that affect BMD calculation in the general population, and to estimate the summary BMD for urinary cadmium using reported BMDs. Methods A meta-regression was performed and the pooled BMD/BMDL was estimated using studies reporting a BMD and BMDL, weighted by sample size, that were calculated from individual data based on markers of renal dysfunction. Results BMDs were highly heterogeneous across studies. Meta-regression analysis showed that a significant predictor of the BMD was the cut-off point denoting an abnormal level. Using the 95th percentile as a cut-off, BMD5/BMDL5 estimates for a 5% benchmark response (BMR) of β2-microglobulinuria (β2-MG) were 6.18/4.88 μg/g creatinine in conventional quantal analysis and 3.56/3.13 μg/g creatinine in the hybrid approach, and BMD5/BMDL5 estimates for a 5% BMR of N-acetyl-β-d-glucosaminidase (NAG) were 10.31/7.61 μg/g creatinine in quantal analysis and 3.21/2.24 μg/g creatinine in the hybrid approach. However, the meta-regression showed that the BMD and BMDL were significantly associated with the cut-off point, while the BMD calculation method did not significantly affect the results. The urinary cadmium BMDL5 of β2-MG was 1.9 μg/g creatinine in the lowest cut-off point group. Conclusion The BMD was significantly associated with the cut-off point defining the abnormal level of renal dysfunction markers. PMID:25970611
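As a simplified stand-in for the sample-size-weighted pooling described in the Methods, the sketch below computes a weighted mean of reported BMDL5 values; the study values and sample sizes are placeholders, not the data analysed in the paper.

```python
import numpy as np

# Sample-size-weighted pooling of reported urinary-Cd BMDL5 values.
# Study values and sample sizes below are placeholders for illustration only.
bmdl = np.array([4.9, 3.1, 7.6, 2.2, 1.9])     # reported BMDL5 (ug/g creatinine)
n    = np.array([820, 1530, 410, 990, 2750])   # study sample sizes (weights)

pooled = np.sum(n * bmdl) / np.sum(n)
print(f"pooled BMDL5 ~ {pooled:.2f} ug/g creatinine")
```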
NASA Astrophysics Data System (ADS)
Tangdamrongsub, Natthachet; Han, Shin-Chan; Decker, Mark; Yeo, In-Young; Kim, Hyungjun
2018-03-01
An accurate estimation of soil moisture and groundwater is essential for monitoring the availability of water supply in domestic and agricultural sectors. In order to improve water storage estimates, previous studies assimilated terrestrial water storage variation (ΔTWS) derived from the Gravity Recovery and Climate Experiment (GRACE) into land surface models (LSMs). However, the GRACE-derived ΔTWS was generally computed from high-level products (e.g. the time-variable gravity fields, i.e. level 2, and the land grid from the level 3 product). The gridded data products are subject to several drawbacks, such as signal attenuation and/or distortion caused by a posteriori filters and a lack of error covariance information. The post-processing of GRACE data might lead to undesired alteration of the signal and its statistical properties. This study uses the GRACE least-squares normal equation data to exploit the GRACE information rigorously and negate these limitations. Our approach combines GRACE's least-squares normal equations (obtained from the ITSG-Grace2016 product) with the results from the Community Atmosphere Biosphere Land Exchange (CABLE) model to improve soil moisture and groundwater estimates. This study demonstrates, for the first time, the importance of using the GRACE raw data. The GRACE-combined (GC) approach is developed for an optimal least-squares combination and is applied to estimate soil moisture and groundwater over 10 Australian river basins. The results are validated against satellite soil moisture observations and in situ groundwater data. Compared to CABLE, the GC approach delivers a clear improvement in the water storage estimates, consistently across all basins, yielding better agreement on seasonal and inter-annual timescales. Significant improvement is found in groundwater storage, while marginal improvement is observed in surface soil moisture estimates.
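At its core, combining a normal-equation system from GRACE with model information is the textbook least-squares merge x̂ = (N₁ + N₂)⁻¹(b₁ + b₂). The sketch below shows that step with random placeholder matrices; the relative weighting of the two systems (e.g. by variance components) is deliberately omitted.

```python
import numpy as np

# Combine two least-squares normal-equation systems (e.g. "GRACE" and a prior
# built from the land-surface model) into one solution. Matrices are random
# placeholders with the right structure, not real GRACE or CABLE quantities.
rng = np.random.default_rng(1)
m = 20                                                    # water-storage parameters
A1 = rng.normal(size=(50, m)); y1 = rng.normal(size=50)   # "GRACE" observation equations
A2 = np.eye(m);                y2 = rng.normal(size=m)    # "model" pseudo-observations

N1, b1 = A1.T @ A1, A1.T @ y1     # normal matrix and right-hand side, system 1
N2, b2 = A2.T @ A2, A2.T @ y2     # normal matrix and right-hand side, system 2

x_combined = np.linalg.solve(N1 + N2, b1 + b2)            # merged estimate
```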
Sieradzka, Dominika; Power, Robert A; Freeman, Daniel; Cardno, Alastair G; Dudbridge, Frank; Ronald, Angelica
2015-09-01
Occurrence of psychotic experiences is common amongst adolescents in the general population. Twin studies suggest that a third to a half of variance in adolescent psychotic experiences is explained by genetic influences. Here we test the extent to which common genetic variants account for some of the twin-based heritability. Psychotic experiences were assessed with the Specific Psychotic Experiences Questionnaire in a community sample of 2152 16-year-olds. Self-reported measures of Paranoia, Hallucinations, Cognitive Disorganization, Grandiosity, Anhedonia, and Parent-rated Negative Symptoms were obtained. Estimates of SNP heritability were derived and compared to the twin heritability estimates from the same sample. Three approaches to genome-wide restricted maximum likelihood (GREML) analyses were compared: (1) standard GREML performed on full genome-wide data; (2) GREML stratified by minor allele frequency (MAF); and (3) GREML performed on pruned data. The standard GREML revealed a significant SNP heritability of 20 % for Anhedonia (SE = 0.12; p < 0.046) and an estimate of 19 % for Cognitive Disorganization, which was close to significant (SE = 0.13; p < 0.059). Grandiosity and Paranoia showed modest SNP heritability estimates (17 %; SE = 0.13 and 14 %; SE = 0.13, respectively, both n.s.), and zero estimates were found for Hallucinations and Negative Symptoms. The estimates for Anhedonia, Cognitive Disorganization and Grandiosity accounted for approximately half the previously reported twin heritability. SNP heritability estimates from the MAF-stratified approach were mostly consistent with the standard estimates and offered additional information about the distribution of heritability across the MAF range of the SNPs. In contrast, the estimates derived from the pruned data were for the most part not consistent with the other two approaches. It is likely that the difference seen in the pruned estimates was driven by the loss of tagged causal variants, an issue fundamental to this approach. The current results suggest that common genetic variants play a role in the etiology of some adolescent psychotic experiences; however, further research on larger samples is needed, and the use of the MAF-stratified approach is recommended.
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
NASA Astrophysics Data System (ADS)
Xiao, Lu; Lang, Yichao; Christakos, George
2018-01-01
With rapid economic development, industrialization and urbanization, ambient PM2.5 has become a major air pollutant linked to respiratory, heart and lung diseases. In China, PM2.5 pollution constitutes an extreme environmental and social problem of widespread public concern. In this work we estimate ground-level PM2.5 from satellite-derived aerosol optical depth (AOD), topography data, meteorological data, and pollutant emissions using an integrative technique. In particular, Geographically Weighted Regression (GWR) analysis was combined with Bayesian Maximum Entropy (BME) theory to assess the spatiotemporal characteristics of PM2.5 exposure in a large region of China and generate informative PM2.5 space-time predictions (estimates). It was found that, due to its integrative character, the combined BME-GWR method offers certain improvements in the space-time prediction of PM2.5 concentrations over China compared to previous techniques. The combined BME-GWR technique generated realistic maps of the space-time PM2.5 distribution, and its performance was superior to that of seven previous studies of satellite-derived PM2.5 concentrations in China in terms of prediction accuracy. The purely spatial GWR model can only be used at a fixed time, whereas the integrative BME-GWR approach accounts for cross space-time dependencies and can predict PM2.5 concentrations in the composite space-time domain. The 10-fold cross-validation results of the BME-GWR modeling (R² = 0.883, RMSE = 11.39 μg/m³) demonstrated a high level of space-time PM2.5 prediction (estimation) accuracy over China, revealing a definite trend of severe PM2.5 levels from the northern coast toward inland China (Nov 2015-Feb 2016). Future work should focus on adding higher-resolution AOD data and related air pollutants, and on developing better satellite-based models for space-time PM2.5 prediction.
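The GWR half of the approach fits a separate weighted regression at each target location, with weights decaying with distance. A minimal single-location fit, with random placeholder data standing in for AOD and the meteorological predictors and an assumed Gaussian kernel:

```python
import numpy as np

# One geographically weighted regression (GWR) fit at a target location s0:
# beta(s0) = (X' W X)^-1 X' W y, with Gaussian distance weights W.
# Data, bandwidth and units are placeholders, not the study's inputs.
rng = np.random.default_rng(0)
n, p = 500, 4
coords = rng.uniform(0, 100, size=(n, 2))                     # station locations (km)
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])    # intercept + predictors
y = rng.normal(size=n)                                        # PM2.5 (placeholder)

def gwr_fit(s0, bandwidth=15.0):
    d2 = np.sum((coords - s0) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))    # Gaussian kernel weights
    XtW = X.T * w                               # weight each observation
    return np.linalg.solve(XtW @ X, XtW @ y)    # local coefficients at s0

beta_local = gwr_fit(np.array([50.0, 50.0]))
```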
NASA Technical Reports Server (NTRS)
Cannon, I.; Balcer, S.; Cochran, M.; Klop, J.; Peterson, S.
1991-01-01
An Integrated Control and Health Monitoring (ICHM) system was conceived for use on a 20 Klb thrust baseline Orbit Transfer Vehicle (OTV) engine. Considered for space use, the ICHM was defined to meet reusability requirements for an OTV engine service-free life of 20 missions, with 100 starts and a total engine operational time of 4 hours. Functions were derived by flowing down requirements from NASA guidelines, previous OTV engine or ICHM documents, and related contracts. The elements of an ICHM were identified and listed, and these elements were described in sufficient detail to allow estimation of their technology readiness levels. These elements were assessed in terms of technology readiness level, and supporting rationale for these assessments was presented. The remaining cost for development of a minimal ICHM system to technology readiness level 6 was estimated. The estimates are within an accuracy range of plus or minus 20 percent. The cost estimates cover what is needed to prepare an ICHM system for use on a focussed testbed for an expander cycle engine, excluding support to the actual test firings.
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.
Fawsitt, Christopher G.; Bourke, Jane; Greene, Richard A.; Everard, Claire M.; Murphy, Aileen; Lutomski, Jennifer E.
2013-01-01
Background Elective repeat caesarean delivery (ERCD) rates have been increasing worldwide, thus prompting obstetric discourse on the risks and benefits for the mother and infant. Yet, these increasing rates also have major economic implications for the health care system. Given the dearth of information on the cost-effectiveness related to mode of delivery, the aim of this paper was to perform an economic evaluation of the costs and short-term maternal health consequences associated with a trial of labour after one previous caesarean delivery compared with ERCD for low-risk women in Ireland. Methods Using a decision analytic model, a cost-effectiveness analysis (CEA) was performed where the measure of health gain was quality-adjusted life years (QALYs) over a six-week time horizon. A review of international literature was conducted to derive representative estimates of adverse maternal health outcomes following a trial of labour after caesarean (TOLAC) and ERCD. Delivery/procedure costs derived from primary data collection and combined both "bottom-up" and "top-down" costing estimations. Results Maternal morbidities emerged in twice as many cases in the TOLAC group than in the ERCD group. However, a TOLAC was found to be the most cost-effective method of delivery because it was substantially less expensive than ERCD (€1,835.06 versus €4,039.87 per woman, respectively), and QALYs were modestly higher (0.84 versus 0.70). Our findings were supported by probabilistic sensitivity analysis. Conclusions Clinicians need to be well informed of the benefits and risks of TOLAC among low-risk women. Ideally, clinician-patient discourse would address differences in length of hospital stay and postpartum recovery time. While it is premature to advocate a policy of TOLAC across maternity units, the results of the study prompt further analysis and repeat iterations, encouraging future studies to synthesize previous research and new and relevant evidence under a single comprehensive decision model. PMID:23484038
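With the point estimates quoted above, the comparison reduces to a dominance check: TOLAC costs less and yields more QALYs, so no incremental cost-effectiveness ratio (ICER) is needed. A few lines make the logic explicit:

```python
# Cost-effectiveness comparison using the point estimates reported above.
cost_tolac, cost_ercd = 1835.06, 4039.87      # euro per woman
qaly_tolac, qaly_ercd = 0.84, 0.70            # QALYs over the six-week horizon

d_cost = cost_tolac - cost_ercd               # incremental cost of TOLAC
d_qaly = qaly_tolac - qaly_ercd               # incremental QALYs of TOLAC
if d_cost < 0 and d_qaly > 0:
    print("TOLAC dominates ERCD (cheaper and more effective)")
else:
    print(f"ICER = {d_cost / d_qaly:.0f} euro per QALY gained")
```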
Estimating irrigation water demand in the Moroccan Drâa Valley using contingent valuation.
Storm, Hugo; Heckelei, Thomas; Heidecke, Claudia
2011-10-01
Irrigation water management is crucial for agricultural production and livelihood security in Morocco, as in many other parts of the world. For the implementation of effective water management, knowledge about farmers' demand for irrigation water is crucial to assess reactions to water pricing policy, to establish a cost-benefit analysis of water supply investments, or to determine the optimal water allocation between different users. Previously used econometric methods providing this information often have prohibitive data requirements. In this paper, the Contingent Valuation Method (CVM) is adjusted to derive a demand function for irrigation water from farmers' willingness to pay for one additional unit of surface water or groundwater. An application in the Middle Drâa Valley in Morocco shows that the method provides reasonable results in an environment with limited data availability. For analysing the censored survey data, the Least Absolute Deviation estimator was found to be a more suitable alternative to the Tobit model, as errors are heteroscedastic and non-normally distributed. The adjusted CVM to derive demand functions is especially attractive for water-scarce countries with limited data availability. Copyright © 2011 Elsevier Ltd. All rights reserved.
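Least Absolute Deviation (median) regression minimizes the sum of absolute residuals instead of squared ones, which is what makes it robust to the heteroscedastic, non-normal errors mentioned above. A sketch on simulated, heavy-tailed data (not the survey data):

```python
import numpy as np
from scipy.optimize import minimize

# LAD regression of stated willingness to pay on farm characteristics,
# illustrated with simulated heavy-tailed errors.
rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 covariates
beta_true = np.array([3.0, 1.5, -0.8])
y = X @ beta_true + rng.standard_t(df=3, size=n)             # heavy-tailed noise

# LAD: minimize sum of absolute residuals; OLS shown for comparison.
lad = minimize(lambda b: np.sum(np.abs(y - X @ b)), x0=np.zeros(3), method="Nelder-Mead")
ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("LAD:", np.round(lad.x, 2), " OLS:", np.round(ols, 2))
```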
Estimating moisture transport over oceans using space-based observations
NASA Technical Reports Server (NTRS)
Liu, W. Timothy; Tang, Wenqing
2005-01-01
The moisture transport integrated over the depth of the atmosphere (Θ) is estimated over oceans using satellite data. The transport is the product of the precipitable water and an equivalent velocity (ue), which, by definition, is the depth-averaged wind velocity weighted by humidity. An artificial neural network is employed to construct a relation between the surface wind velocity measured by the spaceborne scatterometer and coincident ue derived using humidity and wind profiles measured by rawinsondes and produced by reanalysis of operational numerical weather prediction (NWP). On the basis of this relation, Θ fields are produced over global tropical and subtropical oceans (40°N-40°S) at 0.25° latitude-longitude and twice-daily resolution from August 1999 to December 2003, using surface wind vectors from QuikSCAT and precipitable water from the Tropical Rainfall Measuring Mission. The derived ue were found to capture the major temporal variability when compared with radiosonde measurements. The average error over global oceans, when compared with NWP data, was comparable with the instrument accuracy specification of space-based scatterometers. The global distribution exhibits the known characteristics of, and reveals more detailed variability than in, previous data.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2014-01-01
The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.
Rate constants for the reactions of OH with CH3Cl, CH2Cl2, CHCl3, and CH3Br
NASA Technical Reports Server (NTRS)
Hsu, K.-J.; Demore, W. B.
1994-01-01
Rate constants for the reactions of OH with CH3Cl, CH2Cl2, CHCl3, and CH3Br have been measured by a relative rate technique in which the reaction rate of each compound was compared to that of HFC-152a (CH3CHF2) and (for CH2Cl2) HFC-161 (CH3CH2F). Using absolute rate constants for HFC-152a and HFC-161, which we have determined relative to those for CH4, CH3CCl3, and C2H6, temperature dependent rate constants of both compounds were derived. The derived rate constant for CH3Br is in good agreement with recent absolute measurements. However, for the chloromethanes all the rate constants are lower at atmospheric temperatures than previously reported, especially for CH2Cl2 where the present rate constant is about a factor of 1.6 below the JPL 92-20 value. The new rate constant appears to resolve a discrepancy between the observed atmospheric concentrations and those calculated from the previous rate constant and estimated release rates.
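In the relative rate technique, simultaneous loss of the target compound X and the reference to OH gives ln([X]₀/[X]_t) = (k_X/k_ref)·ln([ref]₀/[ref]_t), so k_X follows from the slope of that plot and the known reference rate constant. A sketch with synthetic decay data and an illustrative k_ref:

```python
import numpy as np

# Relative rate technique: slope of ln([X]0/[X]t) vs ln([ref]0/[ref]t) equals
# kX/kref. Concentration-decay data and kref below are synthetic/illustrative.
ln_ref = np.array([0.00, 0.05, 0.11, 0.16, 0.22, 0.27])   # ln([ref]0/[ref]t)
ln_x   = 0.82 * ln_ref + np.random.default_rng(3).normal(0, 0.003, ln_ref.size)

slope = np.polyfit(ln_ref, ln_x, 1)[0]      # = kX / kref
k_ref = 3.0e-14                             # cm3 molecule-1 s-1, illustrative only
k_x = slope * k_ref
print(f"kX ~ {k_x:.2e} cm3 molecule-1 s-1")
```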
NASA Astrophysics Data System (ADS)
Varela, Augusto N.; Raigemborn, M. Sol; Richiano, Sebastián; White, Tim; Poiré, Daniel G.; Lizzoli, Sabrina
2018-01-01
Although there is general consensus that a global greenhouse climate characterized the mid-Cretaceous, details of the climate state of the mid-Cretaceous Southern Hemisphere are less clearly understood. In particular, continental paleoclimate reconstructions are scarce and exclusively derived from paleontological records. Using paleosol-derived climofunction studies of the mid- to Upper Cretaceous Mata Amarilla Formation, southern Patagonia, Argentina, we present a reconstruction of the mid-Cretaceous climate of southern South America. Our results indicate that at 60° south paleolatitude during the Cenomanian-Santonian stages, the climate was subtropical temperate-warm (12 °C ± 2.1 °C) and humid (1404 ± 108 mm/yr) with marked rainfall seasonality. These results are consistent with both previous estimations from the fossil floras of the Mata Amarilla Formation and other units of the Southern Hemisphere, and with the previous observations of the displacement of tropical and subtropical floras towards the poles in both hemispheres. The data presented here show a more marked seasonality and slightly lower mean annual precipitation and mean annual temperature values than those recorded at the same paleolatitudes in the Northern Hemisphere.
NASA Astrophysics Data System (ADS)
Grecu, M.; Tian, L.; Heymsfield, G. M.
2017-12-01
A major challenge in deriving accurate estimates of the physical properties of falling snow particles from single-frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple-frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple-frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple-frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple-frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple-frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that the EM methodology formulated in this study yields snowfall estimates above the freezing level in ETCs that are consistent with the triple-frequency radar observations as well as with independent rainfall estimates below the freezing level.
Analysis and modeling of a hail event consequences on a building portfolio
NASA Astrophysics Data System (ADS)
Nicolet, Pierrick; Voumard, Jérémie; Choffet, Marc; Demierre, Jonathan; Imhof, Markus; Jaboyedoff, Michel
2014-05-01
North-West Switzerland was affected by a severe hailstorm in July 2011, which was especially intense in the Canton of Aargau. The damage cost of this event is around EUR 105 million for the Canton of Aargau alone, which corresponds to half of the mean annual consolidated damage cost of the last 20 years for the 19 Cantons (out of 26) with a public insurance. The aim of this project is to benefit from the collected insurance data to better understand and estimate the risk of such an event. In a first step, a simple hail event simulator, which had been developed for a previous hail episode, is modified. The geometric properties of the storm are derived from the maximum intensity radar image by means of a set of 2D Gaussians instead of using 1D Gaussians on profiles, as was the case in the previous version. The tool is then tested on this new event in order to establish its ability to give a fast damage estimation based on the radar image and the buildings' value and location. The geometrical properties are used in a further step to generate random outcomes with similar characteristics, which are combined with a vulnerability curve and an event frequency to estimate the risk. The vulnerability curve comes from a 2009 event and is improved with data from this event, whereas the frequency for the Canton is estimated from insurance records. In addition to this regional risk analysis, this contribution aims at studying the relation between building orientation and damage rate. Indeed, it is expected that the orientation of the roof influences the aging of the material by controlling the frequency and amplitude of thaw-freeze cycles, thus changing the vulnerability over time. This part is established by calculating the hours of sunshine, which are used to derive the material temperatures. This information is then compared with insurance claims. A last part proposes a model to study the hail impact on a building, by modeling the different equipment on each facade of the building, such as the number of windows or the material type. The goal of this part, which is more prospective, is to have a model that would allow a quick estimate of the risk for a given building according to its physical characteristics and to the local wind conditions during a hail event.
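Recovering the storm's geometric properties from the maximum-intensity radar image amounts to fitting elliptical 2D Gaussians to that image. The sketch below fits a single rotated Gaussian to a synthetic image with scipy; the real event would need a sum of several Gaussians and the actual radar field.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit one elliptical 2D Gaussian (centre, axes, orientation, amplitude) to a
# synthetic "maximum intensity" image standing in for the radar data.
def gauss2d(xy, amp, x0, y0, sx, sy, theta):
    x, y = xy
    a =  np.cos(theta)**2 / (2*sx**2) + np.sin(theta)**2 / (2*sy**2)
    b = -np.sin(2*theta)  / (4*sx**2) + np.sin(2*theta)  / (4*sy**2)
    c =  np.sin(theta)**2 / (2*sx**2) + np.cos(theta)**2 / (2*sy**2)
    return amp * np.exp(-(a*(x-x0)**2 + 2*b*(x-x0)*(y-y0) + c*(y-y0)**2))

x, y = np.meshgrid(np.arange(100.0), np.arange(80.0))
truth = (55.0, 60.0, 35.0, 12.0, 5.0, 0.4)                   # synthetic storm
image = gauss2d((x, y), *truth) + np.random.default_rng(4).normal(0, 1.0, x.shape)

popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), image.ravel(),
                    p0=(40.0, 50.0, 40.0, 10.0, 10.0, 0.0), maxfev=5000)
print("fitted amp, x0, y0, sx, sy, theta:", np.round(popt, 2))
```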
Osmium isotope and highly siderophile element systematics of the lunar crust
NASA Astrophysics Data System (ADS)
Day, James M. D.; Walker, Richard J.; James, Odette B.; Puchtel, Igor S.
2010-01-01
Coupled 187Os/188Os and highly siderophile element (HSE: Os, Ir, Ru, Pt, Pd, and Re) abundance data are reported for pristine lunar crustal rocks 60025, 62255, 65315 (ferroan anorthosites, FAN) and 76535, 78235, 77215 and a norite clast in 15455 (magnesian-suite rocks, MGS). Osmium isotopes permit more refined discrimination than previously possible of samples that have been contaminated by meteoritic additions and the new results show that some rocks, previously identified as pristine, contain meteorite-derived HSE. Low HSE abundances in FAN and MGS rocks are consistent with derivation from a strongly HSE-depleted lunar mantle. At the time of formation, the lunar floatation crust, represented by FAN, had 1.4 ± 0.3 pg g⁻¹ Os, 1.5 ± 0.6 pg g⁻¹ Ir, 6.8 ± 2.7 pg g⁻¹ Ru, 16 ± 15 pg g⁻¹ Pt, 33 ± 30 pg g⁻¹ Pd and 0.29 ± 0.10 pg g⁻¹ Re (~0.00002 × CI) and Re/Os ratios that were modestly elevated (187Re/188Os = 0.6 to 1.7) relative to CI chondrites. MGS samples are, on average, characterised by more elevated HSE abundances (~0.00007 × CI) compared with FAN. This either reflects contrasting mantle-source HSE characteristics of FAN and MGS rocks, or different mantle-crust HSE fractionation behaviour during production of these lithologies. Previous studies of lunar impact-melt rocks have identified possible elevated Ru and Pd in lunar crustal target rocks. The new results provide no supporting evidence for such enrichments. If maximum estimates for HSE in the lunar mantle are compared with FAN and MGS averages, crust-mantle concentration ratios (D-values) must be ≤ 0.3. Such D-values are broadly similar to those estimated for partitioning between the terrestrial crust and upper mantle, with the notable exception of Re. Given the presumably completely different mode of origin for the primary lunar floatation crust and tertiary terrestrial continental crust, the potential similarities in crust-mantle HSE partitioning for the Earth and Moon are somewhat surprising. Low HSE abundances in the lunar crust, coupled with estimates of HSE concentrations in the lunar mantle, imply there may be a 'missing component' of late-accreted materials (as much as 95%) to the Moon if the Earth/Moon mass-flux estimates are correct and terrestrial mantle HSE abundances were established by late accretion.
Optimal causal inference: estimating stored information and approximating causal architecture.
Still, Susanne; Crutchfield, James P; Ellison, Christopher J
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
NASA Astrophysics Data System (ADS)
McAlpin, D. B.; Meyer, F. J.; Webley, P. W.
2017-12-01
Using thermal data from Advanced Very High Resolution Radiometer (AVHRR) sensors, we investigated algorithms to estimate the effusive volume of lava flows from the 2012-13 eruption of Tolbachik Volcano with high temporal resolution. AVHRR are polar-orbiting, radiation detection instruments that provide reflectance and radiance data in six spectral bands with a ground resolution of 1.1 km. During the Tolbachik eruption of 2012-13, active AVHRR instruments were available aboard four polar-orbiting platforms. Although the primary purpose of the instruments is climate and ocean studies, their multiple platforms provide global coverage at least twice daily, with data for all regions of the earth no older than six hours. This frequency makes the AVHRR instruments particularly suitable for the study of volcanic activity. While methods for deriving effusion rates from thermal observations have been previously published, a number of issues complicate their practical application. In particular, these include (1) unknown material parameters used in the estimation process; (2) the relatively coarse resolution of thermal sensors; (3) optimizing a model to describe the number of thermal regimes within each pixel; and (4) frequent saturation issues in thermal channels. We present ongoing investigations into effusion rate estimation from AVHRR data using the 2012-13 eruption of Tolbachik Volcano as a test event. For this eruption we studied approaches for coping with issues (1)-(4) to pave the way to a more operational implementation of published techniques. To address (1), we used Monte Carlo simulations to understand the sensitivity of effusion rate estimates to changes in material parameters. To study (2) and (3), we compared typical two-component (exposed lava on ambient background) and three-component models (exposed lava, cooled crust, ambient background) for their relative performance. To study issue (4), we compared AVHRR-derived effusion rates to reference data derived from multi-temporal digital elevation models. In our workflow, we correct for the scan angle of the sensor and the transmissivity of the atmosphere before including the corrected temperatures in heat equations to determine the effusion volume necessary to satisfy the equations.
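A two-component pixel model of the kind compared above treats the observed radiance as a mixture of Planck radiance from exposed lava and from the ambient background, so the lava area fraction follows by inverting the mixture. The temperatures, wavelength and fractions in the sketch are illustrative assumptions, not values used in the study.

```python
import numpy as np

# Two-component pixel model: observed radiance = p*B(T_hot) + (1-p)*B(T_bg),
# solved for the lava area fraction p. Single-wavelength Planck radiance.
H, C, K = 6.626e-34, 2.998e8, 1.381e-23      # Planck, light speed, Boltzmann (SI)

def planck(T, wav=11e-6):
    """Spectral radiance (W m-2 sr-1 m-1) at wavelength wav for temperature T."""
    return (2 * H * C**2 / wav**5) / np.expm1(H * C / (wav * K * T))

T_hot, T_bg = 1100.0, 280.0                            # K, illustrative
R_pixel = 0.02 * planck(T_hot) + 0.98 * planck(T_bg)   # simulated mixed pixel

p = (R_pixel - planck(T_bg)) / (planck(T_hot) - planck(T_bg))   # recovers 0.02
area_lava = p * (1.1e3) ** 2                # active area within one ~1.1 km pixel (m2)
print(f"lava fraction {p:.3f}, active area {area_lava:.0f} m2")
```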
NASA Technical Reports Server (NTRS)
Stauffer, John R.; Schild, Rudolph A.; Baliunas, Sallie L.; Africano, John L.
1987-01-01
Light curves and period estimates were obtained for several Pleiades and Alpha Persei cluster K dwarfs which were identified as rapid rotators in earlier spectroscopic studies. A few of the stars have previously published light curves, making it possible to study the long-term variability of the light-curve shapes. The general cause of the photometric variability observed for these stars is an asymmetric distribution of photospheric inhomogeneities (starspots). The presence of these inhomogeneities, combined with the rotation of the star, leads to the light curves observed. The photometric periods derived are thus identified with the rotation period of the star, making it possible to estimate equatorial rotational velocities for these K dwarfs. These data are of particular importance because the clusters are sufficiently young that stars of this mass should have just arrived on the main sequence. These data could be used to estimate the temperatures and sizes of the spot groups necessary to produce the observed light curves for these stars.
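Turning a photometric (spot) period into an equatorial rotation velocity is just v_eq = 2πR/P, which requires an assumed stellar radius. A minimal sketch, with a K-dwarf radius chosen purely for illustration:

```python
import numpy as np

# Equatorial rotation velocity from the photometric period: v_eq = 2*pi*R / P.
# With R in solar radii and P in days this is ~50.6 * (R/Rsun) / (P/day) km/s.
R_SUN_KM = 6.957e5
DAY_S = 86400.0

def v_equatorial(radius_rsun, period_days):
    return 2.0 * np.pi * radius_rsun * R_SUN_KM / (period_days * DAY_S)   # km/s

# Example: an assumed 0.75 Rsun K dwarf with a 0.6-day photometric period.
print(f"{v_equatorial(0.75, 0.6):.0f} km/s")
```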
Vertical variation of ice particle size in convective cloud tops.
van Diedenhoven, Bastiaan; Fridlind, Ann M; Cairns, Brian; Ackerman, Andrew S; Yorks, John E
2016-05-16
A novel technique is used to estimate derivatives of ice effective radius with respect to height near convective cloud tops (dr_e/dz) from airborne shortwave reflectance measurements and lidar. Values of dr_e/dz are about -6 μm/km for cloud tops below the homogeneous freezing level, increasing to near 0 μm/km above the estimated level of neutral buoyancy. Retrieved dr_e/dz compares well with previously documented remote sensing and in situ estimates. Effective radii decrease with increasing cloud top height, while cloud top extinction increases. This is consistent with weaker size sorting in high, dense cloud tops above the level of neutral buoyancy where fewer large particles are present, and with stronger size sorting in lower cloud tops that are less dense. The results also confirm that cloud-top trends of effective radius can generally be used as surrogates for trends with height within convective cloud tops. These results provide valuable observational targets for model evaluation.
CMB internal delensing with general optimal estimator for higher-order correlations
Namikawa, Toshiya
2017-05-24
We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). The B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., the inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, the CMB internal delensing suffers from substantial biases due to correlations between observed CMB maps to be delensed and that used for reconstructing a lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.
Filling the white space on maps of European runoff trends: estimates from a multi-model ensemble
NASA Astrophysics Data System (ADS)
Stahl, K.; Tallaksen, L. M.; Hannaford, J.; van Lanen, H. A. J.
2012-02-01
An overall appraisal of runoff changes at the European scale has been hindered by "white space" on maps of observed trends due to a paucity of readily-available streamflow data. This study tested whether this white space can be filled using estimates of trends derived from model simulations of European runoff. The simulations stem from an ensemble of eight global hydrological models that were forced with the same climate input for the period 1963-2000. A validation of the derived trends for 293 grid cells across the European domain with observation-based trend estimates allowed an assessment of the uncertainty of the modelled trends. The models agreed on the predominant continental scale patterns of trends, but disagreed on magnitudes and even on trend directions at the transition between regions with increasing and decreasing runoff trends, in complex terrain with a high spatial variability, and in snow-dominated regimes. Model estimates appeared most reliable in reproducing trends in annual runoff, winter runoff, and 7-day high flow. Modelled trends in runoff during the summer months, spring (for snow influenced regions) and autumn, and trends in summer low flow, were more variable and should be viewed with caution due to higher uncertainty. The ensemble mean overall provided the best representation of the trends in the observations. Maps of trends in annual runoff based on the ensemble mean demonstrated a pronounced continental dipole pattern of positive trends in western and northern Europe and negative trends in southern and parts of Eastern Europe, which has not previously been demonstrated and discussed in comparable detail.
Zayed, Amro; Whitfield, Charles W.
2008-01-01
Apis mellifera originated in Africa and extended its range into Eurasia in two or more ancient expansions. In 1956, honey bees of African origin were introduced into South America, their descendents admixing with previously introduced European bees, giving rise to the highly invasive and economically devastating “Africanized” honey bee. Here we ask whether the honey bee's out-of-Africa expansions, both ancient and recent (invasive), were associated with a genome-wide signature of positive selection, detected by contrasting genetic differentiation estimates (FST) between coding and noncoding SNPs. In native populations, SNPs in protein-coding regions had significantly higher FST estimates than those in noncoding regions, indicating adaptive evolution in the genome driven by positive selection. This signal of selection was associated with the expansion of honey bees from Africa into Western and Northern Europe, perhaps reflecting adaptation to temperate environments. We estimate that positive selection acted on a minimum of 852–1,371 genes or ≈10% of the bee's coding genome. We also detected positive selection associated with the invasion of African-derived honey bees in the New World. We found that introgression of European-derived alleles into Africanized bees was significantly greater for coding than noncoding regions. Our findings demonstrate that Africanized bees exploited the genetic diversity present from preexisting introductions in an adaptive way. Finally, we found a significant negative correlation between FST estimates and the local GC content surrounding coding SNPs, suggesting that AT-rich genes play an important role in adaptive evolution in the honey bee. PMID:18299560
Final STS-35 Columbia descent BET products and results for LaRC OEX investigations
NASA Technical Reports Server (NTRS)
Oakes, Kevin F.; Findlay, John T.; Jasinski, Rachel A.; Wood, James S.
1991-01-01
Final STS-35 'Columbia' descent Best Estimate Trajectory (BET) products have been developed for Langley Research Center (LaRC) Orbiter Experiments (OEX) investigations. Included are the reconstructed inertial trajectory profile; the Extended BET, which combines the inertial data and, in this instance, the National Weather Service atmospheric information obtained via Johnson Space Center; and the Aerodynamic BET. The inertial BET utilized Inertial Measurement Unit 1 (IMU1) dynamic measurements for deterministic propagation during the ENTREE estimation process. The final estimate was based on the considerable ground based C-band tracking coverage available as well as Tracking Data and Relay Satellite System (TDRSS) Doppler data, a unique use of the latter for endo-atmospheric flight determinations. The actual estimate required simultaneous solutions for the spacecraft position and velocity, spacecraft attitude, and six IMU parameters - three gyro biases and three accelerometer scale factor correction terms. The anchor epoch for this analysis was 19,200 Greenwich Mean Time (GMT) seconds which corresponds to an initial Shuttle altitude of approximately 513 kft. The atmospheric data incorporated were evaluated based on Shuttle derived considerations as well as comparisons with other models. The AEROBET was developed based on the Extended BET, the measured spacecraft configuration information, final mass properties, and the final Orbiter preoperation databook. The latter was updated based on aerodynamic consensus incrementals derived by the latest published FAD. The rectified predictions were compared versus the flight computed values and the resultant differences were correlated versus ensemble results for twenty-two previous STS entry flights.
Cross-seasonal effects on waterfowl productivity: Implications under climate change
Osnas, Erik; Zhao, Qing; Runge, Michael C.; Boomer, G Scott
2016-01-01
Previous efforts to relate winter-ground precipitation to subsequent reproductive success as measured by the ratio of juveniles to adults in the autumn failed to account for increased vulnerability of juvenile ducks to hunting and uncertainty in the estimated age ratio. Neglecting increased juvenile vulnerability will positively bias the mean productivity estimate, and neglecting increased vulnerability and estimation uncertainty will positively bias the year-to-year variance in productivity because raw age ratios are the product of sampling variation, the year-specific vulnerability, and year-specific reproductive success. Therefore, we estimated the effects of cumulative winter precipitation in the California Central Valley and the Mississippi Alluvial Valley on pintail (Anas acuta) and mallard (Anas platyrhynchos) reproduction, respectively, using hierarchical Bayesian methods to correct for sampling bias in productivity estimates and observation error in covariates. We applied the model to a hunter-collected parts survey implemented by the United States Fish and Wildlife Service and band recoveries reported to the United States Geological Survey Bird Banding Laboratory using data from 1961 to 2013. We compared our results to previous estimates that used simple linear regression on uncorrected age ratios from a smaller subset of years in pintail (1961–1985). Like previous analyses, we found large and consistent effects of population size and wetland conditions in prairie Canada on mallard productivity, and large effects of population size and mean latitude of the observed breeding population on pintail productivity. Unlike previous analyses, we report a large amount of uncertainty in the estimated effects of wintering-ground precipitation on pintail and mallard productivity, with considerable uncertainty in the sign of the estimated main effect, although the posterior medians of precipitation effects were consistent with past studies. We found more consistent estimates in the sign of an interaction effect between population size and precipitation, suggesting that wintering-ground precipitation has a larger effect in years of high population size, especially for pintail. When we used the estimated effects in a population model to derive a sustainable harvest and population size projection (i.e., a yield curve), there was considerable uncertainty in the effect of increased or decreased wintering-ground precipitation on sustainable harvest potential and population size. These results suggest that the mechanism of cross-seasonal effects between winter habitat and reproduction in ducks occurs through a reduction in the strength of density dependence in years of above-average wintering-ground precipitation. We suggest additional investigation of the underlying mechanisms and that habitat managers and decision-makers consider the level of uncertainty in these estimates when attempting to integrate habitat management and harvest management decisions. Collection of annual data on the status of wintering-ground habitat in a rigorous sampling framework would likely be the most direct way to improve understanding of mechanisms and inform management.
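The vulnerability correction that motivates the hierarchical model can be illustrated in its simplest form: divide the raw harvest age ratio by the juvenile-to-adult differential vulnerability estimated from band recoveries. The sketch below uses hypothetical counts and omits the Bayesian treatment of sampling and observation error.

```python
# Simplified sketch of the vulnerability-correction idea (not the paper's
# hierarchical Bayesian model); all counts below are hypothetical.
juv_parts, adult_parts = 620, 400          # aged wings in the harvest parts survey
juv_banded, juv_recovered = 2000, 160      # juvenile bandings and direct recoveries
adult_banded, adult_recovered = 3000, 150  # adult bandings and direct recoveries

raw_age_ratio = juv_parts / adult_parts    # juveniles per adult in the harvest
differential_vulnerability = (juv_recovered / juv_banded) / (adult_recovered / adult_banded)
corrected_age_ratio = raw_age_ratio / differential_vulnerability

print(f"raw {raw_age_ratio:.2f}, vulnerability {differential_vulnerability:.2f}, "
      f"corrected productivity index {corrected_age_ratio:.2f}")
```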
Morales, Rafael; Rincón, Fernando; Gazzano, Julio Dondo; López, Juan Carlos
2014-01-01
Time-derivative estimation of signals plays an important role in fields such as signal processing and control engineering. For that purpose, a non-asymptotic algebraic procedure for the approximate estimation of the system states is used in this work. The method is based on results from differential algebra and furnishes general formulae for the time derivatives of a measurable signal, in which two algebraic derivative estimators run simultaneously but in an overlapping fashion. The algebraic derivative algorithm presented in this paper is computed online and in real time, offering high robustness to corrupting noise, versatility, and ease of implementation. In addition, we introduce a novel architecture to accelerate this algebraic derivative estimator using reconfigurable logic. The core of the algorithm is implemented in an FPGA, improving the speed of the system and achieving real-time performance. Finally, this work proposes a low-cost platform for hardware-in-the-loop integration with MATLAB. PMID:24859033
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise from non-ideal instrument and sample properties. To improve quantitative analysis of near-infrared spectra, the derivatives of noisy raw spectral data therefore need to be estimated with high accuracy. A new spectral estimator based on a singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated on beer spectra. The derivative spectra of the beer and marzipan datasets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator achieves better performance than the Savitzky-Golay algorithm and can serve as an alternative for quantitative analytical applications.
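As a point of reference for the Savitzky-Golay baseline mentioned above, the sketch below takes first-derivative spectra and feeds them to a PLS calibration. The NIR grid, spectra, and reference values are synthetic stand-ins, and the SPSE itself is not reproduced.

```python
# Hedged sketch: Savitzky-Golay first-derivative spectra followed by PLS
# calibration (the benchmark workflow, not the SPSE); data are synthetic.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
wavelengths = np.linspace(1100, 2500, 700)       # hypothetical NIR grid (nm)
spectra = rng.random((60, wavelengths.size))     # 60 noisy sample spectra
analyte = rng.random(60)                         # reference concentrations

# First-derivative spectra; window length and polynomial order are tuning choices.
d_spectra = savgol_filter(spectra, window_length=15, polyorder=2,
                          deriv=1, delta=np.diff(wavelengths).mean(), axis=1)

pls = PLSRegression(n_components=5).fit(d_spectra, analyte)
print("calibration R^2:", pls.score(d_spectra, analyte))
```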
Stockman, A; Sharpe, L T; Fach, C
1999-08-01
We used two methods to estimate short-wave (S) cone spectral sensitivity. Firstly, we measured S-cone thresholds centrally and peripherally in five trichromats, and in three blue-cone monochromats, who lack functioning middle-wave (M) and long-wave (L) cones. Secondly, we analyzed standard color-matching data. Both methods yielded equivalent results, on the basis of which we propose new S-cone spectral sensitivity functions. At short and middle wavelengths, our measurements are consistent with the color-matching data of Stiles and Burch (1955, Optica Acta, 2, 168-181; 1959, Optica Acta, 6, 1-26), and other psychophysically measured functions, such as π3 (Stiles, 1953, Coloquio sobre problemas opticos de la vision, 1, 65-103). At longer wavelengths, S-cone sensitivity has previously been over-estimated.
A versatile pitch tracking algorithm: from human speech to killer whale vocalizations.
Shapiro, Ari Daniel; Wang, Chao
2009-07-01
In this article, a pitch tracking algorithm [named discrete logarithmic Fourier transformation-pitch detection algorithm (DLFT-PDA)], originally designed for human telephone speech, was modified for killer whale vocalizations. The multiple frequency components of some of these vocalizations demand a spectral (rather than temporal) approach to pitch tracking. The DLFT-PDA algorithm derives reliable estimations of pitch and the temporal change of pitch from the harmonic structure of the vocal signal. Scores from both estimations are combined in a dynamic programming search to find a smooth pitch track. The algorithm is capable of tracking killer whale calls that contain simultaneous low and high frequency components and compares favorably across most signal to noise ratio ranges to the peak-picking and sidewinder algorithms that have been used for tracking killer whale vocalizations previously.
On piecewise interpolation techniques for estimating solar radiation missing values in Kedah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu
2014-12-04
This paper discusses the use of a piecewise interpolation method based on cubic Ball and Bézier curve representations to estimate missing values of solar radiation in Kedah. An hourly solar radiation dataset is collected at the Alor Setar Meteorology Station, obtained from the Malaysian Meteorology Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first-order derivatives at the start and end of each interval. We compare the performance of our proposed method with existing methods using Root Mean Squared Error (RMSE) and Coefficient of Determination (CoD) computed on simulated missing-value datasets. The results show that our method outperforms the previous methods.
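A minimal sketch of the same idea, prescribing first derivatives at the interval endpoints and scoring against withheld values with RMSE and CoD, is shown below. It uses a cubic Hermite segment as a stand-in for the cubic Ball/Bezier representation, and the hourly radiation values are synthetic.

```python
# Hedged sketch: piecewise cubic interpolation with prescribed endpoint slopes,
# evaluated with RMSE and coefficient of determination against withheld values.
import numpy as np
from scipy.interpolate import CubicHermiteSpline

hours = np.arange(7, 20)                               # hourly measurement times
radiation = np.sin((hours - 7) / 12 * np.pi) * 900     # synthetic solar radiation (W/m^2)
slopes = np.gradient(radiation, hours)                 # first derivatives at the knots

spline = CubicHermiteSpline(hours, radiation, slopes)

missing_t = np.array([9.5, 13.5, 17.5])                # simulated missing observations
truth = np.sin((missing_t - 7) / 12 * np.pi) * 900
estimate = spline(missing_t)

rmse = np.sqrt(np.mean((estimate - truth) ** 2))
cod = 1 - np.sum((estimate - truth) ** 2) / np.sum((truth - truth.mean()) ** 2)
print(f"RMSE = {rmse:.1f} W/m^2, CoD = {cod:.3f}")
```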
Hales, Patrick W; Kirkham, Fenella J; Clark, Christopher A
2016-02-01
Many MRI techniques require prior knowledge of the T1-relaxation time of blood (T1bl). An assumed/fixed value is often used; however, T1bl is sensitive to magnetic field (B0), haematocrit (Hct), and oxygen saturation (Y). We aimed to combine data from previous in vitro measurements into a mathematical model to estimate T1bl as a function of B0, Hct, and Y. The model was shown to predict T1bl from in vivo studies with good accuracy (± 87 ms). This model allows for improved estimation of T1bl between 1.5 and 7.0 T while accounting for variations in Hct and Y, leading to improved accuracy of MRI-derived perfusion measurements. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Striped Face-Collins, Marla
Grassland birds are declining more steadily and rapidly than other North American birds in general. The nesting success of some grassland bird species depends on the amount of nonproductive vegetation (NPV). To estimate NPV, land managers currently use the Robel pole visual obstruction reading method. Researchers with the USDA Agricultural Research Service's (ARS) Northern Great Plains Research Laboratory in Mandan, ND, recently established statistical relationships between photosynthetic vegetation (PV), NPV, and spectral vegetation indices (SVIs) derived from more sensitive and more detailed, but less accessible and more costly, hyperspectral aerial imagery. This study extends that previous work using spectral vegetation indices derived from the Landsat TM sensor, including the simple ratios SWIR-SR (ρ2215/ρ1650) and SR71 (ρ2215/ρ485), to estimate the amount of NPV and bare ground cover, respectively.
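The two ratios named above reduce to simple band arithmetic; a minimal sketch, with hypothetical reflectance arrays, is:

```python
# Minimal sketch of the two band ratios named above, computed from Landsat TM
# surface-reflectance bands; the band arrays are hypothetical placeholders.
import numpy as np

rho_485 = np.random.rand(100, 100)      # blue band (~485 nm)
rho_1650 = np.random.rand(100, 100)     # SWIR1 band (~1650 nm)
rho_2215 = np.random.rand(100, 100)     # SWIR2 band (~2215 nm)

eps = 1e-6                              # guard against division by zero
swir_sr = rho_2215 / (rho_1650 + eps)   # SWIR simple ratio, related to NPV cover
sr71 = rho_2215 / (rho_485 + eps)       # SWIR2/blue ratio, related to bare ground cover
```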
Parents' work patterns and adolescent mental health.
Dockery, Alfred; Li, Jianghong; Kendall, Garth
2009-02-01
Previous research demonstrates that non-standard work schedules undermine the stability of marriage and reduce family cohesiveness. Limited research has investigated the effects of parents working non-standard schedules on children's health and wellbeing and no published Australian studies have addressed this important issue. This paper contributes to bridging this knowledge gap by focusing on adolescents aged 15-20 years and by including sole parent families which have been omitted in previous research, using panel data from the Household, Income and Labour Dynamics in Australia Survey. Multilevel linear regression models are estimated to analyse the association between parental work schedules and hours of work and measures of adolescents' mental health derived from the SF-36 Health Survey. Evidence of negative impacts of parents working non-standard hours upon adolescent wellbeing is found to exist primarily within sole parent families.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.
In previous research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate—the complex reflectance change detection (CRCD) metric to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.
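For reference, the sample coherence magnitude that earlier coherence-based CCD relied on can be sketched as follows; the ML CRCD expression itself is not reproduced here, and the window size and synthetic image chips are arbitrary choices.

```python
# Sketch of the conventional windowed sample coherence magnitude used as a
# change metric in earlier CCD work; f and g are co-registered complex chips.
import numpy as np

def sample_coherence(f, g, win=5):
    """Windowed sample coherence magnitude of two complex images."""
    pad = win // 2
    out = np.zeros(f.shape, dtype=float)
    for i in range(pad, f.shape[0] - pad):
        for j in range(pad, f.shape[1] - pad):
            a = f[i - pad:i + pad + 1, j - pad:j + pad + 1]
            b = g[i - pad:i + pad + 1, j - pad:j + pad + 1]
            num = np.abs(np.sum(a * np.conj(b)))
            den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
            out[i, j] = num / den if den > 0 else 0.0
    return out

rng = np.random.default_rng(2)
f = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
g = 0.9 * f + 0.1 * (rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64)))
coherence_map = sample_coherence(f, g)   # low values flag candidate temporal change
```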
NASA Astrophysics Data System (ADS)
Pusede, S.; Diskin, G. S.
2015-12-01
We use diurnal variability in near-surface N2O vertical profiles to derive N2O emission rates. Our emissions estimates are ~3 times greater than are accounted for by inventories, a discrepancy in line with results from previous studies using different approaches. We quantify the surface N2O concentration's memory of local surface emissions on previous days to be 50-90%. We compare measured profiles both over and away from a dense N2O source region in the San Joaquin Valley, finding that profile shapes, diurnal variability, and changes in integrated near-surface column abundances are distinct according to proximity to source areas. To do this work, we use aircraft observations from the wintertime DISCOVER-AQ project in California's San Joaquin Valley, a region of intense agricultural activity.
Helioseismology: some current issues concerning model calibration
NASA Astrophysics Data System (ADS)
Gough, D. O.
2002-01-01
Aspects of helioseismic model calibration pertinent to asteroseismological inference are reviewed, with a view to establishing the uncertainties associated with some of the properties of the structure of distant stars that can be inferred from the asteroseismic data to be obtained by Eddington. It is shown that the seismic data to be accrued by Eddington will raise our ability to diagnose the structure of stars enormously, even though some previous estimates of the errors in the derived stellar parameters appear likely to have been somewhat optimistic, because the contribution from the imperfect knowledge of the underlying physics was not accounted for.
A generalized modal shock spectra method for spacecraft loads analysis
NASA Technical Reports Server (NTRS)
Trubert, M.; Salama, M.
1979-01-01
Unlike the traditional shock spectra approach, the generalization presented in this paper permits elastic interaction between the spacecraft and launch vehicle in order to obtain accurate bounds on the spacecraft response and structural loads. In addition, the modal response from a previous launch vehicle transient analysis - with or without a dummy spacecraft - is exploited in order to define a modal impulse as a simple idealization of the actual forcing function. The idealized modal forcing function is then used to derive explicit expressions for an estimate of the bound on the spacecraft structural response and forces.
2009-01-01
[Abstract fragment recovered from a garbled scan; the remainder of the text is unreadable] ...eczema vaccinatum and progressive vaccinia in individuals such as eczema sufferers and the immunocompromised, caused concerns about the vaccine and prompted... aerosolized RPXV under conditions as previously described [13]. Briefly, the respiratory function of each rabbit was first measured using whole-body... using respiratory minute volume (Vm) estimates derived from the respiratory function mea[surements]...
Simple expression for the quantum Fisher information matrix
NASA Astrophysics Data System (ADS)
Šafránek, Dominik
2018-04-01
Quantum Fisher information matrix (QFIM) is a cornerstone of modern quantum metrology and quantum information geometry. Apart from optimal estimation, it finds applications in description of quantum speed limits, quantum criticality, quantum phase transitions, coherence, entanglement, and irreversibility. We derive a surprisingly simple formula for this quantity, which, unlike the previously known general expression, does not require diagonalization of the density matrix, and is provably at least as efficient. With a minor modification, this formula can be used to compute QFIM for any finite-dimensional density matrix. Because of its simplicity, it could also shed more light on the quantum information geometry in general.
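For contrast with the diagonalization-free result described above, the following sketch computes the QFIM with the conventional spectral-decomposition formula, which does require diagonalizing the density matrix. It illustrates the standard expression, not the paper's new formula; the single-qubit example at the end is hypothetical.

```python
# Conventional QFIM via spectral decomposition:
#   F_ij = 2 * sum_{k,l: lam_k+lam_l>0} Re(<k|d_i rho|l><l|d_j rho|k>) / (lam_k + lam_l)
# rho is a density matrix and drho is a list of its parameter derivatives.
import numpy as np

def qfim(rho, drho, tol=1e-12):
    lam, vecs = np.linalg.eigh(rho)
    n_par = len(drho)
    F = np.zeros((n_par, n_par))
    for i in range(n_par):
        for j in range(n_par):
            di = vecs.conj().T @ drho[i] @ vecs   # derivative in the eigenbasis
            dj = vecs.conj().T @ drho[j] @ vecs
            for k in range(len(lam)):
                for l in range(len(lam)):
                    denom = lam[k] + lam[l]
                    if denom > tol:
                        F[i, j] += 2 * np.real(di[k, l] * dj[l, k]) / denom
    return F

# Example: qubit state with Bloch vector r = 0.9*(sin t, 0, cos t), parameter t.
theta = 0.3
sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
rho = 0.5 * (np.eye(2) + 0.9 * (np.sin(theta) * sx + np.cos(theta) * sz))
drho = [0.45 * (np.cos(theta) * sx - np.sin(theta) * sz)]
print(qfim(rho, drho))   # ~[[0.81]] for a Bloch vector of length 0.9
```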
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahl, W.K.
1997-03-01
The paper describes a study which attempted to extrapolate meaningful elastic-plastic fracture toughness data from flexure tests of a chemical vapor-infiltrated SiC/Nicalon fiber-reinforced ceramic matrix composite. Fibers in the fabricated composites were pre-coated with pyrolytic carbon to varying thicknesses. In the tests, crack length was not measured, and the study employed an estimation procedure, previously used successfully for ductile metals, to derive J-R curve information. Results are presented as normalized load vs. normalized displacement and as comparative J_Ic behavior as a function of fiber precoating thickness.
A Novel Capacity Analysis for Wireless Backhaul Mesh Networks
NASA Astrophysics Data System (ADS)
Chung, Tein-Yaw; Lee, Kuan-Chun; Lee, Hsiao-Chih
This paper derives a closed-form expression for the inter-flow capacity of a backhaul wireless mesh network (WMN) with centralized scheduling by employing a ring-based approach. Through the definition of an interference area, we are able to accurately describe the bottleneck collision area of a WMN and calculate the upper bound of inter-flow capacity. The closed-form expression shows that the upper bound is a function of the ratio between the transmission range and the network radius. Simulations and numerical analysis show that our analytic solution estimates the inter-flow capacity of WMNs better than the previous approach.
NASA Technical Reports Server (NTRS)
Li, Xiaoyuan; Jeanloz, Raymond
1987-01-01
Electrical conductivity measurements of Perovskite and a Perovskite-dominated assemblage synthesized from pyroxene and olivine demonstrate that these high-pressure phases are insulating to pressures of 82 GPa and temperatures of 4500 K. Assuming an anhydrous upper mantle composition, the result provides an upper bound of 0.01 S/m for the electrical conductivity of the lower mantle between depths of 700 and 1900 km. This is 2 to 4 orders of magnitude lower than previous estimates of lower-mantle conductivity derived from studies of geomagnetic secular variations.
Status of the Microbial Census
Schloss, Patrick D.; Handelsman, Jo
2004-01-01
Over the past 20 years, more than 78,000 16S rRNA gene sequences have been deposited in GenBank and the Ribosomal Database Project, making the 16S rRNA gene the most widely studied gene for reconstructing bacterial phylogeny. While there is a general appreciation that these sequences are largely unique and derived from diverse species of bacteria, there has not been a quantitative attempt to describe the extent of sequencing efforts to date. We constructed rarefaction curves for each bacterial phylum and for the entire bacterial domain to assess the current state of sampling and the relative taxonomic richness of each phylum. This analysis quantifies the general sense among microbiologists that we are a long way from a complete census of the bacteria on Earth. Moreover, the analysis indicates that current sampling strategies might not be the most effective ones to describe novel diversity because there remain numerous phyla that are globally distributed yet poorly sampled. Based on the current level of sampling, it is not possible to estimate the total number of bacterial species on Earth, but the minimum species richness is 35,498. Considering previous global species richness estimates of 10^7 to 10^9, we are certain that this estimate will increase with additional sequencing efforts. The data support previous calls for extensive surveys of multiple chemically disparate environments and of specific phylogenetic groups to advance the census most rapidly. PMID:15590780
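A sketch of the analytical rarefaction calculation underlying such curves (Hurlbert's expected richness) is given below, with a hypothetical abundance vector; it is illustrative only and not the authors' pipeline.

```python
# Hedged sketch of an analytical rarefaction curve (expected richness in a
# random subsample of size n); the abundance vector is hypothetical.
import numpy as np
from scipy.special import gammaln

def log_comb(n, k):
    """Log of the binomial coefficient C(n, k), stable for large counts."""
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def expected_richness(abundances, n):
    """Expected number of taxa observed in a random subsample of size n."""
    N = abundances.sum()
    terms = []
    for Ni in abundances:
        if N - Ni >= n:
            terms.append(1.0 - np.exp(log_comb(N - Ni, n) - log_comb(N, n)))
        else:
            terms.append(1.0)   # taxon is certain to appear in the subsample
    return float(np.sum(terms))

counts = np.array([5000, 1200, 300, 80, 40, 10, 5, 2, 1, 1])   # sequences per taxon
curve = [expected_richness(counts, n) for n in range(1, counts.sum(), 500)]
print(curve[:5])   # a curve that keeps climbing indicates under-sampling
```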
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carletta, Nicholas D.; Mullendore, Gretchen L.; Starzec, Mariusz
Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which affects cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study was to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from a single ground-based radar. Four methods were used to identify the LMD and validated against dual-Doppler derived vertical mass divergence fields for six cases with a variety of storm types. The best method for locating the LMD was determined to be the method that used a reflectivity texture technique to determine convective cores and a multi-layer echo identification to determine anvil locations. Although an improvement over previously published methods, the new methodology still produced unreliable results in certain regimes. The methodology worked best when applied to mature updrafts, as the anvil needs time to grow to a detectable size. Thus, radar reflectivity is found to be valuable in estimating the LMD, but storm maturity must also be considered for best results.
Ozone Production and Control Strategies for Southern Taiwan
NASA Astrophysics Data System (ADS)
Shiu, C.; Liu, S.; Chang, C.; Chen, J.; Chou, C. C.; Lin, C.
2006-12-01
An observation-based modeling (OBM) approach is used to estimate the ozone production efficiency and production rate of O3 (P(O3)) in southern Taiwan. The approach can also provide an indirect estimate of the concentration of OH. Measured concentrations of two aromatic hydrocarbons, ethylbenzene and m,p-xylene, are used to estimate the degree of photochemical processing and the amounts of photochemically consumed NOx and NMHCs. In addition, a one-dimensional (1d) photochemical model is used to compare with the OBM results. The average ozone production efficiency during the field campaign in the Kaohsiung-Pingtung area in Fall 2003 is found to be about 5, comparable to previous works. The relationship of P(O3) with NOx is examined in detail and compared to previous studies. The derived OH concentrations from this approach are in fair agreement with values calculated from the 1d photochemical model. The relationship of total oxidants (e.g., O3+NO2) versus initial NOx and NMHCs suggests that reducing NMHCs is more effective in controlling total oxidants than reducing NOx. For O3 control, reducing NMHCs is even more effective than reducing NOx due to the NO titration effect. This observation-based approach provides a good alternative for understanding the production of ozone and formulating ozone control strategies in urban and suburban environments without measurements of peroxy radicals.
Significant contribution of Archaea to extant biomass in marine subsurface sediments.
Lipp, Julius S; Morono, Yuki; Inagaki, Fumio; Hinrichs, Kai-Uwe
2008-08-21
Deep drilling into the marine sea floor has uncovered a vast sedimentary ecosystem of microbial cells. Extrapolation of direct counts of stained microbial cells to the total volume of habitable marine subsurface sediments suggests that between 56 Pg (ref. 1) and 303 Pg (ref. 3) of cellular carbon could be stored in this largely unexplored habitat. From recent studies using various culture-independent techniques, no clear picture has yet emerged as to whether Archaea or Bacteria are more abundant in this extensive ecosystem. Here we show that in subsurface sediments buried deeper than 1 m in a wide range of oceanographic settings at least 87% of intact polar membrane lipids, biomarkers for the presence of live cells, are attributable to archaeal membranes, suggesting that Archaea constitute a major fraction of the biomass. Results obtained from modified quantitative polymerase chain reaction and slot-blot hybridization protocols support the lipid-based evidence and indicate that these techniques have previously underestimated archaeal biomass. The lipid concentrations are proportional to those of total organic carbon. On the basis of this relationship, we derived an independent estimate of amounts of cellular carbon in the global marine subsurface biosphere. Our estimate of 90 Pg of cellular carbon is consistent, within an order of magnitude, with previous estimates, and underscores the importance of marine subsurface habitats for global biomass budgets.
Effect of sampling rate and record length on the determination of stability and control derivatives
NASA Technical Reports Server (NTRS)
Brenner, M. J.; Iliff, K. W.; Whitman, R. K.
1978-01-01
Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.
Evaluating Principal Surrogate Markers in Vaccine Trials in the Presence of Multiphase Sampling
Huang, Ying
2017-01-01
This paper focuses on the evaluation of vaccine-induced immune responses as principal surrogate markers for predicting a given vaccine's effect on the clinical endpoint of interest. To address the problem of missing potential outcomes under the principal surrogate framework, we can utilize baseline predictors of the immune biomarker(s) or vaccinate uninfected placebo recipients at the end of the trial and measure their immune biomarkers. Examples of good baseline predictors are baseline immune responses when subjects enrolled in the trial have been previously exposed to the same antigen, as in our motivating application of the Zostavax Efficacy and Safety Trial (ZEST). However, laboratory assays of these baseline predictors are expensive and therefore their subsampling among participants is commonly performed. In this paper we develop a methodology for estimating principal surrogate values in the presence of baseline predictor subsampling. Under a multiphase sampling framework, we propose a semiparametric pseudo-score estimator based on conditional likelihood and also develop several alternative semiparametric pseudo-score or estimated likelihood estimators. We derive corresponding asymptotic theories and analytic variance formulas for these estimators. Through extensive numeric studies, we demonstrate good finite sample performance of these estimators and the efficiency advantage of the proposed pseudo-score estimator in various sampling schemes. We illustrate the application of our proposed estimators using data from an immune biomarker study nested within the ZEST trial. PMID:28653408
An Estimate of Avian Mortality at Communication Towers in the United States and Canada
Longcore, Travis; Rich, Catherine; Mineau, Pierre; MacDonald, Beau; Bert, Daniel G.; Sullivan, Lauren M.; Mutrie, Erin; Gauthreaux, Sidney A.; Avery, Michael L.; Crawford, Robert L.; Manville, Albert M.; Travis, Emilie R.; Drake, David
2012-01-01
Avian mortality at communication towers in the continental United States and Canada is an issue of pressing conservation concern. Previous estimates of this mortality have been based on limited data and have not included Canada. We compiled a database of communication towers in the continental United States and Canada and estimated avian mortality by tower with a regression relating avian mortality to tower height. This equation was derived from 38 tower studies for which mortality data were available and corrected for sampling effort, search efficiency, and scavenging where appropriate. Although most studies document mortality at guyed towers with steady-burning lights, we accounted for lower mortality at towers without guy wires or steady-burning lights by adjusting estimates based on published studies. The resulting estimate of mortality at towers is 6.8 million birds per year in the United States and Canada. Bootstrapped subsampling indicated that the regression was robust to the choice of studies included and a comparison of multiple regression models showed that incorporating sampling, scavenging, and search efficiency adjustments improved model fit. Estimating total avian mortality is only a first step in developing an assessment of the biological significance of mortality at communication towers for individual species or groups of species. Nevertheless, our estimate can be used to evaluate this source of mortality, develop subsequent per-species mortality estimates, and motivate policy action. PMID:22558082
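The overall workflow (fit a mortality-height regression from studied towers, then apply it across a tower database with adjustments for lighting and guy wires) can be caricatured as below. The functional form, adjustment factor, and all numbers are hypothetical placeholders, not the published regression.

```python
# Hedged sketch of the general workflow: regress per-tower mortality on tower
# height from study data, then apply the fitted curve to a tower inventory.
import numpy as np

study_height_m = np.array([120, 180, 250, 300, 420, 520])        # towers with carcass studies
study_kills_per_yr = np.array([15, 60, 210, 450, 1600, 3400])    # corrected mortality counts

# log-linear fit: log(mortality) = a + b * height
b, a = np.polyfit(study_height_m, np.log(study_kills_per_yr), 1)

def predicted_mortality(height_m, guyed=True, lit=True):
    est = np.exp(a + b * height_m)
    if not (guyed and lit):       # downward adjustment for unguyed or unlit towers
        est *= 0.3                # placeholder adjustment factor
    return est

tower_db_heights = np.random.uniform(30, 400, 84000)             # stand-in tower inventory
total = predicted_mortality(tower_db_heights).sum()
print(f"estimated annual mortality across the inventory: {total:,.0f} birds")
```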
NASA Astrophysics Data System (ADS)
Léon, Olivier; Piot, Estelle; Sebbane, Delphine; Simon, Frank
2017-06-01
The present study provides theoretical details and experimental validation results for the approach proposed by Minotti et al. (Aerosp Sci Technol 12(5):398-407, 2008) for measuring amplitudes and phases of acoustic velocity components (AVC), i.e., the waveform parameters of each velocity component induced by an acoustic wave, in fully turbulent duct flows carrying multi-tone acoustic waves. Theoretical results support that the proposed turbulence rejection method, based on the estimation of cross power spectra between velocity measurements and a reference signal such as a wall pressure measurement, provides asymptotically efficient estimators with respect to the number of samples. Furthermore, it is shown that the estimator uncertainties can be simply estimated, accounting for the characteristics of the measured flow turbulence spectra. Two laser-based measurement campaigns were conducted in order to validate the acoustic velocity estimation approach and the uncertainty estimates derived. While in previous studies estimates were obtained using laser Doppler velocimetry (LDV), it is demonstrated that high-repetition-rate particle image velocimetry (PIV) can also be successfully employed. The two measurement techniques provide very similar acoustic velocity amplitude and phase estimates for the cases investigated, which are of practical interest for acoustic liner studies. In a broader sense, this approach may be beneficial for non-intrusive sound emission studies in wind tunnel testing.
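The turbulence-rejection idea (averaged cross power spectra between the measured velocity and a reference wall-pressure signal) can be sketched as below, assuming synthetic signals and a bin-centred tone; this illustrates the general principle, not the authors' processing chain.

```python
# Hedged sketch: the complex amplitude of the acoustic velocity at a tone is
# recovered from the averaged cross spectrum with a reference wall pressure,
# which averages out the incoherent turbulence. Signals are synthetic.
import numpy as np
from scipy.signal import csd, welch

fs, f0, dur = 25_600.0, 1_000.0, 10.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)

p_ref = 2.0 * np.cos(2 * np.pi * f0 * t)                      # reference wall pressure
u = (0.05 * np.cos(2 * np.pi * f0 * t + 0.4)                  # acoustic velocity component
     + 0.5 * rng.standard_normal(t.size))                     # plus turbulence "noise"

nper = 4096
f, Ppp = welch(p_ref, fs, nperseg=nper, scaling="spectrum")
_, Ppu = csd(p_ref, u, fs, nperseg=nper, scaling="spectrum")
k = np.argmin(np.abs(f - f0))                                 # tone bin

amp_p = np.sqrt(2 * Ppp[k])                                   # pressure tone amplitude
amp_u = 2 * np.abs(Ppu[k]) / amp_p                            # acoustic velocity amplitude (~0.05)
phase_u = np.angle(Ppu[k])                                    # phase of u relative to p_ref
print(amp_u, phase_u)   # amplitude ~0.05, phase magnitude ~0.4 rad despite the turbulence
```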
A Comparison of Growth Percentile and Value-Added Models of Teacher Performance. Working Paper #39
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Stacy, Brian W.; Wooldridge, Jeffrey M.
2014-01-01
School districts and state departments of education frequently must choose among a variety of methods for estimating teacher quality. This paper examines under what circumstances the choice between estimators of teacher quality is important. We examine estimates derived from student growth percentile measures and estimates derived from commonly…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher
Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban areas and their dynamics economically and in a timely manner. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps, including data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Different from previous fixed-threshold methods with over- and under-estimation issues, in our method the optimal thresholds are estimated based on cluster size and overall nightlight magnitude in the cluster, and they vary with clusters. Two large countries, the United States and China, with different urbanization patterns were selected for mapping urban extents using the proposed method. The results indicate that urbanized area occupies about 2% of total land area in the US, ranging from lower than 0.5% to higher than 10% at the state level, and less than 1% in China, ranging from lower than 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. It was found that our method can map urban areas in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More important, our method shows its potential to map global urban extents and temporal dynamics using the DMSP/OLS NTL data in a timely, cost-effective way.
The role of impoundments in the sediment budget of the conterminous United States
Renwick, W.H.; Smith, S.V.; Bartley, J.D.; Buddemeier, R.W.
2005-01-01
Previous work on sediment budgets for U.S. agricultural regions has concluded that most sediment derived from accelerated erosion is still on the landscape, primarily in colluvial and alluvial deposits. Here we examine the role of small impoundments in the subcontinental sediment budget. A recent inventory based on 30-m satellite imagery reveals approximately 2.6 million ponds, while extrapolation from a sample of 1:24,000 topographic quadrangles suggests the total may be as large as 8-9 million. These ponds capture an estimated 21% of the total drainage area of the conterminous U.S., representing 25% of total sheet and rill erosion. We estimate the total sedimentation in these small impoundments using three different methods; these estimates range from 0.43 to 1.78 × 10^9 m^3 yr^-1. Total sedimentation in ~43,000 reservoirs from the National Inventory of Dams is estimated at 1.67 × 10^9 m^3 yr^-1. Total USLE erosion in 1992 was 2.4 × 10^9 m^3 yr^-1, and export to coastal areas is estimated at 0.6 × 10^9 m^3 yr^-1. Total sedimentation in impoundments is large in relation to upland erosion, in apparent contradiction to previous studies that have identified colluvial and alluvial deposition as the primary sinks. Several alternative hypotheses that could help explain this result are proposed. Regardless of which of these alternatives may prove to be the most significant in any given setting, it is clear that most sedimentation is now taking place in subaqueous rather than subaerial environments, and that small impoundments are a major sediment sink. © 2005 Elsevier B.V. All rights reserved.
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
NASA Astrophysics Data System (ADS)
Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Volkova, Elena; Uspensky, Sergey
2014-05-01
At present, physical-mathematical modeling of water and heat exchange between vegetation-covered land surfaces and the atmosphere is the most appropriate method to describe the formation of the water and heat regime over large territories. The developed model of such processes (Land Surface Model, LSM) is intended for calculating evaporation, transpiration by vegetation, soil water content and other water and heat regime characteristics, as well as the distributions of soil temperature and moisture with depth, utilizing satellite remote sensing data on the land surface and meteorological conditions. The model parameters and input variables are the soil and vegetation characteristics and the meteorological characteristics, respectively. Their values have been determined from ground-based observations or from satellite-based measurements by the AVHRR/NOAA, MODIS/EOS Terra and Aqua, and SEVIRI/Meteosat-9, -10 radiometers. The case study has been carried out for a part of the agricultural Central Black Earth region with coordinates 49.5-54 deg. N, 31-43 deg. E and a total area of 227,300 km², located in the steppe-forest zone of European Russia, for the 2009-2012 vegetation seasons. From AVHRR data, estimates have been derived of three types of land surface temperature (LST): land surface skin temperature Tsg, air-foliage temperature Ta and effective radiation temperature Ts.eff, as well as emissivity E, normalized difference vegetation index NDVI, vegetation cover fraction B, leaf area index LAI, cloudiness and precipitation. From MODIS data, estimates of LST Tls, E, NDVI and LAI have been obtained. The SEVIRI data have been used to build estimates of Tls, Ta, E, LAI and precipitation. The previously developed methods and technology for the above AVHRR-derived estimates have been improved and adapted to the study area. To check the reliability of the Ts.eff and Ta estimates for the named seasons, their error statistics have been analyzed through comparison with observations at agricultural meteorological stations of the study region. The mentioned MODIS-based remote sensing products for the same vegetation seasons have been built using data downloaded from the LP DAAC (NASA) website. The reliability of the MODIS-derived Tls estimates has been confirmed by comparison with similar estimates from synchronous AVHRR, SEVIRI and ground-based data. To retrieve Tls and E from SEVIRI data in daytime and at night, a method and technology have been developed for thematic processing of these data in IR channels 9 and 10 (10.8 and 12.0 µm) at three successive times under cloud-free conditions, without using exact values of E. This technology has also been adapted to the study area. The reliability of the Tls estimates has been analyzed through comparison with synchronous SEVIRI-derived Tls estimates obtained at the Land Surface Analysis Satellite Applications Facility (LSA SAF, Lisbon, Portugal) and with MODIS-derived Tls estimates. In the first comparison, daily- or monthly-averaged RMS deviations did not exceed 2 deg. C for various dates and months during the 2009-2012 vegetation seasons. The RMS deviation of Tls(SEVIRI) from Tls(MODIS) was in the range of 1.0-3.0 deg. C. A method and technology have also been developed and tested to derive Ta values from SEVIRI data in daytime and at night. This method is based on satellite-derived estimates of Tls and a regression relationship between Tls and ground-measured values of Ta. Comparison of satellite-based Ta estimates with synchronous standard-term ground-based observations at the network of meteorological stations of the study area for the summer periods of 2009-2012 has given RMS deviations in the range of 1.8-3.0 deg. C. The archive of satellite products has also been supplemented with an array of LAI estimates retrieved from SEVIRI data at LSA SAF for the study area and the 2011-2012 growing seasons. The possibility is shown of using the developed Multi Threshold Method (MTM) for generating AVHRR- and SEVIRI-based estimates of daily and monthly precipitation amounts for the region of interest. The MTM provides cloud detection and identification of cloud types, estimation of the maximum liquid water content and cloud layer water content, allocation of precipitation zones, and determination of instantaneous maximum precipitation intensities at the pixel scale around the clock throughout the year, independently of the land surface type. In developing procedures for utilizing satellite precipitation estimates in the model during the vegetation season, algorithms and programs have been built for the transition from estimated rainfall intensities to their daily values. A comparison of the daily, monthly and seasonal AVHRR- and SEVIRI-derived precipitation sums with similar values retrieved from network ground-based observations using a weighting interpolation procedure has been carried out. The agreement of all three evaluations is satisfactory. To assimilate remote sensing products into the model, special techniques have been developed, including: 1) replacement of the ground-measured model parameters LAI and B by their satellite-derived estimates, the feasibility of which has been confirmed through various comparisons of a) LAI behavior for ground- and satellite-derived values, b) modeled values of Ts and Tf, satellite-based estimates of Ts.eff, Tls and Ta, and ground-based measurements of LST, and c) modeled and measured values of soil water content W and evapotranspiration Ev; 2) utilization of the satellite-derived LSTs Ts.eff, Tls and Ta and of precipitation estimates as input model variables instead of the respective ground-measured temperatures and rainfall when assessing the accuracy of soil water content, evapotranspiration and soil temperature calculations; 3) accounting for the spatial variability of the satellite-based LAI, B, LST and precipitation estimates by entering their area-distributed values into the model. For the 2009-2012 vegetation seasons, the characteristics of the water and heat regimes of the region under investigation have been calculated utilizing satellite estimates of vegetation characteristics, LST and precipitation in the model. The calculation results show that the discrepancies in evapotranspiration and soil water content values are within acceptable limits.
Bryant, Jessica V; Zeng, Xingyuan; Hong, Xiaojiang; Chatterjee, Helen J; Turvey, Samuel T
2017-03-01
Conservation management requires an evidence-based approach, as uninformed decisions can signify the difference between species recovery and loss. The Hainan gibbon, the world's rarest ape, reportedly exploits the largest home range of any gibbon species, with these apparently large spatial requirements potentially limiting population recovery. However, previous home range assessments rarely reported survey methods, effort, or analytical approaches, hindering critical evaluation of estimate reliability. For extremely rare species where data collection is challenging, it also is unclear what impact such limitations have on estimating home range requirements. We re-evaluated Hainan gibbon spatial ecology using 75 hr of observations from 35 contact days over 93 field-days across dry (November 2010-February 2011) and wet (June 2011-September 2011) seasons. We calculated home range area for three social groups (N = 21 individuals) across the sampling period, seasonal estimates for one group (based on 24 days of observation; 12 days per season), and between-group home range overlap using multiple approaches (Minimum Convex Polygon, Kernel Density Estimation, Local Convex Hull, Brownian Bridge Movement Model), and assessed estimate reliability and representativeness using three approaches (Incremental Area Analysis, spatial concordance, and exclusion of expected holes). We estimated a yearly home range of 1-2 km², with 1.49 km² closest to the median of all estimates. Although Hainan gibbon spatial requirements are relatively large for gibbons, our new estimates are smaller than previous estimates used to explain the species' limited recovery, suggesting that habitat availability may be less important in limiting population growth. We argue that other ecological, genetic, and/or anthropogenic factors are more likely to constrain Hainan gibbon recovery, and conservation attention should focus on elucidating and managing these factors. Re-evaluation reveals Hainan gibbon home range as c. 1-2 km². Hainan gibbon home range is, therefore, similar to other Nomascus gibbons. Limited data for extremely rare species does not necessarily prevent derivation of robust home range estimates. © 2016 Wiley Periodicals, Inc.
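Of the estimators listed above, the Minimum Convex Polygon is simple enough to sketch directly; the following assumes synthetic fixes on a local metric grid and a conventional 95% trim, and is not the study's analysis.

```python
# Minimal MCP home-range sketch; kernel and Brownian-bridge estimators would
# need dedicated packages. Sighting locations below are synthetic (metres).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(4)
fixes = rng.normal(loc=[0, 0], scale=[400, 500], size=(200, 2))

def mcp_area_km2(points, percent=95):
    """MCP area after trimming the most distant fixes from the centroid."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    keep = points[d <= np.percentile(d, percent)]
    hull = ConvexHull(keep)
    return hull.volume / 1e6          # ConvexHull.volume is the area in 2-D

print(f"95% MCP home range: {mcp_area_km2(fixes):.2f} km^2")
```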
Neandertal admixture in Eurasia confirmed by maximum-likelihood analysis of three genomes.
Lohse, Konrad; Frantz, Laurent A F
2014-04-01
Although there has been much interest in estimating histories of divergence and admixture from genomic data, it has proved difficult to distinguish recent admixture from long-term structure in the ancestral population. Thus, recent genome-wide analyses based on summary statistics have sparked controversy about the possibility of interbreeding between Neandertals and modern humans in Eurasia. Here we derive the probability of full mutational configurations in nonrecombining sequence blocks under both admixture and ancestral structure scenarios. Dividing the genome into short blocks gives an efficient way to compute maximum-likelihood estimates of parameters. We apply this likelihood scheme to triplets of human and Neandertal genomes and compare the relative support for a model of admixture from Neandertals into Eurasian populations after their expansion out of Africa against a history of persistent structure in their common ancestral population in Africa. Our analysis allows us to conclusively reject a model of ancestral structure in Africa and instead reveals strong support for Neandertal admixture in Eurasia at a higher rate (3.4-7.3%) than suggested previously. Using analysis and simulations we show that our inference is more powerful than previous summary statistics and robust to realistic levels of recombination.
Refined Rotational Period, Pole Solution, and Shape Model for (3200) Phaethon
NASA Astrophysics Data System (ADS)
Ansdell, Megan; Meech, Karen J.; Hainaut, Olivier; Buie, Marc W.; Kaluna, Heather; Bauer, James; Dundon, Luke
2014-09-01
(3200) Phaethon exhibits both comet- and asteroid-like properties, suggesting it could be a rare transitional object such as a dormant comet or previously volatile-rich asteroid. This justifies detailed study of (3200) Phaethon's physical properties as a better understanding of asteroid-comet transition objects can provide insight into minor body evolution. We therefore acquired time series photometry of (3200) Phaethon over 15 nights from 1994 to 2013, primarily using the Tektronix 2048 × 2048 pixel CCD on the University of Hawaii 2.2 m telescope. We utilized light curve inversion to (1) refine (3200) Phaethon's rotational period to P = 3.6032 ± 0.0008 hr; (2) estimate a rotational pole orientation of λ = +85° ± 13° and β = -20° ± 10° and (3) derive a shape model. We also used our extensive light curve data set to estimate the slope parameter of (3200) Phaethon's phase curve as G ~ 0.06, consistent with C-type asteroids. We discuss how this highly oblique pole orientation with a negative ecliptic latitude supports previous evidence for (3200) Phaethon's origin in the inner main asteroid belt as well as the potential for deeply buried volatiles fueling impulsive yet rare cometary outbursts.
NASA Technical Reports Server (NTRS)
Zent, Aaron P.; Quinn, Richard
1994-01-01
The Martian regolith is the most substantial volatile reservoir on the planet; it holds CO2 as adsorbate, and can exchange that CO2 with the atmosphere-cap system over timescales of 10^5 to 10^6 years. The climatic response to insolation changes caused by obliquity and eccentricity variations depends in part on the total reservoir of adsorbed CO2. Previous estimates of the adsorbate inventory have been made by measuring the adsorptive behavior of one or more Mars-analog materials, and deriving an empirical equation that described that adsorption as a function of the partial pressure of CO2 and the temperature of the regolith. The current CO2 inventory is that which satisfies adsorptive equilibrium, observed atmospheric pressure, and no permanent CO2 caps. There is laboratory evidence that H2O poisons the CO2 adsorptive capacity of most materials. No consideration of CO2 - H2O co-adsorption was given in previous estimates of the Martian CO2 inventory, although H2O is present in the vapor phase, and so as adsorbate, throughout the regolith.
Factors influencing reporting and harvest probabilities in North American geese
Zimmerman, G.S.; Moser, T.J.; Kendall, W.L.; Doherty, P.F.; White, Gary C.; Caswell, D.F.
2009-01-01
We assessed variation in reporting probabilities of standard bands among species, populations, harvest locations, and size classes of North American geese to enable estimation of unbiased harvest probabilities. We included reward (US$10, $20, $30, $50, or $100) and control ($0) banded geese from 16 recognized goose populations of 4 species: Canada (Branta canadensis), cackling (B. hutchinsii), Ross's (Chen rossii), and snow geese (C. caerulescens). We incorporated spatially explicit direct recoveries and live recaptures into a multinomial model to estimate reporting, harvest, and band-retention probabilities. We compared various models for estimating harvest probabilities at country (United States vs. Canada), flyway (5 administrative regions), and harvest area (i.e., flyways divided into northern and southern sections) scales. Mean reporting probability of standard bands was 0.73 (95% CI 0.69–0.77). Point estimates of reporting probabilities for goose populations or spatial units varied from 0.52 to 0.93, but confidence intervals for individual estimates overlapped and model selection indicated that models with species, population, or spatial effects were less parsimonious than those without these effects. Our estimates were similar to recently reported estimates for mallards (Anas platyrhynchos). We provide current harvest probability estimates for these populations using our direct measures of reporting probability, improving the accuracy of previous estimates obtained from recovery probabilities alone. Goose managers and researchers throughout North America can use our reporting probabilities to correct recovery probabilities estimated from standard banding operations for deriving spatially explicit harvest probabilities.
NASA Technical Reports Server (NTRS)
Grecu, Mircea; Olson, William S.; Shie, Chung-Lin; L'Ecuyer, Tristan S.; Tao, Wei-Kuo
2009-01-01
In this study, satellite passive microwave sensor observations from the TRMM Microwave Imager (TMI) are utilized to make estimates of latent + eddy sensible heating rates (Q1-QR) in regions of precipitation. The TMI heating algorithm (TRAIN) is calibrated, or "trained" using relatively accurate estimates of heating based upon spaceborne Precipitation Radar (PR) observations collocated with the TMI observations over a one-month period. The heating estimation technique is based upon a previously described Bayesian methodology, but with improvements in supporting cloud-resolving model simulations, an adjustment of precipitation echo tops to compensate for model biases, and a separate scaling of convective and stratiform heating components that leads to an approximate balance between estimated vertically-integrated condensation and surface precipitation. Estimates of Q1-QR from TMI compare favorably with the PR training estimates and show only modest sensitivity to the cloud-resolving model simulations of heating used to construct the training data. Moreover, the net condensation in the corresponding annual mean satellite latent heating profile is within a few percent of the annual mean surface precipitation rate over the tropical and subtropical oceans where the algorithm is applied. Comparisons of Q1 produced by combining TMI Q1-QR with independently derived estimates of QR show reasonable agreement with rawinsonde-based analyses of Q1 from two field campaigns, although the satellite estimates exhibit heating profile structure with sharper and more intense heating peaks than the rawinsonde estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Ekta, E-mail: jainekta05@gmail.com; Pagare, Gitanjali, E-mail: gita-pagare@yahoo.co.in; Sanyal, S. P., E-mail: sps.physicsbu@gmail.com
2016-05-06
The structural, electronic, elastic, mechanical and thermal properties of the AlFe intermetallic compound in the B2-type (CsCl) structure have been investigated using first-principles calculations. The exchange-correlation term was treated within the generalized gradient approximation. Ground-state properties, i.e. the lattice constant (a0), bulk modulus (B) and first-order pressure derivative of the bulk modulus (B'), are presented. The density of states is derived and shows the metallic character of the present compound. Our results for C11, C12 and C44 agree well with previous theoretical data. Using Pugh's criterion (B/GH < 1.75), the brittle character of AlFe is indicated. In addition, the shear modulus (GH), Young's modulus (E), sound wave velocities and Debye temperature (θD) have also been estimated.
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
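By way of illustration, the following is a minimal Python sketch of the Wolff cluster-flipping update named above, on a small simple-cubic lattice with periodic boundaries; the lattice size, temperature, sweep count, and equilibration cut are placeholders, and this is not the authors' production code.

```python
import numpy as np

def wolff_sweep(spins, beta, rng):
    """One Wolff cluster update on a 3D simple-cubic Ising lattice (J = 1, periodic)."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)          # bond-activation probability
    seed = tuple(rng.integers(0, L, size=3))
    cluster_spin = spins[seed]
    stack, in_cluster = [seed], {seed}
    while stack:
        x, y, z = stack.pop()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
            if nb not in in_cluster and spins[nb] == cluster_spin and rng.random() < p_add:
                in_cluster.add(nb)
                stack.append(nb)
    for site in in_cluster:                     # flip the whole cluster at once
        spins[site] = -spins[site]
    return len(in_cluster)

rng = np.random.default_rng(0)
L, beta = 16, 0.2216546                         # beta chosen near the reported Kc
spins = rng.choice([-1, 1], size=(L, L, L))
mags = []
for step in range(2000):
    wolff_sweep(spins, beta, rng)
    if step >= 500:                             # crude equilibration cut
        mags.append(abs(spins.mean()))
print("mean |m| near criticality:", np.mean(mags))
```

Histogram reweighting and the cross-correlation analysis described in the abstract would operate on time series such as `mags` collected from runs like this, at far larger lattice sizes.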
Dwell time-based stabilisation of switched delay systems using free-weighting matrices
NASA Astrophysics Data System (ADS)
Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay
2018-01-01
In this paper, we present a quasi-convex optimisation method to minimise an upper bound of the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced and the upper bound for the derivative of Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. Then, a sufficient condition for the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities, the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise the dwell time minimiser controllers. The algorithm solves the problem with successive linearisation of nonlinear conditions.
Techniques for carrying out radiative transfer calculations for the Martian atmospheric dust
NASA Technical Reports Server (NTRS)
Aronson, J. R.; Emslie, A. G.; Strong, P. F.
1974-01-01
A description is given of the modification of a theory on the reflectance of particulate media so as to apply it to analysis of the infrared spectra obtained by the IRIS instrument on Mariner 9. With the aid of this theory and the optical constants of muscovite mica, quartz, andesite, anorthosite, diopside pyroxenite, and dunite, modeling calculations were made to refine previous estimates of the mineralogical composition of the Martian dust particles. These calculations suggest that a feldspar rich mixture is a very likely composition for the dust particles. The optical constants used for anorthosite and diopside pyroxenite were derived during this program from reflectance measurements. Those for the mica were derived from literature reflectance data. Finally, a computer program was written to invert the measured radiance data so as to obtain the absorption coefficient spectrum which should then be independent of the temperature profile and gaseous component effects.
Lidar arc scan uncertainty reduction through scanning geometry optimization
NASA Astrophysics Data System (ADS)
Wang, Hui; Barthelmie, Rebecca J.; Pryor, Sara C.; Brown, Gareth.
2016-04-01
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production prediction. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty is scaled with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30 % of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation.
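A minimal numerical illustration of the reported scaling, in which the relative standard error of the 10 min mean wind speed is roughly 30 % of the turbulence intensity; the wind speed and turbulence intensity below are arbitrary example values, not data from the three sites.

```python
def arc_scan_speed_uncertainty(mean_speed_ms, turbulence_intensity, scale=0.30):
    """Approximate standard error (m/s) of a 10 min arc-scan mean wind speed.

    Implements the scaling reported in the abstract: the relative standard error
    is roughly `scale` (about 30 %) of the turbulence intensity.
    """
    relative_error = scale * turbulence_intensity
    return relative_error * mean_speed_ms

# Example: 8 m/s mean wind at 10 % turbulence intensity -> ~0.24 m/s standard error
print(arc_scan_speed_uncertainty(8.0, 0.10))
```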
Lidar arc scan uncertainty reduction through scanning geometry optimization
NASA Astrophysics Data System (ADS)
Wang, H.; Barthelmie, R. J.; Pryor, S. C.; Brown, G.
2015-10-01
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty is scaled with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30 % of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation when arc scans are used for wind resource assessment.
NASA Astrophysics Data System (ADS)
Tobin, K. J.; Bennett, M. E.
2017-12-01
Over the last decade autocalibration routines have become commonplace in watershed modeling. This approach is most often used to simulate streamflow at a basin's outlet. In alpine settings spring/early summer snowmelt is by far the dominant signal in this system. Therefore, there is great potential for a modeled watershed to underperform during other times of the year. This tendency has been noted in many prior studies. In this work, the Soil and Water Assessment Tool (SWAT) model was autocalibrated with the SUFI-2 routine. Two mountainous watersheds from Idaho and Utah were examined. In this study, the basins were calibrated on a monthly time step against satellite-based evapotranspiration (ET) from the MODIS 16A2 product. The gridded MODIS product was ideally suited to derive an estimate of ET on a subbasin basis. Soil moisture data were derived from extrapolation of in situ sites from the SNOwpack TELemetry (SNOTEL) network. Previous work has indicated that in situ soil moisture can be applied to derive an estimate at a significant distance (>30 km) away from the in situ site. Optimized ET and soil moisture parameter values were then applied to streamflow simulations. Preliminary results indicate improved streamflow performance during both the calibration (2005-2011) and validation (2012-2014) periods. Streamflow performance was assessed not only with standard objective metrics (bias and the Nash-Sutcliffe coefficient) but also in terms of baseflow accuracy, demonstrating the utility of this approach in improving watershed modeling fidelity outside the main snowmelt season.
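The bias and Nash-Sutcliffe metrics mentioned above reduce to a few lines; this is a generic sketch with hypothetical monthly flows, not the SWAT/SUFI-2 implementation.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Percent bias; positive values indicate the simulation underestimates total flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [12.0, 30.0, 55.0, 20.0, 9.0]   # hypothetical monthly streamflow (m^3/s)
sim = [10.0, 28.0, 60.0, 18.0, 8.0]
print(nash_sutcliffe(obs, sim), percent_bias(obs, sim))
```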
Eytan, Danny; Goodwin, Andrew J; Greer, Robert; Guerguerian, Anne-Marie; Laussen, Peter C
2017-01-01
Heart rate (HR) and blood pressure (BP) form the basis for monitoring the physiological state of patients. Although norms have been published for healthy and hospitalized children, little is known about their distributions in critically ill children. The objective of this study was to report the distributions of these basic physiological variables in hospitalized critically ill children. Continuous data from bedside monitors were collected and stored at 5-s intervals from 3,677 subjects aged 0-18 years admitted over a period of 30 months to the pediatric and cardiac intensive care units at a large quaternary children's hospital. Approximately 1.13 billion values served to estimate age-specific distributions for these two basic physiological variables: HR and intra-arterial BP. Centile curves were derived from the sample distributions and compared to common reference ranges. Properties such as kurtosis and skewness of these distributions are described. In comparison to previously published reference ranges, we show that children in these settings exhibit markedly higher HRs than their healthy counterparts or children hospitalized on in-patient wards. We also compared commonly used published estimates of hypotension in children (e.g., the PALS guidelines) to the values we derived from critically ill children. This is a first study reporting the distributions of basic physiological variables in children in the pediatric intensive care settings, and the percentiles derived may serve as useful references for bedside clinicians and clinical trials.
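A sketch, under assumed column names and synthetic data, of how age-specific centile tables like those described could be derived from archived monitor samples; the age bins are illustrative cut points, not the study's groupings.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical monitor samples: one row per 5 s observation (synthetic values)
df = pd.DataFrame({
    "age_years": rng.uniform(0, 18, 100_000),
    "heart_rate": rng.normal(120, 25, 100_000),
})

# Bin by age, then tabulate centiles of heart rate within each bin
df["age_bin"] = pd.cut(df["age_years"], [0, 1, 3, 6, 12, 18], include_lowest=True)
centiles = (df.groupby("age_bin", observed=True)["heart_rate"]
              .quantile([0.05, 0.25, 0.50, 0.75, 0.95])
              .unstack())
print(centiles.round(1))
```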
Development of demi-span equations for predicting height among the Malaysian elderly.
Ngoh, H J; Sakinah, H; Harsa Amylia, M S
2012-08-01
This study aimed to develop demi-span equations for predicting height in the Malaysian elderly and to explore the applicability of previously published demi-span equations derived from adult populations to the elderly. A cross-sectional study was conducted on Malaysian elderly aged 60 years and older. Subjects were residents of eight shelter homes in Peninsular Malaysia; 204 men and 124 women of Malay, Chinese and Indian ethnicity were included. Measurements of weight, height and demi-span were obtained using standard procedures. Statistical analyses were performed using SPSS version 18.0. The demi-span equations obtained were as follows: Men: Height (cm) = 67.51 + (1.29 x demi-span) - (0.12 x age) + 4.13; Women: Height (cm) = 67.51 + (1.29 x demi-span) - (0.12 x age). Height predicted from these new equations demonstrated good agreement with measured height and no significant differences were found between the mean values of predicted and measured heights in either gender (p>0.05). However, the heights predicted from previously published adult-derived demi-span equations failed to yield good agreement with the measured height of the elderly; significant over-estimation and underestimation of heights tended to occur (p>0.05). The new demi-span equations allow prediction of height with sufficient accuracy in the Malaysian elderly. However, further validation on other elderly samples is needed. Also, we recommend caution when using adult-derived demi-span equations to predict height in elderly people.
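The published equations quoted above translate directly into code; the example subject below is hypothetical.

```python
def predict_height_cm(demi_span_cm, age_years, sex):
    """Predict standing height from demi-span using the equations quoted in the abstract.

    Men:   height = 67.51 + 1.29*demi-span - 0.12*age + 4.13
    Women: height = 67.51 + 1.29*demi-span - 0.12*age
    """
    height = 67.51 + 1.29 * demi_span_cm - 0.12 * age_years
    if sex.lower() in ("m", "male", "men"):
        height += 4.13
    return height

# Hypothetical example: a 70-year-old man with a 78 cm demi-span -> about 163.9 cm
print(round(predict_height_cm(78, 70, "male"), 1))
```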
Modeling marbled murrelet (Brachyramphus marmoratus) habitat using LiDAR-derived canopy data
Hagar, Joan C.; Eskelson, Bianca N.I.; Haggerty, Patricia K.; Nelson, S. Kim; Vesely, David G.
2014-01-01
LiDAR (Light Detection And Ranging) is an emerging remote-sensing tool that can provide fine-scale data describing vertical complexity of vegetation relevant to species that are responsive to forest structure. We used LiDAR data to estimate occupancy probability for the federally threatened marbled murrelet (Brachyramphus marmoratus) in the Oregon Coast Range of the United States. Our goal was to address the need identified in the Recovery Plan for a more accurate estimate of the availability of nesting habitat by developing occupancy maps based on refined measures of nest-strand structure. We used murrelet occupancy data collected by the Bureau of Land Management Coos Bay District, and canopy metrics calculated from discrete return airborne LiDAR data, to fit a logistic regression model predicting the probability of occupancy. Our final model for stand-level occupancy included distance to coast, and 5 LiDAR-derived variables describing canopy structure. With an area under the curve value (AUC) of 0.74, this model had acceptable discrimination and fair agreement (Cohen's κ = 0.24), especially considering that all sites in our sample were regarded by managers as potential habitat. The LiDAR model provided better discrimination between occupied and unoccupied sites than did a model using variables derived from Gradient Nearest Neighbor maps that were previously reported as important predictors of murrelet occupancy (AUC = 0.64, κ = 0.12). We also evaluated LiDAR metrics at 11 known murrelet nest sites. Two LiDAR-derived variables accurately discriminated nest sites from random sites (average AUC = 0.91). LiDAR provided a means of quantifying 3-dimensional canopy structure with variables that are ecologically relevant to murrelet nesting habitat, and have not been as accurately quantified by other mensuration methods.
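A hedged sketch of the modeling step described (logistic regression of occupancy on distance to coast plus LiDAR canopy metrics, scored with AUC); the predictors and survey outcomes below are synthetic stand-ins, not the murrelet data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
# Hypothetical stand-level predictors: distance to coast plus two LiDAR canopy metrics
X = np.column_stack([
    rng.uniform(0, 60, n),        # distance to coast (km)
    rng.normal(45, 10, n),        # e.g. mean canopy height (m)
    rng.normal(0.3, 0.1, n),      # e.g. a vertical-complexity index
])
occupied = rng.integers(0, 2, n)  # stand-in for occupancy survey outcomes

model = LogisticRegression(max_iter=1000).fit(X, occupied)
p_hat = model.predict_proba(X)[:, 1]
print("in-sample AUC:", roc_auc_score(occupied, p_hat))
```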
Estimation of phase derivatives using discrete chirp-Fourier-transform-based method.
Gorthi, Sai Siva; Rastogi, Pramod
2009-08-15
Estimation of phase derivatives is an important task in many interferometric measurements in optical metrology. This Letter introduces a method based on discrete chirp-Fourier transform for accurate and direct estimation of phase derivatives, even in the presence of noise. The method is introduced in the context of the analysis of reconstructed interference fields in digital holographic interferometry. We present simulation and experimental results demonstrating the utility of the proposed method.
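A sketch of the discrete chirp-Fourier transform and its use to pick the dominant frequency/chirp-rate pair of a fringe signal, which is the quantity tied to the local phase derivative; the brute-force transform and the synthetic single-component chirp are illustrative, not the authors' implementation.

```python
import numpy as np

def dcft_magnitude(x):
    """Discrete chirp-Fourier transform magnitude over all (k, l) pairs.

    X[k, l] = sum_n x[n] * exp(-2j*pi*(k*n + l*n**2)/N),
    computed with one FFT over n for each chirp-rate index l.
    """
    N = len(x)
    n = np.arange(N)
    out = np.empty((N, N), dtype=complex)
    for l in range(N):
        out[:, l] = np.fft.fft(x * np.exp(-2j * np.pi * l * n**2 / N))
    return np.abs(out)

# Synthetic single-component chirp: phase = 2*pi*(k0*n + l0*n^2)/N plus noise
N, k0, l0 = 64, 9, 3
n = np.arange(N)
x = np.exp(2j * np.pi * (k0 * n + l0 * n**2) / N) + 0.1 * np.random.randn(N)

mag = dcft_magnitude(x)
k_hat, l_hat = np.unravel_index(np.argmax(mag), mag.shape)
print("estimated (k, l):", k_hat, l_hat)   # expect (9, 3): frequency and chirp rate
```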
Mark D. Nelson; Ronald E. McRoberts; Veronica C. Lessard
2005-01-01
Our objective was to test one application of remote sensing technology for complementing forest resource assessments by comparing a variety of existing satellite image-derived land cover maps with national inventory-derived estimates of United States forest land area. National Resources Inventory (NRI) 1997 estimates of non-Federal forest land area differed by 7.5...
Vegetation, plant biomass, and net primary productivity patterns in the Canadian Arctic
NASA Astrophysics Data System (ADS)
Gould, W. A.; Raynolds, M.; Walker, D. A.
2003-01-01
We have developed maps of dominant vegetation types, plant functional types, percent vegetation cover, aboveground plant biomass, and above and belowground annual net primary productivity for Canada north of the northern limit of trees. The area mapped covers 2.5 million km² including glaciers. Ice-free land covers 2.3 million km² and represents 42% of all ice-free land in the Circumpolar Arctic. The maps combine information on climate, soils, geology, hydrology, remotely sensed vegetation classifications, previous vegetation studies, and regional expertise to define polygons drawn using photo-interpretation of a 1:4,000,000 scale advanced very high resolution radiometer (AVHRR) color infrared image basemap. Polygons are linked to vegetation description, associated properties, and descriptive literature through a series of lookup tables in a geographic information system (GIS) database developed as a component of the Circumpolar Arctic Vegetation Map (CAVM) project. Polygons are classified into 20 landcover types including 17 vegetation types. Half of the region is sparsely vegetated (<50% vegetation cover), primarily in the High Arctic (bioclimatic subzones A-C), whereas most (86%) of the estimated aboveground plant biomass (1.5 × 10^15 g) and 87% of the estimated above and belowground annual net primary productivity (2.28 × 10^14 g yr^-1) are concentrated in the Low Arctic (subzones D and E). The maps present more explicit spatial patterns of vegetation and ecosystem attributes than have previously been available; the GIS database is useful in summarizing ecosystem properties and can be easily updated and integrated into circumpolar mapping efforts; and the derived estimates fall within the range of current published estimates.
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data
Dorazio, Robert M.
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
Compatible diagonal-norm staggered and upwind SBP operators
NASA Astrophysics Data System (ADS)
Mattsson, Ken; O'Reilly, Ossian
2018-01-01
The main motivation with the present study is to achieve a provably stable high-order accurate finite difference discretisation of linear first-order hyperbolic problems on a staggered grid. The use of a staggered grid makes it non-trivial to discretise advective terms. To overcome this difficulty we discretise the advective terms using upwind Summation-By-Parts (SBP) operators, while the remaining terms are discretised using staggered SBP operators. The upwind and staggered SBP operators (for each order of accuracy) are compatible, here meaning that they are based on the same diagonal norms, allowing for energy estimates to be formulated. The boundary conditions are imposed using a penalty (SAT) technique, to guarantee linear stability. The resulting SBP-SAT approximations lead to fully explicit ODE systems. The accuracy and stability properties are demonstrated for linear hyperbolic problems in 1D, and for the 2D linearised Euler equations with constant background flow. The newly derived upwind and staggered SBP operators lead to significantly more accurate numerical approximations, compared with the exclusive usage of (previously derived) central-difference first derivative SBP operators.
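For readers unfamiliar with the SBP property that underlies the energy estimates mentioned above, here is a minimal sketch of the standard diagonal-norm second-order first-derivative operator and a numerical check that H D + (H D)^T equals diag(-1, 0, ..., 0, 1), which mimics integration by parts; the new staggered and upwind operators derived in the paper are not reproduced here.

```python
import numpy as np

def sbp_d1_second_order(N, h):
    """Standard diagonal-norm second-order SBP first-derivative operator D = H^{-1} Q."""
    H = h * np.eye(N)
    H[0, 0] = H[-1, -1] = 0.5 * h               # diagonal norm (quadrature) matrix
    Q = 0.5 * (np.eye(N, k=1) - np.eye(N, k=-1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5              # boundary closure
    return H, np.linalg.solve(H, Q)

N, h = 21, 0.05
H, D = sbp_d1_second_order(N, h)
x = np.linspace(0, 1, N)

# SBP property: H D + (H D)^T = diag(-1, 0, ..., 0, 1)
B = H @ D + (H @ D).T
print(np.allclose(B, np.diag([-1.0] + [0.0] * (N - 2) + [1.0])))
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))   # truncation error of the derivative
```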
VLT/SPHERE observations and shape reconstruction of asteroid (6) Hebe
NASA Astrophysics Data System (ADS)
Marsset, Michael; Carry, Benoit; Dumas, Christophe; Vernazza, Pierre; Jehin, Emmanuel; Sonnett, Sarah M.; Fusco, Thierry
2016-10-01
(6) Hebe is a large main-belt asteroid, accounting for about half a percent of the mass of the asteroid belt. Its spectral characteristics and close proximity to dynamical resonances within the main-belt (the 3:1 Kirkwood gap and the nu6 resonance) make it a probable parent body of the H-chondrites and IIE iron meteorites found on Earth. We present new AO images of Hebe obtained with the high-contrast imager SPHERE (Beuzit et al. 2008) as part of the science verification of the instrument. Hebe was observed close to its opposition date and throughout its rotation in order to derive its 3-D shape, and to allow a study of its surface craters. Our observations reveal impact zones that witness a severe collisional disruption for this asteroid. When combined with previous AO images and available lightcurves (both from the literature and from recent optical observations by our team), these new observations allow us to derive a reliable shape model using our KOALA algorithm (Carry et al. 2010). We further derive an estimate of Hebe's density based on its known astrometric mass.
A COMPARISON OF AEROSOL OPTICAL DEPTH SIMULATED USING CMAQ WITH SATELLITE ESTIMATES
Satellite data provide new opportunities to study the regional distribution of particulate matter. The aerosol optical depth (AOD), an estimate derived from satellite-measured irradiance, can be compared against the model-derived estimate to provide an evaluation of the columnar ...
Geochemistry and Flux of Terrigenous Dissolved Organic Matter to the Arctic Ocean
NASA Astrophysics Data System (ADS)
Spencer, R. G.; Mann, P. J.; Hernes, P. J.; Tank, S. E.; Striegl, R. G.; Dyda, R. Y.; Peterson, B. J.; McClelland, J. W.; Holmes, R. M.
2011-12-01
Rivers draining into the Arctic Ocean exhibit high concentrations of terrigenous dissolved organic carbon (DOC) and recent studies indicate that DOC export is changing due to climatic warming and alteration in permafrost condition. The fate of exported DOC in the Arctic Ocean is of key importance for understanding the regional carbon cycle and remains a point of discussion in the literature. As part of the Arctic Great Rivers Observatory (Arctic-GRO) project, samples were collected for DOC, chromophoric dissolved organic matter (CDOM) and lignin phenols from the Ob', Yenisey, Lena, Kolyma, Mackenzie and Yukon rivers in 2009 - 2010. DOC and lignin concentrations were elevated during the spring freshet and measurements related to DOC composition indicated an increasing contribution from terrestrial vascular plant sources at this time of year (e.g. lignin carbon-normalized yield, CDOM spectral slope, SUVA254, humic-like fluorescence). CDOM absorption was found to correlate strongly with both DOC (r2=0.83) and lignin concentration (r2=0.92) across the major arctic rivers. Utilizing these relationships we modeled loads for DOC and lignin export from high-resolution CDOM measurements (daily across the freshet) to derive improved flux estimates, particularly from the dynamic spring discharge maxima period when the majority of DOC and lignin export occurs. The new load estimates for DOC and lignin are higher than previous evaluations, emphasizing that if these are more representative of current arctic riverine export, terrigenous DOC is transiting through the Arctic Ocean at a faster rate than previously thought. It is apparent that higher resolution sampling of arctic rivers is exceptionally valuable with respect to deriving accurate fluxes and we highlight the potential of CDOM in this role for future studies and the applicability of in-situ CDOM sensors.
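A hedged sketch of the load-estimation idea described (a discrete-sample CDOM-DOC regression applied to daily CDOM and discharge, then integrated through the freshet); all numbers and variable names below are synthetic placeholders, not Arctic-GRO data.

```python
import numpy as np

# Hypothetical paired discrete samples: CDOM absorption coefficient and DOC concentration
a_cdom = np.array([2.1, 4.8, 9.5, 14.2, 20.3])      # a350 (m^-1)
doc    = np.array([3.0, 5.2, 9.1, 12.8, 17.9])      # DOC (mg C L^-1)
slope, intercept = np.polyfit(a_cdom, doc, 1)        # linear CDOM-DOC model

# Daily CDOM and discharge through a synthetic freshet
days    = np.arange(60)
a_daily = 5 + 15 * np.exp(-((days - 20) / 8.0) ** 2)        # m^-1
q_daily = 5e3 + 3e4 * np.exp(-((days - 18) / 7.0) ** 2)     # m^3 s^-1

doc_daily = slope * a_daily + intercept              # mg C L^-1 == g C m^-3
load_gC   = np.sum(doc_daily * q_daily * 86400.0)    # integrate daily fluxes (g C)
print(f"estimated DOC load over the period: {load_gC / 1e12:.2f} Tg C")
```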
A surprising dynamical mass for V773 Tau B
Boden, Andrew F.; Torres, Guillermo; Duchene, Gaspard; ...
2012-02-10
Here, we report on new high-resolution imaging and spectroscopy on the multiple T Tauri star system V773 Tau over the 2003-2009 period. With these data we derive relative astrometry, photometry between the A and B components, and radial velocity (RV) of the A-subsystem components. Combining these new data with previously published astrometry and RVs, we update the relative A-B orbit model. This updated orbit model, the known system distance, and A-subsystem parameters yield a dynamical mass for the B component for the first time. Remarkably, the derived B dynamical mass is in the range 1.7-3.0 M⊙. This is much higher than previous estimates and suggests that like A, B is also a multiple stellar system. Among these data, spatially resolved spectroscopy provides new insight into the nature of the B component. Similar to A, these near-IR spectra indicate that the dominant source in B is of mid-K spectral type. If B is in fact a multiple star system as suggested by the dynamical mass estimate, the simplest assumption is that B is composed of similar ~1.2 M⊙ pre-main-sequence stars in a close (<1 AU) binary system. This inference is supported by line-shape changes in near-IR spectroscopy of B, tentatively interpreted as changing RV among components in V773 Tau B. Relative photometry indicates that B is highly variable in the near-IR. The most likely explanation for this variability is circum-B material resulting in variable line-of-sight extinction. The distribution of this material must be significantly affected by both the putative B multiplicity and the A-B orbit.
Brookes, V J; Barry, S C; Hernández-Jover, M; Ward, M P
2017-04-01
The objective of this study was to trial point of truth calibration (POTCal) as a novel method for disease prioritisation. To illustrate the application of this method, we used a previously described case-study of prioritisation of exotic diseases for the pig industry in Australia. Disease scenarios were constructed from criteria which described potential impact and pig-producers were asked to score the importance of each scenario. POTCal was used to model participants' estimates of disease importance as a function of the criteria, to derive a predictive model to prioritise a range of exotic diseases. The best validation of producers' estimates was achieved using a model derived from all responses. The highest weighted criteria were attack rate, case fatality rate and market loss, and the highest priority diseases were the vesicular diseases followed by swine fevers and zoonotic encephalitides. Comparison of results with a previous study in which probabilistic inversion was used to prioritise diseases for the same group of producers highlighted differences between disease prioritisation methods. Overall, this study demonstrated that POTCal can be used for disease prioritisation. An advantage of POTCal is that valid models can be developed that reflect decision-makers' heuristics. Specifically, this evaluation of the use of POTCal in animal health illustrates how the judgements of participants can be incorporated into a decision-making process. Further research is needed to investigate the influence of scenarios presented to participants during POTCal evaluations, and the robustness of this approach applied to different disease issues (e.g. exotic versus endemic) and production types (e.g. intensive versus extensive). To our knowledge, this is the first report of the use of POTCal for disease prioritisation. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Mining User Dwell Time for Personalized Web Search Re-Ranking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Jiang, Hao; Lau, Francis
We propose a personalized re-ranking algorithm through mining user dwell times derived from a user's previous online reading or browsing activities. We acquire document-level user dwell times via a customized web browser, from which we then infer concept-word-level user dwell times in order to understand a user's personal interest. According to the estimated concept-word-level user dwell times, our algorithm can estimate a user's potential dwell time over a new document, based on which personalized webpage re-ranking can be carried out. We compare the rankings produced by our algorithm with rankings generated by popular commercial search engines and a recently proposed personalized ranking algorithm. The results clearly show the superiority of our method. In this paper, we propose a new personalized webpage ranking algorithm through mining dwell times of a user. We introduce a quantitative model to derive concept-word-level user dwell times from the observed document-level user dwell times. Once we have inferred a user's interest over the set of concept words the user has encountered in previous readings, we can then predict the user's potential dwell time over a new document. Such predicted user dwell time allows us to carry out personalized webpage re-ranking. To explore the effectiveness of our algorithm, we measured the performance of our algorithm under two conditions - one with a relatively limited amount of user dwell time data and the other with a doubled amount. Both evaluation cases put our algorithm for generating personalized webpage rankings to satisfy a user's personal preference ahead of those by Google, Yahoo!, and Bing, as well as a recent personalized webpage ranking algorithm.
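A hedged sketch of the general scheme: document-level dwell times are spread over concept words, summed word-level dwell predicts the dwell time on unseen documents, and results are re-ranked by that prediction. The even-split weighting used here is an assumption for illustration, not the paper's quantitative model.

```python
from collections import defaultdict

def learn_word_dwell(history):
    """Distribute each observed document dwell time evenly over its concept words."""
    word_dwell = defaultdict(float)
    for words, dwell_seconds in history:
        share = dwell_seconds / len(words)
        for w in set(words):
            word_dwell[w] += share
    return word_dwell

def predict_dwell(doc_words, word_dwell):
    """Predicted dwell for a new document: sum of learned per-word dwell times."""
    return sum(word_dwell.get(w, 0.0) for w in set(doc_words))

def rerank(results, word_dwell):
    """Re-rank search results (id, concept words) by predicted personal dwell time."""
    return sorted(results, key=lambda r: predict_dwell(r[1], word_dwell), reverse=True)

history = [(["lidar", "wind", "turbine"], 120.0), (["gibbon", "habitat"], 30.0)]
results = [("doc1", ["gibbon", "range"]), ("doc2", ["wind", "lidar", "uncertainty"])]
print(rerank(results, learn_word_dwell(history)))   # doc2 ranks first for this user
```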
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
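A toy comparison of simple random sampling with stratified sampling in which the strata come from a coarse hotspot indicator (standing in for the MODIS-derived deforestation hotspots); the block-level areas are synthetic, so only the relative behaviour of the two estimators is meaningful.

```python
import numpy as np

rng = np.random.default_rng(2)
n_blocks = 5000
hotspot = rng.random(n_blocks) < 0.2                     # coarse hotspot flag per block
# Fine-scale deforested area per block (km^2): hotspots deforest far more on average
deforest = np.where(hotspot, rng.gamma(4.0, 3.0, n_blocks), rng.gamma(0.5, 1.0, n_blocks))
true_total = deforest.sum()

def srs_total(n):                                        # simple random sampling estimator
    idx = rng.choice(n_blocks, n, replace=False)
    return n_blocks * deforest[idx].mean()

def stratified_total(n):                                 # proportional allocation by stratum
    total = 0.0
    for flag in (True, False):
        stratum = np.flatnonzero(hotspot == flag)
        n_h = max(2, round(n * len(stratum) / n_blocks))
        idx = rng.choice(stratum, n_h, replace=False)
        total += len(stratum) * deforest[idx].mean()
    return total

srs = [srs_total(200) for _ in range(500)]
strat = [stratified_total(200) for _ in range(500)]
print("true:", round(true_total), "SRS sd:", round(np.std(srs)), "stratified sd:", round(np.std(strat)))
```

Because the hotspot stratum removes most of the between-block variance, the stratified estimator shows a markedly smaller spread for the same sample size, which is the effect the abstract reports.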
Towards national-scale greenhouse gas emissions evaluation with robust uncertainty estimates
NASA Astrophysics Data System (ADS)
Rigby, Matthew; Swallow, Ben; Lunt, Mark; Manning, Alistair; Ganesan, Anita; Stavert, Ann; Stanley, Kieran; O'Doherty, Simon
2016-04-01
Through the Deriving Emissions related to Climate Change (DECC) network and the Greenhouse gAs Uk and Global Emissions (GAUGE) programme, the UK's greenhouse gases are now monitored by instruments mounted on telecommunications towers and churches, on a ferry that performs regular transects of the North Sea, on-board a research aircraft and from space. When combined with information from high-resolution chemical transport models such as the Met Office Numerical Atmospheric dispersion Modelling Environment (NAME), these measurements are allowing us to evaluate emissions more accurately than has previously been possible. However, it has long been appreciated that current methods for quantifying fluxes using atmospheric data suffer from uncertainties, primarily relating to the chemical transport model, that have been largely ignored to date. Here, we use novel model reduction techniques for quantifying the influence of a set of potential systematic model errors on the outcome of a national-scale inversion. This new technique has been incorporated into a hierarchical Bayesian framework, which can be shown to reduce the influence of subjective choices on the outcome of inverse modelling studies. Using estimates of the UK's methane emissions derived from DECC and GAUGE tall-tower measurements as a case study, we will show that such model systematic errors have the potential to significantly increase the uncertainty on national-scale emissions estimates. Therefore, we conclude that these factors must be incorporated in national emissions evaluation efforts, if they are to be credible.
Spectral Energy Distribution and Bolometric Luminosity of the Cool Brown Dwarf Gliese 229B
NASA Technical Reports Server (NTRS)
Matthews, K.; Nakajima, T.; Kulkarni, S. R.; Oppenheimer, B. R.
1996-01-01
Infrared broadband photometry of the cool brown dwarf Gliese 229B extending in wavelength from 0.8 to 10.5 micron is reported. These results are derived from both new data and reanalyzed, previously published data. Existing spectral data reported have been rereduced and recalibrated. The close proximity of the bright Gliese 229A to the dim Gliese 229B required the use of special techniques for the observations and also for the data analysis. We describe these procedures in detail. The observed luminosity between 0.8 and 10.5 micron is (4.9 +/- 0.6) x 10^-6 solar luminosity. The observed spectral energy distribution is in overall agreement with a dust-free model spectrum by Tsuji et al. for T_eff approximately equal to 900 K. If this model is used to derive the bolometric correction, the best estimate of the bolometric luminosity is 6.4 x 10^-6 solar luminosity and 50% of this luminosity lies between 1 and 2.5 microns. Our best estimate of the effective temperature is 900 K. From the observed near-infrared spectrum and the spectral energy distribution, the brightness temperatures (T_B) are estimated. The highest, T_B = 1640 K, is seen at the peak of the J band spectrum, while the lowest, T_B ≤ 600 K, is at 3.4 microns, which corresponds to the location of the fundamental methane band.
Roberts-Ashby, Tina; Brandon N. Ashby,
2016-01-01
This paper demonstrates geospatial modification of the USGS methodology for assessing geologic CO2 storage resources, and was applied to the Pre-Punta Gorda Composite and Dollar Bay reservoirs of the South Florida Basin. The study provides detailed evaluation of porous intervals within these reservoirs and utilizes GIS to evaluate the potential spatial distribution of reservoir parameters and volume of CO2 that can be stored. This study also shows that incorporating spatial variation of parameters using detailed and robust datasets may improve estimates of storage resources when compared to applying uniform values across the study area derived from small datasets, like many assessment methodologies. Geospatially derived estimates of storage resources presented here (Pre-Punta Gorda Composite = 105,570 MtCO2; Dollar Bay = 24,760 MtCO2) were greater than previous assessments, which was largely attributed to the fact that detailed evaluation of these reservoirs resulted in higher estimates of porosity and net-porous thickness, and areas of high porosity and thick net-porous intervals were incorporated into the model, likely increasing the calculated volume of storage space available for CO2 sequestration. The geospatial method for evaluating CO2 storage resources also provides the ability to identify areas that potentially contain higher volumes of storage resources, as well as areas that might be less favorable.
Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field
NASA Astrophysics Data System (ADS)
Constable, C.; Johnson, C. L.
2009-05-01
We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first temporal sampling question we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the time-averaged geomagnetic field (TAF) and its paleosecular variation (PSV). The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ± 20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
Can high resolution topographic surveys provide reliable grain size estimates?
NASA Astrophysics Data System (ADS)
Pearson, Eleanor; Smith, Mark; Klaar, Megan; Brown, Lee
2017-04-01
High resolution topographic surveys contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-grid scale topographic variability (or 'surface roughness') to particle grain size by deriving empirical relationships between the two. Such relationships would permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing data to drive distributed hydraulic models and revolutionising monitoring of river restoration projects. However, comparison of previous roughness-grain-size relationships shows substantial variability between field sites, and these relationships do not take into account differences in patch-scale facies. This study explains this variability by identifying the factors that influence roughness-grain-size relationships. Using 275 laboratory and field-based Structure-from-Motion (SfM) surveys, we investigate the influence of: inherent survey error; irregularity of natural gravels; particle shape; grain packing structure; sorting; and form roughness on roughness-grain-size relationships. A suite of empirical relationships is presented in the form of a decision tree which improves estimations of grain size. Results indicate that the survey technique itself is capable of providing accurate grain size estimates. By accounting for differences in patch facies, R2 was seen to improve from 0.769 to R2 > 0.9 for certain facies. However, at present, the method is unsuitable for poorly sorted gravel patches. In future, a combination of a surface roughness proxy with photosieving techniques using SfM-derived orthophotos may offer improvements on using either technique individually.
Triplet ultrasound growth parameters.
Vora, Neeta L; Ruthazer, Robin; House, Michael; Chelmow, David
2006-03-01
Our objective was to create ultrasound growth curves for normal growth of fetal triplets using statistical methodology that properly accounts for similarities of growth of fetuses within a mother as well as repeated measurements over time for each fetus. In this longitudinal study, all triplet pregnancies managed at a single tertiary center from 1992-2004 were reviewed. Fetuses with major anomalies, prior selective reduction, or fetal demise were excluded. Data from early and late gestation in which there were fewer than 30 fetal measurements available for analysis were excluded. We used multilevel models to account for variation in growth within a single fetus over time, variations in growth between multiple fetuses within a single mother, and variations in fetal growth between mothers. Medians (50th), 10th, and 90th percentiles were estimated by the creation of multiple quadratic growth models from bootstrap samples, adapting a previously published method to compute prediction intervals. Estimated fetal weight was derived from Hadlock's formula. One hundred fifty triplet pregnancies were identified. Twenty-seven pregnancies were excluded for the following reasons: missing records (23), fetal demise (3), and fetal anomaly (1). The study group consisted of 123 pregnancies. The gestational age range was restricted to 14-34 weeks. Figures and tables were developed showing medians, 10th and 90th percentiles for estimated fetal weight, femur length, biparietal diameter, abdominal circumference, and head circumference. Growth curves for triplet pregnancies were derived. These may be useful for identification of abnormal growth in triplet fetuses. Level of evidence: III.
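A hedged sketch of the bootstrap-plus-quadratic-model idea described above (resample whole pregnancies, refit a quadratic growth model, and read percentile curves off the replicates); the synthetic growth law and sample sizes are placeholders, and the toy coefficients are not Hadlock's formula.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for triplet scans: gestational age (wk) and log10 EFW (g) per fetus
n_preg, scans = 120, 4
ga = rng.uniform(14, 34, (n_preg, scans))
log_efw = 0.515 + 0.1165 * ga - 0.001 * ga**2 + rng.normal(0, 0.05, ga.shape)  # toy growth law

ga_grid = np.arange(14, 35)
curves = []
for _ in range(500):                               # bootstrap over whole pregnancies
    boot = rng.integers(0, n_preg, n_preg)
    x, y = ga[boot].ravel(), log_efw[boot].ravel()
    coef = np.polyfit(x, y, 2)                     # quadratic growth model
    resid = y - np.polyval(coef, x)
    # adding a resampled residual lets the spread approximate a prediction interval
    curves.append(np.polyval(coef, ga_grid) + rng.choice(resid))

p10, p50, p90 = np.percentile(10 ** np.array(curves), [10, 50, 90], axis=0)
print(np.column_stack([ga_grid, p10.round(), p50.round(), p90.round()])[:5])
```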
Examining the role of land motion in estimating altimeter system drifts
NASA Astrophysics Data System (ADS)
Leuliette, E. W.; Plagge, A. M.
2016-12-01
With the operational onset of Jason-3 and Sentinel-3 missions, the determination of mission-specific altimeter bias drift via the global tide gauge network is more crucial than ever. Here we extend previously presented work comparing the effect of vertical land motion (VLM) at tide gauges on derived drift for the combined TOPEX/Jason-1/Jason-2 dataset with the addition of Jason-3, and the combined Envisat/AltiKa record, as well as Sentinel-3 as data become available. Estimated drifts for each mission are considered using seven VLM estimations: (1) GPS-based methodology by King et al., 2012 [updated] at University of Tasmania; (2) GPS time series produced by JPL (http://sideshow.jpl.nasa.gov/post/series.html); the Université de La Rochelle's (3) ULR5 (Santamaria-Gomez 2012) and (4) ULR6; (5) GPS time series produced at the Nevada Geodetic Laboratory, and two versions using glacial isostatic adjustment: (6) those by Peltier et al. (2015) and (7) those by A, Wahr, and Zhong (2013). The drift estimates from the combined TOPEX/Jason dataset vary by 0.7 mm/year depending on the VLM estimate. The combined Envisat/AltiKa estimated drifts vary slightly less, more on the order of 0.5 mm/yr. In addition, we demonstrate the sensitivity of the drift estimates to tide gauge selection.
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
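A simulation sketch of ML time-of-arrival estimation from photon-counting data: arrival times are drawn from an inhomogeneous Poisson process (a shifted pulse plus flat background) and the arrival time is recovered by a grid search over the Poisson log-likelihood; the pulse shape, rates, and grid are illustrative, not the parameters analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

T = 10.0                      # observation window
true_tau = 3.7                # true pulse arrival time
n_s, n_b = 40.0, 5.0          # mean signal and background photon counts

def intensity(t, tau):
    """Gaussian pulse of width 0.3 centred at tau plus a flat background rate."""
    pulse = np.exp(-0.5 * ((t - tau) / 0.3) ** 2) / (0.3 * np.sqrt(2 * np.pi))
    return n_s * pulse + n_b / T

# Simulate photon arrivals by thinning a homogeneous Poisson process
lam_max = intensity(true_tau, true_tau)
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
arrivals = cand[rng.random(cand.size) < intensity(cand, true_tau) / lam_max]

# ML estimate: the integral of the intensity over [0, T] is nearly independent of tau,
# so maximizing the log-likelihood reduces to maximizing sum_i log lambda(t_i; tau)
taus = np.linspace(1, 9, 4001)
loglik = [np.sum(np.log(intensity(arrivals, tau))) for tau in taus]
print("true tau:", true_tau, " ML estimate:", taus[int(np.argmax(loglik))])
```

Repeating the simulation at decreasing signal counts reproduces the threshold behaviour described above, where the ML estimate degrades toward a random guess.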
NASA Astrophysics Data System (ADS)
Ransom, Katherine M.; Bell, Andrew M.; Barber, Quinn E.; Kourakos, George; Harter, Thomas
2018-05-01
This study is focused on nitrogen loading from a wide variety of crop and land-use types in the Central Valley, California, USA, an intensively farmed region with high agricultural crop diversity. Nitrogen loading rates for several crop types have been measured based on field-scale experiments, and recent research has calculated nitrogen loading rates for crops throughout the Central Valley based on a mass balance approach. However, research is lacking to infer nitrogen loading rates for the broad diversity of crop and land-use types directly from groundwater nitrate measurements. Relating groundwater nitrate measurements to specific crops must account for the uncertainty about and multiplicity in contributing crops (and other land uses) to individual well measurements, and for the variability of nitrogen loading within farms and from farm to farm for the same crop type. In this study, we developed a Bayesian regression model that allowed us to estimate land-use-specific groundwater nitrogen loading rate probability distributions for 15 crop and land-use groups based on a database of recent nitrate measurements from 2149 private wells in the Central Valley. The water and natural, rice, and alfalfa and pasture groups had the lowest median estimated nitrogen loading rates, each with a median estimate below 5 kg N ha-1 yr-1. Confined animal feeding operations (dairies) and citrus and subtropical crops had the greatest median estimated nitrogen loading rates at approximately 269 and 65 kg N ha-1 yr-1, respectively. In general, our probability-based estimates compare favorably with previous direct measurements and with mass-balance-based estimates of nitrogen loading. Nitrogen mass-balance-based estimates are larger than our groundwater nitrate derived estimates for manured and nonmanured forage, nuts, cotton, tree fruit, and rice crops. These discrepancies are thought to be due to groundwater age mixing, dilution from infiltrating river water, or denitrification between the time when nitrogen leaves the root zone (point of reference for mass-balance-derived loading) and the time and location of groundwater measurement.
Estimates of N2O, NO and NH3 Emissions From Croplands in East, Southeast and South Asia
NASA Astrophysics Data System (ADS)
Yan, X.; Ohara, T.; Akimoto, H.
2002-12-01
Agricultural activities have greatly altered the global nitrogen cycle and produced nitrogenous gases of environmental significance. More than half of the global chemical nitrogen fertilizer is used for crop production in East, Southeast and South Asia, where rice is the dietary staple. Emissions of nitrous oxide (N2O), nitric oxide (NO) and ammonia (NH3) from croplands in this region were estimated by considering both background emission and emissions resulting from nitrogen added to croplands, including chemical nitrogen, animal manure used as fertilizer, biologically fixed nitrogen and nitrogen in crop residue returned to the field. Background emission fluxes of N2O and NO from croplands were estimated at 1.16 and 0.52 kg N ha-1 yr-1, respectively. A fertilizer-induced N2O emission factor of 1.25% for upland was adopted from IPCC guidelines, and a factor of 0.25% was derived for paddy fields from measurements. Total N2O emission from croplands in the region was estimated at 1.16 Tg N yr-1, with 41% contributed by background emission, which was not considered in previous global estimates. However, the average fertilizer-induced N2O emission factor is only 0.93%, lower than the default IPCC value of 1.25% due to the low emission factor for paddy fields. A fertilizer-induced NO emission factor of 0.66% for upland was derived from field measurements, and a factor of 0.13% was assumed for paddy fields. Total NO emission was 572 Gg N yr-1 in the region, with 38% due to background emission. The average fertilizer-induced NO emission factor was 0.48%. Extrapolating this estimate to the global scale results in a global NO emission from croplands of 1.6 Tg N yr-1, smaller than other global estimates. Total NH3 emission was estimated at 11.8 Tg N yr-1. The use of urea and ammonium bicarbonate and the cultivation of rice lead to a high average NH3 loss rate of chemical fertilizer in the region. Emissions were distributed on a 0.5° grid using a global land-use database.
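The emission bookkeeping described above reduces to a background flux times area plus a fertilizer-induced emission factor times nitrogen input. The short sketch below works through that arithmetic using the abstract's background flux and emission factors; the cropland area and nitrogen input are hypothetical.

```python
# Worked sketch of the emission bookkeeping above (illustrative inputs only;
# the area and fertilizer nitrogen amounts are hypothetical placeholders).
def n2o_emission_kg(area_ha, n_input_kg, background_flux=1.16, ef=0.0125):
    """Background emission plus fertilizer-induced emission.

    background_flux: kg N2O-N per ha per yr (cropland value from the abstract)
    ef: fertilizer-induced emission factor (1.25% upland, 0.25% paddy)
    """
    return background_flux * area_ha + ef * n_input_kg

upland = n2o_emission_kg(area_ha=1.0e6, n_input_kg=150.0e6, ef=0.0125)
paddy = n2o_emission_kg(area_ha=1.0e6, n_input_kg=150.0e6, ef=0.0025)
print(f"upland: {upland / 1e6:.2f} Gg N2O-N/yr, paddy: {paddy / 1e6:.2f} Gg N2O-N/yr")
```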
Influence of sectioning location on age estimates from common carp dorsal spines
Watkins, Carson J.; Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.
2015-01-01
Dorsal spines have been shown to provide precise age estimates for Common Carp Cyprinus carpio and are commonly used by management agencies to gain information on Common Carp populations. However, no previous studies have evaluated variation in the precision of age estimates obtained from different sectioning locations along Common Carp dorsal spines. We evaluated the precision, relative readability, and distribution of age estimates obtained from various sectioning locations along Common Carp dorsal spines. Dorsal spines from 192 Common Carp were sectioned at the base (section 1), immediately distal to the basal section (section 2), and at 25% (section 3), 50% (section 4), and 75% (section 5) of the total length of the dorsal spine. The exact agreement and within-1-year agreement among readers were highest and the coefficient of variation lowest for section 2. In general, age estimates derived from sections 2 and 3 had similar age distributions and displayed the highest concordance in age estimates with section 1. Our results indicate that sections taken at ≤ 25% of the total length of the dorsal spine can be easily interpreted and provide precise estimates of Common Carp age. The greater consistency in age estimates obtained from section 2 indicates that by using a standard sectioning location, fisheries scientists can expect age-based estimates of population metrics to be more comparable and thus more useful for understanding Common Carp population dynamics.
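For readers unfamiliar with the precision metrics named above, the sketch below computes percent exact agreement, within-1-year agreement, and a between-reader coefficient of variation for a pair of hypothetical readers; the age estimates are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of the precision metrics used in ageing studies:
# percent exact agreement, within-1-year agreement, and per-fish CV
# (SD of the reads divided by their mean) averaged over fish.
import numpy as np

reader1 = np.array([3, 5, 7, 4, 6, 8, 5, 9])    # made-up age estimates
reader2 = np.array([3, 6, 7, 4, 5, 8, 5, 10])

exact = np.mean(reader1 == reader2) * 100
within1 = np.mean(np.abs(reader1 - reader2) <= 1) * 100

pair_mean = (reader1 + reader2) / 2.0
pair_sd = np.std(np.vstack([reader1, reader2]), axis=0, ddof=1)
cv = np.mean(pair_sd / pair_mean) * 100

print(f"exact agreement: {exact:.1f}%, within 1 yr: {within1:.1f}%, CV: {cv:.1f}%")
```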
NASA Astrophysics Data System (ADS)
Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.
2017-09-01
The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
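One plausible way to roll tissue-specific uncertainties into a single site-level number, consistent with the weighting described above but not necessarily the paper's exact procedure, is a quadrature combination weighted by an assumed fraction of the beam path in each tissue group, as sketched below with made-up fractions.

```python
# Illustrative combination of tissue-specific SPR uncertainties into one
# site-level value. The weighting scheme (quadrature, weighted by an assumed
# path fraction per tissue group) is a plausible sketch, not necessarily the
# paper's exact procedure; the site fractions are made up.
import math

sigma = {"lung": 3.8, "soft": 1.2, "bone": 2.0}   # 1-sigma SPR uncertainty, % (from abstract)

def composite_sigma(fractions):
    """Quadrature combination of per-tissue uncertainties with given weights."""
    return math.sqrt(sum((fractions[t] * sigma[t]) ** 2 for t in sigma))

sites = {
    "prostate": {"lung": 0.00, "soft": 0.90, "bone": 0.10},
    "lung": {"lung": 0.40, "soft": 0.45, "bone": 0.15},
    "head_neck": {"lung": 0.00, "soft": 0.70, "bone": 0.30},
}
for site, frac in sites.items():
    print(f"{site:10s} composite 1-sigma: {composite_sigma(frac):.2f}%")
```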
Revised estimates for direct-effect recreational jobs in the interior Columbia River basin.
Lisa K. Crone; Richard W. Haynes
1999-01-01
This paper reviews the methodology used to derive the original estimates for direct employment associated with recreation on Federal lands in the interior Columbia River basin (the basin), and details the changes in methodology and data used to derive new estimates. The new analysis resulted in an estimate of 77,655 direct-effect jobs associated with recreational...
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
The overall remotely piloted drop model operation, descriptions, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods are discussed. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. It is indicated that the variations of the estimates with angle of attack are consistent for most of the static derivatives, and the effects of configuration modifications to the model were apparent in the static derivative estimates.
The Impact of Water Loading on Estimates of Postglacial Decay Times in Hudson Bay
NASA Astrophysics Data System (ADS)
Han, H. K.; Gomez, N. A.
2016-12-01
Ongoing glacial isostatic adjustment (GIA) due to surface loading (ice and water) variations since the Last Glacial Maximum (LGM) has been contributing to sea level changes globally throughout the Holocene, especially in regions like Canada that were heavily glaciated during the LGM. The spatial and temporal distribution of GIA and relative sea level change are attributed to the ice history and the rheological structure of the solid Earth, both of which are uncertain. It has been shown that relative sea level curves in previously glaciated regions follow an exponential-like form, and the postglacial decay times associated with that form have weak sensitivity to the details of the ice loading history (Andrews 1970, Walcott 1980, Mitrovica & Peltier 1995). Postglacial decay time estimates may therefore be used to constrain the Earth's structure and improve GIA predictions. However, estimates of decay times in Hudson Bay in the literature differ significantly due to a number of sources of uncertainty and bias (Mitrovica et al. 2000). Previous decay time analyses have not considered the potential bias that surface loading associated with Holocene sea level changes can introduce in decay time estimates derived from nearby relative sea level observations. We explore the spatial patterns of postglacial decay time predictions in previously glaciated regions, and their sensitivity to ice and water loading history. We compute postglacial sea level changes over the last deglaciation from 21 ka to the present associated with the ICE5G (Peltier, 2004) and ICE6G (Argus et al. 2014, Peltier et al. 2015) ice history models. We fit exponential curves to the modeled relative sea level changes, and compute maps of postglacial decay time predictions across North America and the Arctic. In addition, we decompose the modeled relative sea level changes into contributions from water and ice loading effects, and compute the impact of water loading redistribution since the LGM on present-day decay times. We show that Holocene water loading in Hudson Bay may introduce significant bias in decay time estimates and we highlight locations where biases are minimized.
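The decay-time extraction described above amounts to fitting an exponential to a relative sea level (RSL) history. The sketch below fits RSL(t) = A exp(-t/tau) + c to a synthetic, Hudson Bay-like emergence curve with scipy; the curve, noise level, and parameter values are assumptions, not output from the study's GIA models.

```python
# Minimal sketch of extracting a postglacial decay time by fitting an
# exponential to a synthetic relative sea level history (simplified form;
# the amplitude, decay time, and noise level below are assumptions).
import numpy as np
from scipy.optimize import curve_fit

def rsl_model(t_kyr, amplitude, tau_kyr, offset):
    """RSL(t) = A * exp(-t / tau) + c, with t the time elapsed since 8 ka BP."""
    return amplitude * np.exp(-t_kyr / tau_kyr) + offset

t = np.linspace(0.0, 8.0, 40)                 # kyr elapsed since 8 ka BP
rsl_true = rsl_model(t, 120.0, 3.4, 0.0)      # synthetic Hudson Bay-like emergence
rsl_obs = rsl_true + np.random.default_rng(2).normal(0.0, 2.0, t.size)

popt, pcov = curve_fit(rsl_model, t, rsl_obs, p0=(100.0, 2.0, 0.0))
tau_hat, tau_err = popt[1], np.sqrt(pcov[1, 1])
print(f"decay time: {tau_hat:.2f} +/- {tau_err:.2f} kyr")
```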
Arita, Minetaro; Zhu, Shuang-Li; Yoshida, Hiromu; Yoneyama, Tetsuo; Miyamura, Tatsuo; Shimizu, Hiroyuki
2005-01-01
Outbreaks of poliomyelitis caused by circulating vaccine-derived polioviruses (cVDPVs) have been reported in areas where indigenous wild polioviruses (PVs) were eliminated by vaccination. Most of these cVDPVs contained unidentified sequences in the nonstructural protein coding region which were considered to be derived from human enterovirus species C (HEV-C) by recombination. In this study, we report isolation of a Sabin 3-derived PV recombinant (Cambodia-02) from an acute flaccid paralysis (AFP) case in Cambodia in 2002. We attempted to identify the putative recombination counterpart of Cambodia-02 by sequence analysis of nonpolio enterovirus isolates from AFP cases in Cambodia from 1999 to 2003. Based on the previously estimated evolution rates of PVs, the recombination event resulting in Cambodia-02 was estimated to have occurred within 6 months after the administration of oral PV vaccine (99.3% nucleotide identity in the VP1 region). By phylogenetic analysis of the HEV-C isolates in 2002, the 2BC and the 3Dpol coding regions of Cambodia-02 were grouped into the genetic cluster of indigenous coxsackie A virus type 17 (CAV17) (the highest [87.1%] nucleotide identity) and the cluster of indigenous CAV13-CAV18 (the highest [94.9%] nucleotide identity), respectively. CAV13-CAV18 and CAV17 were the dominant HEV-C serotypes in 2002 but not in 2001 and in 2003. We found a putative recombination between CAV13-CAV18 and CAV17 in the 3CDpro coding region of a CAV17 isolate. These results suggested that a part of the 3Dpol coding region of PV3 (Cambodia-02) was derived from a HEV-C strain genetically related to indigenous CAV13-CAV18 strains in 2002 in Cambodia. PMID:16188967
Predicting ionizing radiation exposure using biochemically-inspired genomic machine learning.
Zhao, Jonathan Z L; Mucaki, Eliseos J; Rogan, Peter K
2018-01-01
Background: Gene signatures derived from transcriptomic data using machine learning methods have shown promise for biodosimetry testing. These signatures may not be sufficiently robust for large scale testing, as their performance has not been adequately validated on external, independent datasets. The present study develops human and murine signatures with biochemically-inspired machine learning that are strictly validated using k-fold and traditional approaches. Methods: Gene Expression Omnibus (GEO) datasets of exposed human and murine lymphocytes were preprocessed via nearest neighbor imputation, and the expression of genes implicated in the literature as responsive to radiation exposure (n=998) was then ranked by Minimum Redundancy Maximum Relevance (mRMR). Optimal signatures were derived by backward, complete, and forward sequential feature selection using Support Vector Machines (SVM), and validated using k-fold or traditional validation on independent datasets. Results: The best human signatures we derived exhibit k-fold validation accuracies of up to 98% (DDB2, PRKDC, TPP2, PTPRE, and GADD45A) when validated over 209 samples and traditional validation accuracies of up to 92% (DDB2, CD8A, TALDO1, PCNA, EIF4G2, LCN2, CDKN1A, PRKCH, ENO1, and PPM1D) when validated over 85 samples. Some human signatures are specific enough to differentiate between chemotherapy and radiotherapy. Certain multi-class murine signatures have sufficient granularity in dose estimation to inform eligibility for cytokine therapy (assuming these signatures could be translated to humans). We compiled a list of the most frequently appearing genes in the top 20 human and mouse signatures. More frequently appearing genes among an ensemble of signatures may indicate greater impact of these genes on the performance of individual signatures. Several genes in the signatures we derived are present in previously proposed signatures. Conclusions: Gene signatures for ionizing radiation exposure derived by machine learning have low error rates in externally validated, independent datasets, and exhibit high specificity and granularity for dose estimation.
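A hedged sketch of this kind of signature-building pipeline is given below using scikit-learn stand-ins: mutual-information ranking in place of mRMR, followed by forward sequential feature selection wrapped around a linear SVM, all on synthetic data. Nothing here reproduces the study's datasets, gene lists, or exact feature-selection variants.

```python
# Sketch of a signature-building pipeline on synthetic data, assuming
# scikit-learn stand-ins: mutual information as a simple substitute for mRMR,
# then forward sequential feature selection around a linear SVM.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_samples, n_genes = 200, 100
X = rng.normal(size=(n_samples, n_genes))
y = rng.integers(0, 2, n_samples)          # 0 = unexposed, 1 = irradiated (synthetic)
X[y == 1, :5] += 1.5                       # make the first 5 "genes" informative

# Rank genes by mutual information and keep the top 20 (mRMR stand-in).
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:20]

# Forward sequential feature selection with an SVM classifier.
svm = SVC(kernel="linear", C=1.0)
sfs = SequentialFeatureSelector(svm, n_features_to_select=5, direction="forward", cv=5)
sfs.fit(X[:, top], y)
selected = top[sfs.get_support()]

acc = cross_val_score(svm, X[:, selected], y, cv=5).mean()
print("selected feature indices:", selected, f"cv accuracy: {acc:.2f}")
```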
ESTABLISHING α Oph AS A PROTOTYPE ROTATOR: IMPROVED ASTROMETRIC ORBIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinkley, Sasha; Hillenbrand, Lynne; Crepp, Justin R.
2011-01-10
The nearby star α Oph (Ras Alhague) is a rapidly rotating A5IV star spinning at ~89% of its breakup velocity. This system has been imaged extensively by interferometric techniques, giving a precise geometric model of the star's oblateness and the resulting temperature variation on the stellar surface. Fortuitously, α Oph has a previously known stellar companion, and characterization of the orbit provides an independent, dynamically based check of both the host star and the companion mass. Such measurements are crucial to constrain models of such rapidly rotating stars. In this study, we combine eight years of adaptive optics imaging data from the Palomar, AEOS, and CFHT telescopes to derive an improved, astrometric characterization of the companion orbit. We also use photometry from these observations to derive a model-based estimate of the companion mass. A fit was performed on the photocenter motion of this system to extract a component mass ratio. We find masses of 2.40 (+0.23/-0.37) M_sun and 0.85 (+0.06/-0.04) M_sun for α Oph A and α Oph B, respectively. Previous orbital studies found a mass for this system that was too high to be consistent with stellar evolutionary calculations. Our measurements of the host star mass are more consistent with these evolutionary calculations, but with slightly higher uncertainties. In addition to the dynamically derived masses, we use IJHK photometry to derive a model-based mass for α Oph B of 0.77 ± 0.05 M_sun, marginally consistent with the dynamical masses derived from our orbit. Our model fits predict a periastron passage on 2012 April 19, with the two components having a 50 mas separation from 2012 March to May. A modest amount of interferometric and radial velocity data during this period could provide a mass determination of this star at the few percent level.
Bult, Johannes H F; van Putten, Bram; Schifferstein, Hendrik N J; Roozen, Jacques P; Voragen, Alphons G J; Kroeze, Jan H A
2004-10-01
In continuous vigilance tasks, the number of coincident panel responses to stimuli provides an index of stimulus detectability. To determine whether this number is due to chance, panel noise levels have been approximated by the maximum coincidence level obtained in stimulus-free conditions. This study proposes an alternative method by which to assess noise levels, derived from queuing system theory (QST). Instead of critical coincidence levels, QST modeling estimates the duration of coinciding responses in the absence of stimuli. The proposed method has the advantage over previous approaches that it yields more reliable noise estimates and allows for statistical testing. The method was applied in an olfactory detection experiment using 16 panelists in stimulus-present and stimulus-free conditions. We propose that QST may be used as an alternative to signal detection theory for analyzing data from continuous vigilance tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namikawa, Toshiya
We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., the inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, the CMB internal delensing suffers from substantial biases due to correlations between the observed CMB maps to be delensed and those used for reconstructing a lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.
Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.
Schroder, Kai; Zinke, Arno; Klein, Reinhard
2015-02-01
Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer-aided design of cloth. Previous methods produce highly realistic images; however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.
Chromium Diffusion Doping on ZnSe Crystals
NASA Technical Reports Server (NTRS)
Journigan, Troy D.; Chen, K.-T.; Chen, H.; Burger, A.; Schaffers, K.; Page, R. H.; Payne, S. A.
1997-01-01
Chromium-doped zinc selenide crystals have recently been demonstrated to be a promising material for near-IR, room-temperature tunable lasers with an emission range of 2-3 micrometers. In this study a new diffusion doping process has been developed for incorporation of Cr(2+) ions into ZnSe wafers. This process has been successfully performed under isothermal conditions, at temperatures above 800 C. Concentrations in excess of 10^19 Cr(2+) ions/cu cm, an order of magnitude larger than previously reported in melt-grown ZnSe material, have been obtained by diffusion doping, as estimated from optical absorption measurements. The diffusivity was estimated to be about 10^-8 sq cm/sec using a thin film diffusion model. Resistivity was derived from current-voltage measurements and was in the range of 10^13 to 10^16 ohm-cm. The emission spectra and temperature-dependent lifetime data will also be presented and discussed.
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
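As one concrete illustration of the geometric relations mentioned above, the sketch below computes the closest distance to the rupture plane (Rrup) from the Joyner-Boore distance (Rjb) and the depth to the top of rupture (Ztor) for the simplest case of a vertical fault; the dipping-fault relations in the paper are more involved and are not reproduced here.

```python
# Illustrative special case of the distance-measure geometry: for a vertical
# fault, Rrup follows from Rjb and Ztor by the Pythagorean relation. This is
# only the simplest configuration, not the paper's full set of equations.
import math

def rrup_vertical_fault(rjb_km: float, ztor_km: float) -> float:
    """Rrup for a vertical fault: sqrt(Rjb^2 + Ztor^2)."""
    return math.hypot(rjb_km, ztor_km)

# Example: a site 10 km (Rjb) from a vertical rupture whose top is 3 km deep.
print(f"Rrup = {rrup_vertical_fault(10.0, 3.0):.2f} km")
```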
Correspondence regarding Zhong et al., BMC Bioinformatics 2013 Mar 7;14:89.
Kuhn, Alexandre
2014-11-28
Computational expression deconvolution aims to estimate the contribution of individual cell populations to expression profiles measured in samples of heterogeneous composition. Zhong et al. recently proposed the Digital Sorting Algorithm (DSA; BMC Bioinformatics 2013 Mar 7;14:89) and showed that they could accurately estimate population-specific expression levels and expression differences between two populations. They compared DSA with Population-Specific Expression Analysis (PSEA), a previous deconvolution method that we developed to detect expression changes occurring within the same population between two conditions (e.g. disease versus non-disease). However, Zhong et al. compared PSEA-derived specific expression levels across different cell populations. Specific expression levels obtained with PSEA cannot be directly compared across different populations as they are on a relative scale. They are accurate, as we demonstrate by deconvolving the same dataset used by Zhong et al., and, importantly, allow for comparison of population-specific expression across conditions.
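The linear mixing model that underlies this kind of deconvolution can be written as bulk expression equals population proportions times population-specific expression. The sketch below recovers population-specific expression for one gene by least squares on synthetic data; it illustrates the general idea rather than the specific PSEA or DSA implementations.

```python
# Minimal sketch of the linear mixing model behind expression deconvolution:
# bulk expression of a gene = population proportions @ population-specific
# expression, solved here by ordinary least squares on synthetic values.
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_pops = 30, 3
proportions = rng.dirichlet(np.ones(n_pops), size=n_samples)   # rows sum to 1
true_specific = np.array([10.0, 4.0, 1.0])                      # per-population expression

bulk = proportions @ true_specific + rng.normal(0.0, 0.2, n_samples)

est, *_ = np.linalg.lstsq(proportions, bulk, rcond=None)
print("estimated population-specific expression:", np.round(est, 2))
```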
Hot moments in spawning aggregations: implications for ecosystem-scale nutrient cycling
NASA Astrophysics Data System (ADS)
Archer, Stephanie K.; Allgeier, Jacob E.; Semmens, Brice X.; Heppell, Scott A.; Pattengill-Semmens, Christy V.; Rosemond, Amy D.; Bush, Phillippe G.; McCoy, Croy M.; Johnson, Bradley C.; Layman, Craig A.
2015-03-01
Biogeochemical hot moments occur when a temporary increase in availability of one or more limiting reactants results in elevated rates of biogeochemical reactions. Many marine fish form transient spawning aggregations, temporarily increasing their local abundance and thus nutrients supplied via excretion at the aggregation site. In this way, nutrients released by aggregating fish could create a biogeochemical hot moment. Using a combination of empirical and modeling approaches, we estimate nitrogen and phosphorus supplied by aggregating Nassau grouper (Epinephelus striatus). Data suggest aggregating grouper supply up to an order-of-magnitude more nitrogen and phosphorus than daily consumer-derived nutrient supply on coral reefs without aggregating fish. Comparing current and historic aggregation-level excretion estimates shows that overfishing reduced nutrients supplied by aggregating fish by up to 87%. Our study illustrates a previously unrecognized ecosystem viewpoint regarding fish spawning aggregations and provides an additional perspective on the repercussions of their overexploitation.
A mechanism for crustal recycling on Venus
NASA Technical Reports Server (NTRS)
Lenardic, A.; Kaula, W. M.; Bindschadler, D. L.
1993-01-01
Entrainment of lower crust by convective mantle downflows is proposed as a crustal recycling mechanism on Venus. The mechanism is characterized by thin sheets of crust being pulled into the mantle by viscous flow stresses. Finite element models of crust/mantle interaction are used to explore tectonic conditions under which crustal entrainment may occur. The recycling scenarios suggested by the numerical models are analogous to previously studied problems for which analytic and experimental relationships assessing entrainment rates have been derived. We use these relationships to estimate crustal recycling rates on Venus. Estimated rates are largely determined by (1) strain rate at the crust/mantle interface (higher strain rate leads to greater entrainment); and (2) effective viscosity of the lower crust (viscosity closer to that of mantle lithosphere leads to greater entrainment). Reasonable geologic strain rates and available crustal flow laws suggest entrainment can recycle approximately 1 cu km of crust per year under favorable conditions.
Estimation of daily mean air temperature from satellite derived radiometric data
NASA Technical Reports Server (NTRS)
Phinney, D.
1976-01-01
The Screwworm Eradication Data System (SEDS) at JSC utilizes satellite derived estimates of daily mean air temperature (DMAT) to monitor the effect of temperature on screwworm populations. The performance of the SEDS screwworm growth potential predictions depends in large part upon the accuracy of the DMAT estimates.
High Heat Flow from Enceladus' South Polar Region Measured using 10-600 cm^-1 Cassini/CIRS Data
NASA Technical Reports Server (NTRS)
Howett, C. J. A.; Spencer, J. R.; Pearl, J.; Segura, M.
2011-01-01
Analysis of 2008 Cassini Composite Infrared Spectrometer (CIRS) 10 to 600 cm^-1 thermal emission spectra of Enceladus shows that for reasonable assumptions about the spatial distribution of the emission and the thermophysical properties of the solar-heated background surface, which are supported by CIRS observations of background temperatures at the edge of the active region, the endogenic power of Enceladus' south polar terrain is 15.8 +/- 3.1 GW. This is significantly higher than the previous estimate of 5.8 +/- 1.9 GW. The new value represents an improvement over the previous one, which was derived from higher wave number data (600 to 1100 cm^-1) and was thus only sensitive to high-temperature emission. The mechanism capable of producing such a high endogenic power remains a mystery and challenges the current models of proposed heat production.
High carbon losses due to recent cropland expansion in the United States
NASA Astrophysics Data System (ADS)
Spawn, S.; Lark, T.; Gibbs, H.
2017-12-01
Land conversion for agriculture in the United States has reached record highs in recent years. From 2008 to 2012 nearly 30,000 square kilometers of previously un-cultivated land were converted to agricultural land use with much of this expansion occurring on grasslands (77%) and shrublands (8%). To understand the effects of this conversion on global C cycling, we created novel, spatially explicit biomass maps for these biomes by combining existing satellite data products with models derived from field measurements. We then estimated changes in existing C stocks by combining our derived data with existing Landsat-scale data on land cover, land conversion, forest biomass and soil organic carbon (C) stocks. We find that conversion results in annual C losses of approximately 25 Tg C from US terrestrial ecosystems. Nationwide, roughly 80% of total emissions result from committed soil organic C losses. While biomass losses from expansion into forests and wetlands are disproportionately high per unit area, the vast majority of C losses occurred in grassland ecosystems, with grassland roots representing close to 70% of total biomass losses across all biomes. C losses are partially offset each year by agricultural abandonment which we estimate could sequester as much as 15 Tg C, annually. Taken together, we find that US agricultural expansion results in net annual emissions of 10 Tg C which is nearly 30% of emissions from existing US croplands. Our estimate is comparable to a recent analogous estimate for conversion of the Brazilian Cerrado and is equivalent to 10% of annual C losses from pantropical deforestation, suggesting that the effects of US cropland expansion could be globally significant.
Biggs, Holly M.; Hertz, Julian T.; Munishi, O. Michael; Galloway, Renee L.; Marks, Florian; Saganda, Wilbrod; Maro, Venance P.; Crump, John A.
2013-01-01
Background The incidence of leptospirosis, a neglected zoonotic disease, is uncertain in Tanzania and much of sub-Saharan Africa, resulting in scarce data on which to prioritize resources for public health interventions and disease control. In this study, we estimate the incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania. Methodology/Principal Findings We conducted a population-based household health care utilization survey in two districts in the Kilimanjaro Region of Tanzania and identified leptospirosis cases at two hospital-based fever sentinel surveillance sites in the Kilimanjaro Region. We used multipliers derived from the health care utilization survey and case numbers from hospital-based surveillance to calculate the incidence of leptospirosis. A total of 810 households were enrolled in the health care utilization survey and multipliers were derived based on responses to questions about health care seeking in the event of febrile illness. Of patients enrolled in fever surveillance over a 1 year period and residing in the 2 districts, 42 (7.14%) of 588 met the case definition for confirmed or probable leptospirosis. After applying multipliers to account for hospital selection, test sensitivity, and study enrollment, we estimated the overall incidence of leptospirosis ranges from 75–102 cases per 100,000 persons annually. Conclusions/Significance We calculated a high incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania, where leptospirosis incidence was previously unknown. Multiplier methods, such as used in this study, may be a feasible method of improving availability of incidence estimates for neglected diseases, such as leptospirosis, in resource constrained settings. PMID:24340122
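The multiplier method described above scales hospital-detected cases up for health-care seeking, hospital selection, test sensitivity, and enrollment, and then divides by the catchment population. The sketch below works through that arithmetic with placeholder multipliers and population; only the case count (42) comes from the abstract.

```python
# Worked sketch of the multiplier method: detected cases are divided by the
# fraction captured at each step, then converted to a rate per 100,000.
# All multiplier values and the population below are placeholders.
def incidence_per_100k(cases, population, multipliers):
    adjusted = cases
    for m in multipliers.values():
        adjusted /= m          # each multiplier is the fraction captured at that step
    return adjusted / population * 100_000

multipliers = {
    "sought_hospital_care": 0.60,    # from a household survey (assumed value)
    "attended_sentinel_site": 0.50,  # hospital selection (assumed value)
    "test_sensitivity": 0.80,        # assumed value
    "enrolled_in_study": 0.70,       # assumed value
}
print(f"{incidence_per_100k(42, 350_000, multipliers):.0f} cases per 100,000 per year")
```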
NASA Astrophysics Data System (ADS)
Henn, Brian; Painter, Thomas H.; Bormann, Kat J.; McGurk, Bruce; Flint, Alan L.; Flint, Lorraine E.; White, Vince; Lundquist, Jessica D.
2018-02-01
Hydrologic variables such as evapotranspiration (ET) and soil water storage are difficult to observe across spatial scales in complex terrain. Streamflow and lidar-derived snow observations provide information about distributed hydrologic processes such as snowmelt, infiltration, and storage. We use a distributed streamflow data set across eight basins in the upper Tuolumne River region of Yosemite National Park in the Sierra Nevada mountain range, and the NASA Airborne Snow Observatory (ASO) lidar-derived snow data set over 3 years (2013-2015) during a prolonged drought in California, to estimate basin-scale water balance components. We compare snowmelt and cumulative precipitation over periods from the ASO flight to the end of the water year against cumulative streamflow observations. The basin water balance residual term (snow melt plus precipitation minus streamflow) is calculated for each basin and year. Using soil moisture observations and hydrologic model simulations, we show that the residual term represents short-term changes in basin water storage over the snowmelt season, but that over the period from peak snow water equivalent (SWE) to the end of summer, it represents cumulative basin-mean ET. Warm-season ET estimated from this approach is 168 (85-252 at 95% confidence), 162 (0-326) and 191 (48-334) mm averaged across the basins in 2013, 2014, and 2015, respectively. These values are lower than previous full-year and point ET estimates in the Sierra Nevada, potentially reflecting reduced ET during drought, the effects of spatial variability, and the part-year time period. Using streamflow and ASO snow observations, we quantify spatially-distributed hydrologic processes otherwise difficult to observe.
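The residual calculation described above is simple bookkeeping: ET over the period is approximately snowmelt plus precipitation minus streamflow, with any storage change treated as small over the full snowmelt season. The sketch below shows that arithmetic with illustrative basin-averaged depths, not the study's values.

```python
# Simple sketch of the basin water balance residual: over the period from an
# ASO snow survey to the end of the water year,
# ET ~ snowmelt + precipitation - streamflow - change in storage (assumed small).
# All depths are basin-averaged millimeters and are illustrative only.
def residual_et_mm(snowmelt_mm, precip_mm, streamflow_mm, delta_storage_mm=0.0):
    return snowmelt_mm + precip_mm - streamflow_mm - delta_storage_mm

print(f"ET ~ {residual_et_mm(snowmelt_mm=400.0, precip_mm=60.0, streamflow_mm=290.0):.0f} mm")
```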
Brenner, Hermann; Altenhofen, Lutz; Stock, Christian; Hoffmeister, Michael
2014-09-01
Most colorectal cancers develop from adenomas. We aimed to estimate sex- and age-specific incidence rates of colorectal adenomas and to assess their potential implications for colorectal cancer screening strategies. Sex- and age-specific incidence rates of colorectal adenomas were derived by a birth cohort analysis using data from 4,322,085 screening colonoscopies conducted in Germany and recorded in a national database in 2003-2012. In addition, cumulative risks of colorectal cancer among colonoscopically neoplasm-free men and women were estimated by combining adenoma incidence rates with previously derived adenoma-colorectal cancer transition rates. Estimated annual incidence in percentage (95% confidence interval) in age groups 55-59, 60-64, 65-69, 70-74, and 75-79 was 2.4 (2.2-2.6), 2.3 (2.1-2.6), 2.4 (2.1-2.6), 2.2 (1.8-2.5), and 1.8 (1.2-2.3) among men, and 1.4 (1.3-1.5), 1.5 (1.4-1.7), 1.6 (1.4-1.8), 1.6 (1.3-1.8), and 1.2 (0.8-1.6) among women. Estimated 10- and 15-year risks of clinically manifest colorectal cancer were 0.1% and 0.5% or lower, respectively, in all groups assessed. Annual incidence rates of colorectal adenomas are below 2.5% and 2% among men and women, respectively, and show little variation by age. Risk of clinically manifest colorectal cancer is expected to be very small within 10 years and beyond after negative colonoscopy for men and women at all ages. The use of rescreening after a negative screening colonoscopy above 60 years of age may be very limited.
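A back-of-the-envelope version of the risk calculation described above combines annual adenoma incidence with adenoma-to-cancer transition probabilities over the follow-up horizon. The sketch below does this with placeholder transition probabilities; it is not the previously derived transition model used in the paper.

```python
# Back-of-the-envelope sketch: cumulative cancer risk after a negative
# colonoscopy, approximated as the sum over years of newly arising adenomas
# times their chance of progressing to cancer within the remaining follow-up.
# The transition probabilities below are placeholders, not the paper's rates.
def cumulative_crc_risk(annual_adenoma_incidence, transition_prob_by_lag, horizon_years):
    risk = 0.0
    for year in range(1, horizon_years + 1):
        remaining = horizon_years - year
        risk += annual_adenoma_incidence * transition_prob_by_lag(remaining)
    return risk

# Assumed: essentially no progression within 5 years, ~1% within 10 years.
trans = lambda lag: 0.0 if lag < 5 else 0.01
risk_10yr = cumulative_crc_risk(0.024, trans, 10)   # 2.4%/yr adenoma incidence (men 55-59)
print(f"cumulative 10-year CRC risk: {risk_10yr:.2%}")
```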