Sample records for estimate source strength

  1. Strength estimation of a moving 125Iodine source during implantation in brachytherapy: application to linked sources.

    PubMed

    Tanaka, Kenichi; Endo, Satoru; Tateoka, Kunihiko; Asanuma, Osamu; Hori, Masakazu; Takagi, Masaru; Bengua, Gerard; Kamo, Ken-Ichi; Sato, Kaori; Takeda, Hiromitsu; Hareyama, Masato; Sakata, Koh-Ichi; Takada, Jun

    2014-11-01

    This study sought to demonstrate the feasibility of estimating the source strength during implantation in brachytherapy. The requirement for measuring the strengths of the linked sources was investigated. The utilized sources were (125)I with air kerma strengths of 8.38-8.63 U (μGy m(2) h(-1)). Measurements were performed with a plastic scintillator (80 mm × 50 mm × 20 mm in thickness). For a source-to-source distance of 10.5 mm and at source speeds of up to 200 mm s(-1), a counting time of 10 ms and a detector-to-needle distance of 5 mm were found to be the appropriate measurement conditions. The combined standard uncertainty (CSU) with the coverage factor of 1 (k = 1) was ∼15% when using a grid to decrease the interference by the neighboring sources. Without the grid, the CSU (k = 1) was ∼5%, and an 8% overestimation due to the neighboring sources was found to potentially cause additional uncertainty. In order to improve the accuracy in estimating source strength, it is recommended that the measurement conditions should be optimized by considering the tradeoff between the overestimation due to the neighboring sources and the intensity of the measured value, which influences the random error. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  2. In vivo quantitative imaging of point-like bioluminescent and fluorescent sources: Validation studies in phantoms and small animals post mortem

    NASA Astrophysics Data System (ADS)

    Comsa, Daria Craita

    2008-10-01

    There is a real need for improved small animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4 % in the effective attenuation coefficient led to 4 % error in the retrieved depth for source depths of up to 12mm, while the error in the retrieved source strength increased from 5.5 % at 2mm depth, to 18 % at 12mm depth. Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10mm could be estimated within 8 %, and the relative source strength within 20 %. For sources 14mm deep, the inaccuracy in determining the relative source strength increased to 30 %. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15 % in the optical properties would give rise to errors of +/-0.7 mm in the retrieved depth and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. 
However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 from the value estimated in a controlled medium. Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
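
    The core fitting step described above, comparing a diffusion-theory model of a buried point source against a planar surface image and refining depth and strength with a Levenberg-Marquardt routine, can be sketched as follows. This is a minimal illustration only: the homogeneous diffusion-approximation formula, the optical properties, and the synthetic "measured" image are assumptions of the sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Diffusion-approximation fluence at the surface from an isotropic point
# source of strength S at depth d (homogeneous medium, no boundary
# correction; mu_a and mu_s_prime in 1/mm, distances in mm).
def surface_image(params, x, y, mu_a=0.01, mu_s_prime=1.0):
    depth, strength = params
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))        # diffusion coefficient
    mu_eff = np.sqrt(mu_a / D)                   # effective attenuation
    r = np.sqrt(x**2 + y**2 + depth**2)          # source-to-pixel distance
    return strength * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

# Synthetic "measured" image for a source 8 mm deep with unit strength.
xv, yv = np.meshgrid(np.linspace(-15, 15, 61), np.linspace(-15, 15, 61))
measured = surface_image((8.0, 1.0), xv, yv)
measured *= 1 + 0.02 * np.random.default_rng(0).standard_normal(measured.shape)

# Levenberg-Marquardt refinement of depth and strength.
def residuals(params):
    return (surface_image(params, xv, yv) - measured).ravel()

fit = least_squares(residuals, x0=[4.0, 0.5], method="lm")
print("retrieved depth (mm) and strength:", fit.x)
```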

  3. Assessment of an apparent relationship between availability of soluble carbohydrates and reduced nitrogen during floral initiation in tobacco

    NASA Technical Reports Server (NTRS)

    Raper, C. D., Jr. (Principal Investigator); Thomas, J. F.; Tolley-Henry, L.; Rideout, J. W.

    1988-01-01

    Daily relative accumulation rate of soluble carbohydrates (RARS) and reduced nitrogen (RARN) in the shoot, as estimates of source strength, were compared with daily relative growth rates (RGR) of the shoot, as an estimate of sink demand, during floral transformation in apical meristems of tobacco (Nicotiana tabacum 'NC 2326') grown at day/night temperatures of 18/14, 22/18, 26/22, 30/26, and 34/30 C. Source strength was assumed to exceed sink demand for either carbohydrates or nitrogen when the ratio of RARS/RGR or RARN/RGR was greater than unity, and sink demand was assumed to exceed source strength when the ratio was less than unity. Time of floral initiation, which was delayed up to 21 days with increases in temperature over the experimental range, was associated with intervals in which source strength of either carbohydrate or nitrogen exceeded sink demand, while sink demand for the other exceeded source strength. Floral initiation was not observed during intervals in which source strengths of both carbohydrates and nitrogen were greater than or less than sink demand. These results indicate that floral initiation is responsive to an imbalance in the relative availabilities of carbohydrate and nitrogen.
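
    A tiny numerical illustration of the ratio test described above, with hypothetical daily rates (not values from the study):

```python
# Hypothetical daily relative rates (fraction per day) for one temperature
# treatment; a ratio above 1 means source strength exceeds sink demand.
RARS, RARN, RGR = 0.12, 0.08, 0.10   # carbohydrate, reduced N, shoot growth

carb_ratio = RARS / RGR    # 1.2 -> carbohydrate supply exceeds demand
nitro_ratio = RARN / RGR   # 0.8 -> nitrogen demand exceeds supply

# The imbalance (one ratio > 1, the other < 1) is the condition the study
# associates with floral initiation.
imbalance = (carb_ratio > 1) != (nitro_ratio > 1)
print(carb_ratio, nitro_ratio, imbalance)
```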

  4. An in vitro verification of strength estimation for moving an 125I source during implantation in brachytherapy.

    PubMed

    Tanaka, Kenichi; Kajimoto, Tsuyoshi; Hayashi, Takahiro; Asanuma, Osamu; Hori, Masakazu; Kamo, Ken-Ichi; Sumida, Iori; Takahashi, Yutaka; Tateoka, Kunihiko; Bengua, Gerard; Sakata, Koh-Ichi; Endo, Satoru

    2018-04-11

    This study aims to demonstrate the feasibility of a method for estimating the strength of a moving brachytherapy source during implantation in a patient. Experiments were performed under the same conditions as in the actual treatment, except that the source was not implanted into a patient. The brachytherapy source selected for this study was 125I with an air kerma strength of 0.332 U (μGy m2 h-1), and the detector used was a plastic scintillator with dimensions of 10 cm × 5 cm × 5 cm. A calibration factor to convert the counting rate of the detector to the source strength was measured, and then the accuracy of the proposed method was investigated for a manually driven source. The accuracy was found to be under 10% when the shielding effect of additional needles for implantation at other positions was corrected, and about 30% when the shielding was not corrected. Even without shielding correction, the proposed method can detect a dead or dropped source, implantation of a source with the wrong strength, and a mistake in the number of sources implanted. Furthermore, when the correction was applied, the achieved accuracy approached the 7% level required to identify an Oncoseed 6711 (125I) seed with an unintended strength among the commercially supplied values of 0.392, 0.462 and 0.533 U.
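
    The conversion at the heart of the method, a counting rate multiplied by a measured calibration factor and optionally corrected for shielding by other needles, can be sketched as below. All numbers are placeholders, not values from the study.

```python
# Convert a measured counting rate to an estimated air kerma strength using
# a calibration factor, optionally correcting for shielding by additional
# needles.  The numbers below are illustrative placeholders only.
def estimate_source_strength(count_rate_cps, cal_factor_U_per_cps,
                             shielding_transmission=1.0):
    """Return estimated strength in U (uGy m^2 / h).

    shielding_transmission: fraction of counts surviving attenuation by
    other needles (1.0 = no shielding correction applied).
    """
    return count_rate_cps * cal_factor_U_per_cps / shielding_transmission

raw = estimate_source_strength(1200.0, 2.8e-4)               # uncorrected
corrected = estimate_source_strength(1200.0, 2.8e-4, 0.92)   # corrected
print(f"uncorrected: {raw:.3f} U, shielding-corrected: {corrected:.3f} U")
```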

  5. Ionic strength and DOC determinations from various freshwater sources to the San Francisco Bay

    USGS Publications Warehouse

    Hunter, Y.R.; Kuwabara, J.S.

    1994-01-01

    Accurate estimation of dissolved organic carbon (DOC) across the salinity gradient is important for understanding the extent to which DOC may influence the speciation of metals such as zinc and copper. A low-temperature persulfate/oxygen/ultraviolet wet-oxidation procedure was used to analyze DOC in ionic-strength-adjusted samples from major freshwater sources of the northern and southern regions of San Francisco Bay. The ionic strength of samples was modified with a chemically defined seawater medium up to 0.7 M. The results indicated a minimal effect of ionic strength on oxidation efficiency for DOC sources to the Bay over an ionic strength gradient of 0.0 to 0.7 M. There were no major impacts of ionic strength on two Suwannee River fulvic acid samples. In general, the effects associated with ionic strength were smaller than the differences observed between high- and low-temperature oxidation methods in the aquatic environment.

  6. Estimation of multiple sound sources with data and model uncertainties using the EM and evidential EM algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme

    2016-01-01

    This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm known as the Evidential EM (E2M) algorithm. Finally, both simulations and real experiments are presented to illustrate the advantage of using the EM algorithm in the case without uncertainty and the E2M algorithm in the case of uncertain measurements.

  7. Global atmospheric concentrations and source strength of ethane

    NASA Technical Reports Server (NTRS)

    Blake, D. R.; Rowland, F. S.

    1986-01-01

    A study of the variation in ethane (C2H6) concentration between northern and southern latitudes over three years is presented together with a new estimate of its source strength. Ethane concentrations vary from 0.07 to 2 p.p.b.v. (parts per billion by volume) in air samples collected in remote surface locations in the Pacific (latitude 71 N-47 S) in all four seasons between September 1984 and June 1985. The variations are consistent with southerly transport from sources located chiefly in the Northern Hemisphere, further modified by seasonal variations in the strength of the reaction of C2H6 with OH radicals. These global data can be combined with concurrent data for CH4 and the laboratory reaction rates of each with OH to provide an estimate of three months as the average atmospheric lifetime for C2H6 and 13 ± 3 Mtons for its annual atmospheric release.
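
    The lifetime-and-budget arithmetic implied above can be sketched as follows; the rate constant, OH concentration, and atmospheric burden are round illustrative numbers, not the paper's values.

```python
# Steady-state budget sketch: lifetime tau = 1 / (k_OH * [OH]), and annual
# release ~ global burden / tau.  All numbers are illustrative only.
SECONDS_PER_YEAR = 3.15e7

k_oh = 2.5e-13          # cm^3 molecule^-1 s^-1, ethane + OH (illustrative)
oh_conc = 5e5           # molecules cm^-3, global-mean OH (illustrative)
burden_mt = 3.5         # Mt of C2H6 in the atmosphere (illustrative)

tau_s = 1.0 / (k_oh * oh_conc)
tau_months = tau_s / SECONDS_PER_YEAR * 12.0

annual_release_mt = burden_mt * SECONDS_PER_YEAR / tau_s
print(f"lifetime ~ {tau_months:.1f} months, "
      f"implied release ~ {annual_release_mt:.0f} Mt/yr")
```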

  8. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. The identification, here, refers to the estimation of the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieval of source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.
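
    The linear part of such a retrieval can be sketched with a simple Gaussian plume as the forward model: once candidate source locations are fixed, the strengths follow from a (non-negative) least-squares solve. The plume coefficients, geometry, and measurements below are illustrative assumptions, not the study's dispersion model.

```python
import numpy as np
from scipy.optimize import nnls

def plume_coeff(x_rec, y_rec, x_src, y_src, u=3.0):
    """Ground-level concentration per unit emission rate (s/m^3) from a
    ground-level point source: simple Gaussian-plume form with power-law
    dispersion coefficients (illustrative values)."""
    dx, dy = x_rec - x_src, y_rec - y_src
    if dx <= 0:                      # receptor upwind of the source
        return 0.0
    sig_y, sig_z = 0.08 * dx**0.9, 0.06 * dx**0.9
    return np.exp(-0.5 * (dy / sig_y) ** 2) / (np.pi * u * sig_y * sig_z)

# Receptors and (assumed known) candidate source locations, in metres.
receptors = [(200, -40), (300, 10), (400, 60), (500, -20), (600, 30)]
sources = [(0, 0), (50, 80)]

# Source-receptor matrix and synthetic measurements from "true" strengths
# of 2.0 and 0.5 g/s.
G = np.array([[plume_coeff(xr, yr, xs, ys) for xs, ys in sources]
              for xr, yr in receptors])
y = G @ np.array([2.0, 0.5])

q_hat, _ = nnls(G, y)               # non-negative least-squares strengths
print("retrieved strengths (g/s):", q_hat)
```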

  9. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    PubMed

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework, applied to a CO2 industrial point source located in Biganos, France. CO2 emission estimates were obtained by applying the mass balance method to aircraft density measurements, while further estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventory data were used as the reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, particularly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
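
    The mass-balance step can be sketched as a discrete integral of the concentration enhancement times the wind component perpendicular to the flight track over a downwind crosswind/vertical screen; the grid and values below are fabricated for illustration.

```python
import numpy as np

def mass_balance_emission(conc_enh_kg_m3, wind_perp_m_s, dy_m, dz_m):
    """Point-source emission rate (kg/s) from a downwind crosswind/vertical
    screen: sum of (enhancement above background) * perpendicular wind over
    the plume cross-section.  Inputs are 2-D arrays on a regular grid."""
    return np.sum(conc_enh_kg_m3 * wind_perp_m_s) * dy_m * dz_m

# Illustrative screen: 50 crosswind x 10 vertical cells of 100 m x 50 m.
rng = np.random.default_rng(1)
enhancement = np.abs(rng.normal(2e-6, 5e-7, size=(10, 50)))   # kg CO2 / m^3
wind_perp = np.full((10, 50), 5.0)                            # m/s

q = mass_balance_emission(enhancement, wind_perp, dy_m=100.0, dz_m=50.0)
print(f"estimated emission ~ {q:.1f} kg CO2 / s")
```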

  10. Noise Strength Estimates of Three SGRs: Swift J1822.3-1606, SGR J1833-0832 and Swift J1834.9-0846

    NASA Astrophysics Data System (ADS)

    Serim, M. M.; Inam, S. Ç.; Baykal, A.

    2012-12-01

    We studied timing solutions of the three magnetars Swift J1822.3-1606, SGR J1833-0832 and Swift J1834.9-0846. We extracted the residuals of pulse arrival times with respect to a constant pulse frequency derivative. Using polynomial estimator techniques, we estimated the noise strengths of the sources. Our results showed that the noise strength and spin-down rate are strongly correlated, indicating that an increase in spin-down rate leads to more torque noise on the magnetars. We are in the process of extending our analysis to other magnetars.

  11. Contribution of Changing Sources and Sinks to the Growth Rate of Atmospheric Methane Concentrations for the Last Two Decades

    NASA Technical Reports Server (NTRS)

    Matthews, Elaine; Walter, B.; Bogner, J.; Sarma, D.; Portney, B.; Hansen, James (Technical Monitor)

    2000-01-01

    In situ measurements of atmospheric methane concentrations begun in the early 1980s show decadal trends, as well as large interannual variations, in growth rate. Recent research indicates that while wetlands can explain several of the large growth anomalies for individual years, the decadal trend may be the combined effect of increasing sinks, due to increases in tropospheric OH, and stabilizing sources. We discuss new 20-year histories of annual, global source strengths for all major methane sources, i.e., natural wetlands, rice cultivation, ruminant animals, landfills, fossil fuels, and biomass burning, and present estimates of the temporal pattern of the sink required to reconcile these sources and atmospheric concentrations over the time period. Analysis of the individual emission sources, together with model-derived estimates of the OH sink strength, indicates that the growth rate of atmospheric methane observed over the last 20 years can only be explained by a combination of changes in source emissions and an increasing tropospheric sink.
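
    The source-sink reconciliation described above can be illustrated with a one-box budget; the conversion factor and lifetimes below are round, commonly quoted values used purely for illustration.

```python
# One-box methane budget: d(C)/dt = S / k - C / tau, where C is the global
# mean mixing ratio (ppb), S the total source (Tg CH4 / yr), tau the sink
# lifetime (yr), and k the Tg-to-ppb conversion (~2.78 Tg per ppb is a
# commonly used value; all numbers here are illustrative).
K_TG_PER_PPB = 2.78

def growth_rate_ppb_per_yr(conc_ppb, source_tg, lifetime_yr):
    return source_tg / K_TG_PER_PPB - conc_ppb / lifetime_yr

# Example: a stabilizing source combined with a slowly strengthening OH sink.
print(growth_rate_ppb_per_yr(1750.0, 550.0, 9.0))   # ppb/yr
print(growth_rate_ppb_per_yr(1750.0, 550.0, 8.6))   # stronger sink -> slower growth
```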

  12. Vapor Intrusion Estimation Tool for Unsaturated Zone Contaminant Sources. User’s Guide

    DTIC Science & Technology

    2016-08-30

    Soil vapor extraction (SVE) is a prevalent remediation approach for volatile contaminants...strength and location, vadose zone transport, and a model for estimating movement of soil-gas vapor contamination into buildings. The tool may be...framework for estimating the impact of a vadose zone contaminant source on soil gas concentrations and vapor intrusion into a building

  13. Cancer Related-Knowledge - Small Area Estimates

    Cancer.gov

    These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey, and auxiliary variables obtained from relevant sources and borrow strength from other areas with similar characteristics.

  14. RAiSE III: 3C radio AGN energetics and composition

    NASA Astrophysics Data System (ADS)

    Turner, Ross J.; Shabala, Stanislav S.; Krause, Martin G. H.

    2018-03-01

    Kinetic jet power estimates based exclusively on observed monochromatic radio luminosities are highly uncertain due to confounding variables and a lack of knowledge about some aspects of the physics of active galactic nuclei (AGNs). We propose a new methodology to calculate the jet powers of the largest, most powerful radio sources based on combinations of their size, lobe luminosity, and shape of their radio spectrum; this approach avoids the uncertainties encountered by previous relationships. The outputs of our model are calibrated using hydrodynamical simulations and tested against independent X-ray inverse-Compton measurements. The jet powers and lobe magnetic field strengths of radio sources are found to be recovered using solely the lobe luminosity and spectral curvature, enabling the intrinsic properties of unresolved high-redshift sources to be inferred. By contrast, the radio source ages cannot be estimated without knowledge of the lobe volumes. The monochromatic lobe luminosity alone is incapable of accurately estimating the jet power or source age without knowledge of the lobe magnetic field strength and size, respectively. We find that, on average, the lobes of the Third Cambridge Catalogue of Radio Sources (3C) have magnetic field strengths approximately a factor three lower than the equipartition value, inconsistent with equal energy in the particles and the fields at the 5σ level. The particle content of 3C radio lobes is discussed in the context of complementary observations; we do not find evidence favouring an energetically dominant proton population.

  15. A Shock-Refracted Acoustic Wave Model for Screech Amplitude in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2007-01-01

    A physical model is proposed for the estimation of the screech amplitude in underexpanded supersonic jets. The model is based on the hypothesis that the interaction of a plane acoustic wave with stationary shock waves provides amplification of the transmitted acoustic wave upon traversing the shock. Powell's discrete source model for screech incorporating a stationary array of acoustic monopoles is extended to accommodate variable source strength. The proposed model reveals that the acoustic sources are of increasing strength with downstream distance. It is shown that the screech amplitude increases with the fully expanded jet Mach number. Comparisons of predicted screech amplitude with available test data show satisfactory agreement. The effect of variable source strength on the directivity of the fundamental (first harmonic, lowest frequency mode) and the second harmonic (overtone) is found to be unimportant with regard to the principal lobe (main or major lobe) of considerable relative strength, and is appreciable only in the secondary or minor lobes (of relatively weaker strength).

  16. A Shock-Refracted Acoustic Wave Model for the Prediction of Screech Amplitude in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2007-01-01

    A physical model is proposed for the estimation of the screech amplitude in underexpanded supersonic jets. The model is based on the hypothesis that the interaction of a plane acoustic wave with stationary shock waves provides amplification of the transmitted acoustic wave upon traversing the shock. Powell's discrete source model for screech incorporating a stationary array of acoustic monopoles is extended to accommodate variable source strength. The proposed model reveals that the acoustic sources are of increasing strength with downstream distance. It is shown that the screech amplitude increases with the fully expanded jet Mach number. Comparisons of predicted screech amplitude with available test data show satisfactory agreement. The effect of variable source strength on directivity of the fundamental (first harmonic, lowest frequency mode) and the second harmonic (overtone) is found to be unimportant with regard to the principal lobe (main or major lobe) of considerable relative strength, and is appreciable only in the secondary or minor lobes (of relatively weaker strength).

  17. Application and evaluation of a rapid response earthquake-triggered landslide model to the 25 April 2015 Mw 7.8 Gorkha earthquake, Nepal

    USGS Publications Warehouse

    Gallen, Sean F.; Clark, Marin K.; Godt, Jonathan W.; Roback, Kevin; Niemi, Nathan A

    2017-01-01

    The 25 April 2015 Mw 7.8 Gorkha earthquake produced strong ground motions across an approximately 250 km by 100 km swath in central Nepal. To assist disaster response activities, we modified an existing earthquake-triggered landslide model based on a Newmark sliding block analysis to estimate the extent and intensity of landsliding and landslide dam hazard. Landslide hazard maps were produced using Shuttle Radar Topography Mission (SRTM) digital topography, peak ground acceleration (PGA) information from the U.S. Geological Survey (USGS) ShakeMap program, and assumptions about the regional rock strength based on end-member values from previous studies. The instrumental record of seismicity in Nepal is poor, so PGA estimates were based on empirical Ground Motion Prediction Equations (GMPEs) constrained by teleseismic data and felt reports. We demonstrate a non-linear dependence of modeled landsliding on aggregate rock strength, where the number of landslides decreases exponentially with increasing rock strength. Model estimates are less sensitive to PGA at steep slopes (> 60°) compared to moderate slopes (30–60°). We compare forward model results to an inventory of landslides triggered by the Gorkha earthquake. We show that moderate rock strength inputs overestimate landsliding in regions beyond the main slip patch, which may in part be related to poorly constrained PGA estimates for this event at far distances from the source area. Directly above the main slip patch, however, the moderate strength model accurately estimates the total number of landslides within the resolution of the model (landslides ≥ 0.0162 km2; observed n = 2214, modeled n = 2987), but the pattern of landsliding differs from observations. This discrepancy is likely due to the unaccounted-for effects of variable material strength and local topographic amplification of strong ground motion, as well as other simplifying assumptions about source characteristics and their relationship to landsliding.
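
    The Newmark sliding-block calculation underlying the hazard model can be sketched as a double integration of ground acceleration in excess of a critical (yield) acceleration; the acceleration record and critical acceleration below are synthetic, and the published model's mapping from displacement to landslide probability is not reproduced.

```python
import numpy as np

def newmark_displacement(acc_g, dt, a_c_g):
    """Cumulative rigid-block (Newmark) displacement in metres.

    acc_g : ground acceleration time series in units of g
    dt    : time step (s)
    a_c_g : critical (yield) acceleration in g, set by slope geometry and
            rock strength
    """
    g = 9.81
    vel, disp = 0.0, 0.0
    for a in acc_g:
        excess = (a - a_c_g) * g          # driving acceleration above yield
        if vel > 0.0 or excess > 0.0:     # block sliding (or starting to)
            vel = max(vel + excess * dt, 0.0)
            disp += vel * dt
    return disp

# Synthetic 10 s strong-motion record (illustrative only).
t = np.arange(0, 10, 0.01)
acc = 0.4 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)

print(f"Newmark displacement ~ {newmark_displacement(acc, 0.01, 0.15):.3f} m")
```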

  18. Source strength verification and quality assurance of preloaded brachytherapy needles using a CMOS flat panel detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golshan, Maryam, E-mail: maryam.golshan@bccancer.bc.ca; Spadinger, Ingrid; Chng, Nick

    2016-06-15

    Purpose: Current methods of low dose rate brachytherapy source strength verification for sources preloaded into needles consist of either assaying a small number of seeds from a separate sample belonging to the same lot used to load the needles or performing batch assays of a subset of the preloaded seed trains. Both of these methods are cumbersome and have the limitations inherent to sampling. The purpose of this work was to investigate an alternative approach that uses an image-based, autoradiographic system capable of the rapid and complete assay of all sources without compromising sterility. Methods: The system consists of a flat panel image detector, an autoclavable needle holder, and software to analyze the detected signals. The needle holder was designed to maintain a fixed vertical spacing between the needles and the image detector, and to collimate the emissions from each seed. It also provides a sterile barrier between the needles and the imager. The image detector has a sufficiently large image capture area to allow several needles to be analyzed simultaneously. Several tests were performed to assess the accuracy and reproducibility of source strengths obtained using this system. Three different seed models (Oncura 6711 and 9011 125I seeds, and IsoAid Advantage 103Pd seeds) were used in the evaluations. Seeds were loaded into trains with at least 1 cm spacing. Results: Using our system, it was possible to obtain linear calibration curves with coverage factor k = 1 prediction intervals of less than ±2% near the centre of their range for the three source models. The uncertainty budget calculated from a combination of type A and type B estimates of potential sources of error was somewhat larger, yielding (k = 1) combined uncertainties for individual seed readings of 6.2% for 125I 6711 seeds, 4.7% for 125I 9011 seeds, and 11.0% for Advantage 103Pd seeds. Conclusions: This study showed that a flat panel detector dosimetry system is a viable option for source strength verification in preloaded needles, as it is capable of measuring all of the sources intended for implantation. Such a system has the potential to directly and efficiently estimate individual source strengths, the overall mean source strength, and the positions within the seed-spacer train.
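
    The linear calibration with a k = 1 prediction interval can be illustrated with an ordinary least-squares fit; the readings and reference strengths below are fabricated placeholders, and the interval shown is the standard OLS prediction interval rather than the paper's full uncertainty budget.

```python
import numpy as np

# Hypothetical detector readings (arbitrary units) vs reference air kerma
# strengths (U) for a set of calibration seeds.
reading = np.array([101.0, 153.0, 198.0, 252.0, 305.0, 341.0])
strength = np.array([0.20, 0.30, 0.39, 0.50, 0.61, 0.68])

# Ordinary least-squares line: strength = a * reading + b.
A = np.vstack([reading, np.ones_like(reading)]).T
(a, b), res, *_ = np.linalg.lstsq(A, strength, rcond=None)

# k = 1 (one standard deviation) prediction interval for a new reading x0.
n = len(reading)
s = np.sqrt(res[0] / (n - 2))                      # residual std. deviation
x0 = 230.0
half_width = s * np.sqrt(1 + 1/n + (x0 - reading.mean())**2
                         / np.sum((reading - reading.mean())**2))
print(f"predicted strength {a*x0 + b:.3f} U  +/- {half_width:.3f} U (k=1)")
```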

  19. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of the area source pollutant strength is a relevant issue for the atmospheric environment. This constitutes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: the multi-layer perceptron. The connection weights of the neural network are computed using the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, in which the objective function is given by the squared difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
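
    The regularized inverse solution used above as the comparison case can be sketched with ordinary (zeroth-order) Tikhonov regularization and an L-curve scan; note the paper itself uses a second-order maximum entropy operator, and the source-receptor matrix and data below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder source-receptor (transition) matrix and noisy observations
# for an area source discretized into 20 cells, observed at 6 receptors.
G = rng.random((6, 20))
x_true = rng.random(20)
y = G @ x_true + 0.01 * rng.standard_normal(6)

def tikhonov(G, y, lam):
    """Minimize ||G x - y||^2 + lam^2 ||x||^2 (zeroth-order Tikhonov)."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ y)

# L-curve: residual norm vs solution norm over a range of lambda; the
# "corner" (maximum curvature) is normally chosen as the regularization
# parameter.  Here we simply tabulate the two norms.
for lam in [1e-3, 1e-2, 1e-1, 1.0]:
    x = tikhonov(G, y, lam)
    print(f"lambda={lam:7.3f}  residual={np.linalg.norm(G @ x - y):.4f}  "
          f"solution norm={np.linalg.norm(x):.4f}")
```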

  20. Statistical interpretation of pollution data from satellites. [for levels distribution over metropolitan area]

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Green, R. N.; Young, G. R.

    1974-01-01

    The NIMBUS-G environmental monitoring satellite has an instrument (a gas correlation spectrometer) onboard for measuring the mass of a given pollutant within a gas volume. The present paper treats the problem of how this type of measurement can be used to estimate the distribution of pollutant levels in a metropolitan area. Estimation methods are used to develop this distribution. The pollution concentration caused by a point source is modeled as a Gaussian plume. The uncertainty in the measurements is used to determine the accuracy of estimating the source strength, the wind velocity, the diffusion coefficients and the source location.
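
    Because a Gaussian-plume field is linear in the source strength, a least-squares estimate of the strength from a set of burden measurements has a one-line closed form, sketched below with fabricated numbers.

```python
import numpy as np

# Since a Gaussian-plume concentration field is linear in the source
# strength Q, a least-squares estimate of Q from measurements c_obs and a
# unit-emission model field m is simply Q = (m . c_obs) / (m . m).
# The model field and measurements below are fabricated for illustration.
rng = np.random.default_rng(3)
m = rng.random(25)                  # modeled burden per unit source strength
q_true = 40.0                       # "true" strength (arbitrary units)
c_obs = q_true * m + 0.5 * rng.standard_normal(25)

q_hat = m @ c_obs / (m @ m)
var_q = 0.5**2 / (m @ m)            # variance if measurement noise sigma = 0.5
print(f"Q estimate = {q_hat:.1f} +/- {np.sqrt(var_q):.1f}")
```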

  1. IR spectroscopy as a source of data on bond strengths

    NASA Astrophysics Data System (ADS)

    Finkelshtein, E. I.; Shamsiev, R. S.

    2018-02-01

    The aim of this work is the estimation of double bond strengths, namely C=O bonds in ketones and aldehydes and C=C bonds in various compounds. When these bonds are broken, one or both of the fragments formed are carbenes, for which experimental data on the enthalpies of formation (ΔHf298) are scarce. Thus, for the estimation of ΔHf298 of the corresponding carbenes, empirical equations were proposed based on different approximations. In addition, quantum chemical calculations of the ΔHf298 values of carbenes were performed, and the data obtained were compared with experimental values and the results of earlier calculations. Equations for the calculation of C=O bond strengths of different ketones and aldehydes from the corresponding stretching frequencies ν(C=O) were derived. Using the proposed equations, the strengths of C=O bonds of 25 ketones and 12 conjugated aldehydes, as well as C=C bonds of 13 hydrocarbons and 7 conjugated aldehydes, were estimated for the first time. Linear correlations of C=C and C=O bond strengths with the bond lengths were established, and equations permitting the estimation of the double bond strengths and lengths with acceptable accuracy were obtained. Also, the strength of the central C=C bond of stilbene was calculated for the first time. The strengths of the double bonds obtained may be regarded as accurate to within ±10-15 kJ/mol.
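
    The frequency-to-bond-strength correlations described above are linear fits; a generic version is sketched below with fabricated (frequency, strength) pairs whose coefficients have no chemical significance.

```python
import numpy as np

# Hypothetical (stretching frequency in cm^-1, bond dissociation enthalpy
# in kJ/mol) pairs for a family of C=O bonds -- placeholders only.
nu = np.array([1665.0, 1680.0, 1700.0, 1715.0, 1735.0])
D = np.array([720.0, 727.0, 736.0, 743.0, 752.0])

# Linear correlation D = a * nu + b, then use it to estimate the strength
# of a new bond from its observed stretching frequency.
a, b = np.polyfit(nu, D, 1)
nu_new = 1690.0
print(f"D ~ {a:.3f}*nu + {b:.1f};  estimated D({nu_new} cm^-1) = "
      f"{a*nu_new + b:.0f} kJ/mol")
```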

  2. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    NASA Astrophysics Data System (ADS)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.

    2015-10-01

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
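
    A one-dimensional toy version of the data-fusion idea is sketched below: a radiograph-derived attenuation map weights the expected counts from each candidate source position, and a Poisson likelihood with a closed-form maximum-likelihood strength is evaluated on a grid. Geometry, efficiencies, and values are assumptions of the sketch, not the published system.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D toy container: a "radiograph" supplies an attenuation coefficient
# mu(x) along the container axis; the expected count rate at a detector
# from a source of strength S at position x falls off with distance and
# with the shielding integrated between source and detector.
x_grid = np.linspace(0.2, 12.0, 60)              # candidate positions (m)
dx = x_grid[1] - x_grid[0]
mu = 0.1 + 0.6 * ((x_grid > 4) & (x_grid < 6))   # dense cargo region (1/m)
det_x = np.array([-1.0, 13.0])                   # detector positions (m)

def expected_counts(S, x_src):
    rates = []
    for d in det_x:
        between = (x_grid - x_src) * (x_grid - d) < 0
        path = np.sum(mu[between]) * dx          # integral of mu along path
        rates.append(S * np.exp(-path) / (d - x_src) ** 2)
    return np.array(rates)

# Synthetic measurement from a source of strength 5e4 at x = 5.2 m.
counts = rng.poisson(expected_counts(5.0e4, 5.2))

# Grid search over position; for each candidate the Poisson ML strength
# has a closed form, S = sum(counts) / sum(unit-strength expectations).
best = None
for x0 in x_grid:
    unit = expected_counts(1.0, x0)
    S_hat = counts.sum() / unit.sum()
    lam = S_hat * unit
    loglik = np.sum(counts * np.log(lam) - lam)
    if best is None or loglik > best[0]:
        best = (loglik, x0, S_hat)

print(f"estimated position {best[1]:.2f} m, strength {best[2]:.2e}")
```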

  3. Bayesian characterization of uncertainty in species interaction strengths.

    PubMed

    Wolf, Christopher; Novak, Mark; Gitelman, Alix I

    2017-06-01

    Considerable effort has been devoted to the estimation of species interaction strengths. This effort has focused primarily on statistical significance testing and obtaining point estimates of parameters that contribute to interaction strength magnitudes, leaving the characterization of uncertainty associated with those estimates unconsidered. We consider a means of characterizing the uncertainty of a generalist predator's interaction strengths by formulating an observational method for estimating a predator's prey-specific per capita attack rates as a Bayesian statistical model. This formulation permits the explicit incorporation of multiple sources of uncertainty. A key insight is the informative nature of several so-called non-informative priors that have been used in modeling the sparse data typical of predator feeding surveys. We introduce to ecology a new neutral prior and provide evidence for its superior performance. We use a case study to consider the attack rates in a New Zealand intertidal whelk predator, and we illustrate not only that Bayesian point estimates can be made to correspond with those obtained by frequentist approaches, but also that estimation uncertainty as described by 95% intervals is more useful and biologically realistic using the Bayesian method. In particular, unlike in bootstrap confidence intervals, the lower bounds of the Bayesian posterior intervals for attack rates do not include zero when a predator-prey interaction is in fact observed. We conclude that the Bayesian framework provides a straightforward, probabilistic characterization of interaction strength uncertainty, enabling future considerations of both the deterministic and stochastic drivers of interaction strength and their impact on food webs.

  4. Acoustic source localization in mixed field using spherical microphone arrays

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Wang, Tong

    2014-12-01

    Spherical microphone arrays have been used for source localization in three-dimensional space recently. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrent relation of spherical harmonics is independent of far-field and near-field mode strengths. Therefore, it is used to develop spherical estimating signal parameter via rotational invariance technique (ESPRIT)-like approach to estimate directions of arrival (DOAs) for both far-field and near-field sources. In the second stage, based on the estimated DOAs, simple one-dimensional MUSIC spectrum is exploited to distinguish far-field and near-field sources and estimate the ranges of near-field sources. The proposed algorithm can avoid multidimensional search and parameter pairing. Simulation results demonstrate the good performance for localizing far-field sources, or near-field ones, or mixed field sources.

  5. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.

  6. Two-dimensional grid-free compressive beamforming.

    PubMed

    Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli

    2017-08-01

    Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes DOAs of sources to lie on a grid. Its performance degrades due to basis mismatch when the assumption is not satisfied. To overcome this limitation for the measurement with plane microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite programming is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on alternating direction method of multipliers is presented to solve the positive semidefinite programming. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, the grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.

  7. Investigation of the physical scaling of sea spray spume droplet production

    NASA Astrophysics Data System (ADS)

    Fairall, C. W.; Banner, M. L.; Peirson, W. L.; Asher, W.; Morison, R. P.

    2009-10-01

    In this paper we report on a laboratory study, the Spray Production and Dynamics Experiment (SPANDEX), conducted at the University of New South Wales Water Research Laboratory in Australia. The goals of SPANDEX were to illuminate physical aspects of spume droplet production and dispersion; verify theoretical simplifications used to estimate the source function from ambient droplet concentration measurements; and examine the relationship between the implied source strength and forcing parameters such as wind speed, surface turbulent stress, and wave properties. Observations of droplet profiles give reasonable confirmation of the basic power law profile relationship that is commonly used to relate droplet concentrations to the surface source strength. This essentially confirms that, even in a wind tunnel, there is a near balance between droplet production and removal by gravitational settling. The observations also indicate considerable droplet mass may be present for sizes larger than 1.5 mm diameter. Phase Doppler Anemometry observations revealed significant mean horizontal and vertical slip velocities that were larger closer to the surface. The magnitude seems too large to be an acceleration time scale effect. Scaling of the droplet production surface source strength proved to be difficult. The wind speed forcing varied only 23% and the stress increased a factor of 2.2. Yet, the source strength increased by about a factor of 7. We related this to an estimate of surface wave energy flux through calculations of the standard deviation of small-scale water surface disturbance, a wave-stress parameterization, and numerical wave model simulations. This energy index only increased by a factor of 2.3 with the wind forcing. Nonetheless, a graph of spray mass surface flux versus surface disturbance energy is quasi-linear with a substantial threshold.

  8. Boreal forest soil erosion and soil-atmosphere carbon exchange

    NASA Astrophysics Data System (ADS)

    Billings, S. A.; Harden, J. W.; O'Donnell, J.; Sierra, C. A.

    2013-12-01

    Erosion may become an increasingly important agent of change in boreal systems with climate warming, due to enhanced ice wedge degradation and increases in the frequency and intensity of stand-replacing fires. Ice wedge degradation can induce ground surface subsidence and lateral movement of mineral soil downslope, and fire can result in the loss of O horizons and live roots, with associated increases in wind- and water-promoted erosion until vegetation re-establishment. It is well-established that soil erosion can induce significant atmospheric carbon (C) source and sink terms, with the strength of these terms dependent on the fate of eroded soil organic carbon (SOC) and the extent to which SOC oxidation and production characteristics change with erosion. In spite of the large SOC stocks in the boreal system and the high probability that boreal soil profiles will experience enhanced erosion in the coming decades, no one has estimated the influence of boreal erosion on the atmospheric C budget, a phenomenon that can serve as a positive or negative feedback to climate. We employed an interactive erosion model that permits the user to define 1) profile characteristics, 2) the erosion rate, and 3) the extent to which each soil layer at an eroding site retains its pre-erosion SOC oxidation and production rates (nox and nprod=0, respectively) vs. adopts the oxidation and production rates of previous, non-eroded soil layers (nox and nprod=1, respectively). We parameterized the model using soil profile characteristics observed at a recently burned site in interior Alaska (Hess Creek), defining SOC content and turnover times. We computed the degree to which post-burn erosion of mineral soil generates an atmospheric C sink or source while varying erosion rates and assigning multiple values of nox and nprod between 0 and 1, providing insight into the influence of erosion rate, SOC oxidation, and SOC production on C dynamics in this and similar profiles. Varying nox and nprod did not induce meaningful changes in model estimates of atmospheric C source or sink strength, likely due to the low turnover rate of SOC in this system. However, variation in mineral soil erosion rates induced large shifts in the source and sink strengths for atmospheric C; after 50 y of mineral soil erosion at 5 cm y-1, we observed a maximum C source of 35 kg C m-2 and negligible sink strength. Doubling the erosion rate approximately doubled the source strength. Scaling these estimates to the region requires estimates of the area undergoing mineral soil erosion in forests similar to those modeled. We suggest that erosion is an important but little studied feature of fire-driven boreal systems that will influence atmospheric CO2 budgets.

  9. Joint Application of Concentrations and Isotopic Signatures to Investigate the Global Atmospheric Carbon Monoxide Budget: Inverse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Park, K.; Emmons, L. K.; Mak, J. E.

    2007-12-01

    Carbon monoxide is not only an important component for determining the atmospheric oxidizing capacity but also a key trace gas in the atmospheric chemistry of the Earth's background environment. The global CO cycle and its change are closely related to both the change of the CO mixing ratio and the change of source strength. Previously, most top-down estimation techniques for the global CO budget have applied only the concentrations of CO. Since CO from certain sources has a unique isotopic signature, its isotopes provide additional information to constrain its sources. Thus, coupling the concentration and isotope-fraction information makes it possible to tightly constrain the CO flux from each source and allows better estimates of the global CO budget. MOZART4 (Model for Ozone And Related chemical Tracers), a 3-D global chemical transport model developed at NCAR, MPI for Meteorology and NOAA/GFDL, is used to simulate the global CO concentration and its isotopic signature. Also, a tagged-tracer version of MOZART4, which tracks C16O and C18O from each region and each source, was developed to assess their contributions to the atmosphere efficiently. Based on the nine-year simulation results, we analyze the influence of each CO source on the isotopic signature and the concentration. In particular, the evaluations focus on the oxygen isotope of CO (δ18O), which has not been extensively studied yet. To validate the model performance, CO concentrations and isotopic signatures measured at MPI, NIWA and our lab are compared to the modeled results. MOZART4 reproduced the observational data fairly well, especially in the mid- to high-latitude Northern Hemisphere. Bayesian inversion techniques have been used to estimate the global CO budget by combining observed and modeled CO concentrations. However, previous studies show significant differences in their estimates of CO source strengths. Because, in addition to the CO mixing ratio, isotopic signatures are independent tracers that contain source information, jointly applying the isotope and concentration information is expected to provide more precise optimization results in CO budget estimation. Our accumulated long-term CO isotope measurement data also lend more confidence to the inversions. Besides the benefit of adding isotope data to the inverse modeling, each isotope of CO (oxygen and carbon) offers another advantage in the top-down estimation of the CO budget. δ18O and δ13C have distinctive isotopic signatures for specific sources; combustion sources such as fossil fuel use show clearly different δ18O values from other natural sources, and the methane source can be easily separated by using δ13C information. Therefore, inversions of the two major sources of CO respond with different sensitivities to the different isotopes. To maximize the strengths of using isotope data in the inverse modeling analysis, various coupling schemes combining [CO], δ18O and δ13C have been investigated to enhance the credibility of the CO budget optimization.
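
    The joint use of concentration and isotope constraints can be illustrated with a minimal linear Bayesian (optimal-estimation) update for two source strengths; the Jacobian, prior, and error covariances below are fabricated and stand in for the full MOZART4-based inversion.

```python
import numpy as np

# Minimal linear Bayesian (optimal-estimation) update for two CO source
# strengths (e.g. fossil fuel and biomass burning), constrained jointly by
# a mixing-ratio observation and an isotope-signature observation.  The
# Jacobian H, prior, and covariances are fabricated for illustration.
x_a = np.array([500.0, 400.0])            # prior source strengths (Tg/yr)
S_a = np.diag([150.0**2, 150.0**2])       # prior error covariance

# Row 1: both sources add CO; row 2: they contribute with different weights
# to the isotope constraint (distinct source signatures).
H = np.array([[0.10, 0.10],
              [0.14, 0.04]])
y = np.array([95.0, 82.0])                # "observations" (arbitrary units)
S_e = np.diag([4.0**2, 4.0**2])           # observation error covariance

# Kalman-type gain, posterior estimate, and posterior covariance.
K = S_a @ H.T @ np.linalg.inv(H @ S_a @ H.T + S_e)
x_hat = x_a + K @ (y - H @ x_a)
S_hat = (np.eye(2) - K @ H) @ S_a

print("posterior sources (Tg/yr):", np.round(x_hat, 1))
print("posterior 1-sigma (Tg/yr):", np.round(np.sqrt(np.diag(S_hat)), 1))
```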

  10. Joint Application of Concentrations and Isotopic Signatures to Investigate the Global Atmospheric Carbon Monoxide Budget: Inverse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Park, K.; Mak, J. E.; Emmons, L. K.

    2008-12-01

    Carbon monoxide is not only an important component for determining the atmospheric oxidizing capacity but also a key trace gas in the atmospheric chemistry of the Earth's background environment. The global CO cycle and its change are closely related to both the change of the CO mixing ratio and the change of source strength. Previously, most top-down estimation techniques for the global CO budget have applied only the concentrations of CO. Since CO from certain sources has a unique isotopic signature, its isotopes provide additional information to constrain its sources. Thus, coupling the concentration and isotope-fraction information makes it possible to tightly constrain the CO flux from each source and allows better estimates of the global CO budget. MOZART4 (Model for Ozone And Related chemical Tracers), a 3-D global chemical transport model developed at NCAR, MPI for Meteorology and NOAA/GFDL, is used to simulate the global CO concentration and its isotopic signature. Also, a tagged-tracer version of MOZART4, which tracks C16O and C18O from each region and each source, was developed to assess their contributions to the atmosphere efficiently. Based on the nine-year simulation results, we analyze the influence of each CO source on the isotopic signature and the concentration. In particular, the evaluations focus on the oxygen isotope of CO (δ18O), which has not been extensively studied yet. To validate the model performance, CO concentrations and isotopic signatures measured at MPI, NIWA and our lab are compared to the modeled results. MOZART4 reproduced the observational data fairly well, especially in the mid- to high-latitude Northern Hemisphere. Bayesian inversion techniques have been used to estimate the global CO budget by combining observed and modeled CO concentrations. However, previous studies show significant differences in their estimates of CO source strengths. Because, in addition to the CO mixing ratio, isotopic signatures are independent tracers that contain source information, jointly applying the isotope and concentration information is expected to provide more precise optimization results in CO budget estimation. Our accumulated long-term CO isotope measurement data also lend more confidence to the inversions. Besides the benefit of adding isotope data to the inverse modeling, each isotope of CO (oxygen and carbon) offers another advantage in the top-down estimation of the CO budget. δ18O and δ13C have distinctive isotopic signatures for specific sources; combustion sources such as fossil fuel use show clearly different δ18O values from other natural sources, and the methane source can be easily separated by using δ13C information. Therefore, inversions of the two major sources of CO respond with different sensitivities to the different isotopes. To maximize the strengths of using isotope data in the inverse modeling analysis, various coupling schemes combining [CO], δ18O and δ13C have been investigated to enhance the credibility of the CO budget optimization.

  11. Source Parameter Estimation using the Second-order Closure Integrated Puff Model

    DTIC Science & Technology

    The sensor measurements are categorized as triggered and non-triggered based on the recorded concentration measurements and a threshold...concentration value. Using each measured value, sources of adjoint material are created from the triggered and non-triggered sensors, and the adjoint transport...equations are solved to predict the adjoint concentration fields. The adjoint source strength is inversely proportional to the concentration measurement

  12. Earthquake source properties from pseudotachylite

    USGS Publications Warehouse

    Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan

    2016-01-01

    The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength, nor attributed to a particular physical mechanism. In contrast, paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006], as have been observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate-sized earthquakes of the Gole Larghe fault zone in the Italian Alps, where the dynamic shear strength is well-constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa. More generally, these events have inferred earthquake source parameters that are rare; for instance, only a few percent of the global earthquake population has stress drops as large, unless fracture energy is routinely greater than existing models allow, pseudotachylite is not representative of the shear strength during the earthquake that generated it, or the strength excess is larger than we have allowed.
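
    The seismological stress drops referred to above are commonly derived from the corner frequency under the Madariaga [1976] model; a minimal version of that arithmetic is sketched below with illustrative inputs.

```python
import numpy as np

def madariaga_stress_drop(M0_Nm, fc_Hz, beta_m_s=3500.0, k=0.21):
    """Stress drop (Pa) from seismic moment and S-wave corner frequency.

    Uses the circular-crack relations r = k * beta / fc (k ~ 0.21 for
    S waves in the Madariaga model) and delta_sigma = (7/16) * M0 / r^3.
    """
    r = k * beta_m_s / fc_Hz
    return 7.0 / 16.0 * M0_Nm / r**3

# Illustrative Mw ~ 4 event (M0 ~ 1.3e15 N m) with a 2 Hz corner frequency.
print(f"stress drop ~ {madariaga_stress_drop(1.3e15, 2.0) / 1e6:.1f} MPa")
```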

  13. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America

  14. The Width of a Solar Coronal Mass Ejection and the Source of the Driving Magnetic Explosion

    NASA Technical Reports Server (NTRS)

    Moore, Ronald L.; Sterling, Alphonse C.; Suess, Steven T.

    2007-01-01

    We show that the strength of the magnetic field in the area covered by the flare arcade following a CME-producing ejective solar eruption can be estimated from the final angular width of the CME in the outer corona and the final angular width of the flare arcade. We assume (1) the flux-rope plasmoid ejected from the flare site becomes the interior of the CME plasmoid, (2) in the outer corona (R greater than 2R(sub Sun)) the CME is roughly a spherical plasmoid with legs shaped like a light bulb, and (3) beyond some height in or below the outer corona the CME plasmoid is in lateral pressure balance with the surrounding magnetic field. The strength of the nearly radial magnetic field in the outer corona is estimated from the radial component of the interplanetary magnetic field measured by Ulysses. We apply this model to three well-observed CMEs that exploded from flare regions of extremely different size and magnetic setting. One of these CMEs is an over-and-out CME that exploded from a laterally far offset compact ejective flare. In each event, the estimated source-region field strength is appropriate for the magnetic setting of the flare. This agreement (1) indicates that CMEs are propelled by the magnetic field of the CME plasmoid pushing against the surrounding magnetic field, (2) supports the magnetic-arch-blowout scenario for over-and-out CMEs, and (3) shows that a CME's final angular width in the outer corona can be estimated from the amount of magnetic flux covered by the source-region flare arcade.

  15. Grip Strength as an Indicator of Health-Related Quality of Life in Old Age-A Pilot Study.

    PubMed

    Musalek, Christina; Kirchengast, Sylvia

    2017-11-24

    Over the last century, life expectancy has increased dramatically nearly all over the world. This marked absolute and relative increase in the old-aged component of the population has influenced not only population structure but also has important implications for individuals and public health services. The aim of the present pilot study was to examine the impact of physical well-being, assessed by hand grip strength, and of social factors, estimated by social contact frequency, on health-related quality of life among 22 men and 41 women ranging in age between 60 and 94 years. Physical well-being was estimated by hand grip strength; data concerning subjective well-being and health-related quality of life were collected through personal interviews based on the WHOQOL-BREF questionnaires. The number of offspring and intergenerational contacts were not significantly related to health-related quality of life, whereas social contacts with non-relatives and hand grip strength had a significant positive impact on health-related quality of life among old-aged men and women. Physical well-being, and in particular muscle strength estimated by grip strength, may increase health-related quality of life and is therefore an important source of well-being during old age. Grip strength may be used as an indicator of health-related quality of life.

  16. SP Response to a Line Source Infiltration for Characterizing the Vadose Zone: Forward Modeling and Inversion

    NASA Astrophysics Data System (ADS)

    Sailhac, P.

    2004-05-01

    Field estimation of soil water flux has direct application for water resource management. Standard hydrologic methods like tensiometry or TDR are often difficult to apply because of the heterogeneity of the subsurface, and non-invasive tools like ERT, NMR or GPR are limited to the estimation of the water content. Electrical Streaming Potential (SP) monitoring can provide a cost-effective tool to help estimate the nature of the hydraulic transfers (infiltration or evaporation) in the vadose zone. Indeed, this technique has improved during the last decade and has been shown to be a useful tool for quantitative groundwater flow characterization (see the poster of Marquis et al. for a review). We now report our latest developments on the possibility of using SP for estimating hydraulic parameters of unsaturated soils from in situ SP measurements during infiltration experiments. The proposed method consists of SP profiling perpendicular to a line source of steady-state infiltration. Analytic expressions for the forward modeling show a sensitivity to six parameters: the electrokinetic coupling parameter at saturation CS, the soil sorptive number α, the ratio of the constant source strength to the hydraulic conductivity at saturation q/KS, the soil effective water saturation prior to the infiltration experiment Se0, the Mualem parameter m, and the Archie law exponent n. In applications, all these parameters could be constrained by inverting electrokinetic data obtained during a series of infiltration experiments with varying source strength q.

  17. The Width of a Solar Coronal Mass Ejection and the Source of the Driving Magnetic Explosion: A Test of the Standard Scenario for CME Production

    NASA Technical Reports Server (NTRS)

    Moore, Ronald L.; Sterling, Alphonse C.; Suess, Steven T.

    2007-01-01

    We show that the strength (B(sub Flare)) of the magnetic field in the area covered by the flare arcade following a CME-producing ejective solar eruption can be estimated from the final angular width (Final Theta(sub CME)) of the CME in the outer corona and the final angular width (Theta(sub Flare)) of the flare arcade: B(sub Flare) approx. equals 1.4[Final Theta(sub CME)/Theta(sub Flare)](exp 2) G. We assume (1) the flux-rope plasmoid ejected from the flare site becomes the interior of the CME plasmoid; (2) in the outer corona (R > 2 (solar radius)) the CME is roughly a "spherical plasmoid with legs" shaped like a lightbulb; and (3) beyond some height in or below the outer corona the CME plasmoid is in lateral pressure balance with the surrounding magnetic field. The strength of the nearly radial magnetic field in the outer corona is estimated from the radial component of the interplanetary magnetic field measured by Ulysses. We apply this model to three well-observed CMEs that exploded from flare regions of extremely different size and magnetic setting. One of these CMEs was an over-and-out CME, that is, in the outer corona the CME was laterally far offset from the flare-marked source of the driving magnetic explosion. In each event, the estimated source-region field strength is appropriate for the magnetic setting of the flare. This agreement (1) indicates that CMEs are propelled by the magnetic field of the CME plasmoid pushing against the surrounding magnetic field; (2) supports the magnetic-arch-blowout scenario for over-and-out CMEs; and (3) shows that a CME's final angular width in the outer corona can be estimated from the amount of magnetic flux covered by the source-region flare arcade.
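
    As a quick numerical illustration of the scaling quoted above, the snippet below evaluates B(sub Flare) approx. equals 1.4[Final Theta(sub CME)/Theta(sub Flare)](exp 2) G for invented angular widths; the function name and the inputs are placeholders, not observed values.

      def flare_field_gauss(theta_cme_deg, theta_flare_deg):
          """Flare-region field strength (gauss) from the scaling in the abstract:
          B_Flare ~ 1.4 * (Final Theta_CME / Theta_Flare)**2 G."""
          return 1.4 * (theta_cme_deg / theta_flare_deg) ** 2

      # Illustrative widths only: a 60-degree CME fed by a 15-degree flare arcade.
      print(f"{flare_field_gauss(60.0, 15.0):.1f} G")   # ~22 G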

  18. Contribution of Changing Sources and Sinks to the Growth Rate of Atmospheric Methane Concentrations for the Last Two Decades

    NASA Technical Reports Server (NTRS)

    Matthews, Elaine; Walter, B.; Bogner, J.; Sarma, D.; Portmey, G.; Travis, Larry (Technical Monitor)

    2001-01-01

    In situ measurements of atmospheric methane concentrations begun in the early 1980s show decadal trends, as well as large interannual variations, in growth rate. Recent research indicates that while wetlands can explain several of the large growth anomalies for individual years, the decadal trend may be the combined effect of increasing sinks, due to increases in tropospheric OH, and stabilizing sources. We discuss new 20-year histories of annual, global source strengths for all major methane sources, i.e., natural wetlands, rice cultivation, ruminant animals, landfills, fossil fuels, and biomass burning. We also present estimates of the temporal pattern of the sink required to reconcile these sources and atmospheric concentrations over this time period. Analysis of the individual emission sources, together with model-derived estimates of the OH sink strength, indicates that the growth rate of atmospheric methane observed over the last 20 years can only be explained by a combination of changes in source emissions and an increasing tropospheric sink. Direct validation of the global sources and the terrestrial sink is not straightforward, in part because some sources/sinks are relatively small and diffuse (e.g., landfills and soil consumption), as well as because the atmospheric record integrates multiple and substantial sources and tropospheric sinks in regions such as the tropics. We discuss ways to develop and test criteria for rejecting and/or accepting a suite of scenarios for the methane budget.

  19. Continental sources of halocarbons and nitrous oxide

    NASA Technical Reports Server (NTRS)

    Prather, M. J.

    1985-01-01

    Estimates of continental sources of CFC-11, CFC-12, CCl4, CH3CCl3 and N2O are derived from the atmospheric lifetime experiment in Adrigole, Ireland, and anthropogenic emissions of CCl4 and N2O from Europe have been identified. Relative source strengths are consistent with global budgets for the halocarbons and N2O. Different industrial release patterns for halocarbons are observed for Europe, the western United States and Australia.

  20. Determining Source Strength of Semivolatile Organic Compounds using Measured Concentrations in Indoor Dust

    PubMed Central

    Shin, Hyeong-Moo; McKone, Thomas E.; Nishioka, Marcia G.; Fallin, M. Daniele; Croen, Lisa A.; Hertz-Picciotto, Irva; Newschaffer, Craig J.; Bennett, Deborah H.

    2014-01-01

    Consumer products and building materials emit a number of semivolatile organic compounds (SVOCs) in the indoor environment. Because indoor SVOCs accumulate in dust, we explore the use of dust to determine source strength and report here on analysis of dust samples collected in 30 U.S. homes for six phthalates, four personal care product ingredients, and five flame retardants. We then use a fugacity-based indoor mass-balance model to estimate the whole house emission rates of SVOCs that would account for the measured dust concentrations. Di-2-ethylhexyl phthalate (DEHP) and di-iso-nonyl phthalate (DiNP) were the most abundant compounds in these dust samples. On the other hand, the estimated emission rate of diethyl phthalate (DEP) is the largest among phthalates, although its dust concentration is over two orders of magnitude smaller than DEHP and DiNP. The magnitude of the estimated emission rate that corresponds to the measured dust concentration is found to be inversely correlated with the vapor pressure of the compound, indicating that dust concentrations alone cannot be used to determine which compounds have the greatest emission rates. The combined dust-assay modeling approach shows promise for estimating indoor emission rates for SVOCs. PMID:24118221

  1. The Width of a CME and the Source of the Driving Magnetic Explosion

    NASA Technical Reports Server (NTRS)

    Moore, R. L.; Sterling, A. C.; Suess, S. T.

    2007-01-01

    We show that the strength of the magnetic field in the area covered by the flare arcade following a CME-producing ejective solar eruption can be estimated from the final angular width of the CME in the outer corona and the final angular width of the flare arcade. We assume (1) the flux-rope plasmoid ejected from the flare site becomes the interior of the CME plasmoid, (2) in the outer corona the CME is roughly a "spherical plasmoid with legs" shaped like a light bulb, and (3) beyond some height in or below the outer corona the CME plasmoid is in lateral pressure balance with the surrounding magnetic field. The strength of the nearly radial magnetic field in the outer corona is estimated from the radial component of the interplanetary magnetic field measured by Ulysses. We apply this model to three well-observed CMEs that exploded from flare regions of extremely different size and magnetic setting. In each event, the estimated source-region field strength is appropriate for the magnetic setting of the flare. This agreement indicates via the model that CMEs (1) are propelled by the magnetic field of the CME plasmoid pushing against the surrounding magnetic field, and (2) can explode from flare regions that are laterally far offset from the radial path of the CME in the outer corona.

  2. Human Systems Integration (HSI) in Acquisition. HSI Domain Guide

    DTIC Science & Technology

    2009-08-01

    job simulation that includes posture data, force parameters, and anthropometry. Output includes the percentage of men and women who have the strength...

  3. Temporal Changes in Stress Drop, Frictional Strength, and Earthquake Size Distribution in the 2011 Yamagata-Fukushima, NE Japan, Earthquake Swarm, Caused by Fluid Migration

    NASA Astrophysics Data System (ADS)

    Yoshida, Keisuke; Saito, Tatsuhiko; Urata, Yumi; Asano, Youichi; Hasegawa, Akira

    2017-12-01

    In this study, we investigated temporal variations in stress drop and b-value in the earthquake swarm that occurred at the Yamagata-Fukushima border, NE Japan, after the 2011 Tohoku-Oki earthquake. In this swarm, frictional strengths were estimated to have changed with time due to fluid diffusion. We first estimated the source spectra for 1,800 earthquakes with 2.0 ≤ MJMA < 3.0, by correcting the site-amplification and attenuation effects determined using both S waves and coda waves. We then determined corner frequency assuming the omega-square model and estimated stress drop for 1,693 earthquakes. We found that the estimated stress drops tended to have values of 1-4 MPa and that stress drops significantly changed with time. In particular, the estimated stress drops were very small at the beginning, and increased with time for 50 days. Similar temporal changes were obtained for b-value; the b-value was very high (b ≈ 2) at the beginning, and decreased with time, becoming approximately constant (b ≈ 1) after 50 days. Patterns of temporal changes in stress drop and b-value were similar to the patterns for frictional strength and earthquake occurrence rate, suggesting that the change in frictional strength due to migrating fluid not only triggered the swarm activity but also affected earthquake and seismicity characteristics. The estimated high Q^-1 value, as well as the hypocenter migration, supports the presence of fluid, and its role in the generation and physical characteristics of the swarm.

  4. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yields building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the predictions of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors of source location and releasing strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
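
    A minimal sketch of the kind of Bayesian update the method above performs, assuming a linear source-receptor relationship; the sensitivity matrix, sensor values, noise level, and flat priors are invented placeholders rather than anything from the wind-tunnel case or the LES model.

      import numpy as np

      # Hypothetical setup: 3 sensors, 4 candidate source locations.
      # H[i, j] = concentration at sensor i per unit release at location j
      # (in a real application this would come from adjoint/LES modelling).
      H = np.array([[0.8, 0.1, 0.3, 0.05],
                    [0.2, 0.9, 0.4, 0.10],
                    [0.1, 0.2, 0.7, 0.60]])
      c_obs = np.array([4.1, 1.2, 0.9])        # measured concentrations (arbitrary units)
      sigma = 0.3                               # assumed measurement noise std

      strengths = np.linspace(0.1, 10.0, 200)   # candidate release rates
      log_post = np.full((H.shape[1], strengths.size), -np.inf)

      for j in range(H.shape[1]):               # loop over candidate source locations
          for k, q in enumerate(strengths):     # loop over candidate release strengths
              resid = c_obs - q * H[:, j]
              log_post[j, k] = -0.5 * np.sum((resid / sigma) ** 2)  # Gaussian likelihood, flat prior

      post = np.exp(log_post - log_post.max())
      post /= post.sum()
      j_best, k_best = np.unravel_index(np.argmax(post), post.shape)
      print(f"MAP source location index: {j_best}, strength: {strengths[k_best]:.2f}")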

  5. Noise Source Identification in a Reverberant Field Using Spherical Beamforming

    NASA Astrophysics Data System (ADS)

    Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang

    Identification of noise sources, their locations and strengths, has received great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of sound coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to an exterior sound field, reflections are added to the sound field, so the source location estimated by conventional methods may contain unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.

  6. Acoustic Source Analysis of Magnetoacoustic Tomography With Magnetic Induction for Conductivity Gradual-Varying Tissues.

    PubMed

    Wang, Jiawei; Zhou, Yuqi; Sun, Xiaodong; Ma, Qingyu; Zhang, Dong

    2016-04-01

    As a multiphysics imaging approach, magnetoacoustic tomography with magnetic induction (MAT-MI) works on the physical mechanism of magnetic excitation, acoustic vibration, and transmission. Based on the theoretical analysis of the source vibration, numerical studies are conducted to simulate the pathological changes of tissues for a single-layer cylindrical conductivity gradual-varying model and estimate the strengths of sources inside the model. The results suggest that the inner source is generated by the product of the conductivity and the curl of the induced electric intensity inside a conductivity-homogeneous medium, while the boundary source is produced by the cross product of the gradient of conductivity and the induced electric intensity at a conductivity boundary. For a biological tissue with low conductivity, the strength of the boundary source is much higher than that of the inner source only when the size of the conductivity transition zone is small. In this case, the tissue can be treated as a conductivity abrupt-varying model, ignoring the influence of the inner source. Otherwise, the contributions of inner and boundary sources should be evaluated together quantitatively. This study provides a basis for further study of precise image reconstruction in MAT-MI for pathological tissues.

  7. Importance and challenges of measuring intrinsic foot muscle strength

    PubMed Central

    2012-01-01

    Background Intrinsic foot muscle weakness has been implicated in a range of foot deformities and disorders. However, to establish a relationship between intrinsic muscle weakness and foot pathology, an objective measure of intrinsic muscle strength is needed. The aim of this review was to provide an overview of the anatomy and role of intrinsic foot muscles, implications of intrinsic weakness and evaluate the different methods used to measure intrinsic foot muscle strength. Method Literature was sourced from database searches of MEDLINE, PubMed, SCOPUS, Cochrane Library, PEDro and CINAHL up to June 2012. Results There is no widely accepted method of measuring intrinsic foot muscle strength. Methods to estimate toe flexor muscle strength include the paper grip test, plantar pressure, toe dynamometry, and the intrinsic positive test. Hand-held dynamometry has excellent interrater and intrarater reliability and limits toe curling, which is an action hypothesised to activate extrinsic toe flexor muscles. However, it is unclear whether any method can actually isolate intrinsic muscle strength. Also most methods measure only toe flexor strength and other actions such as toe extension and abduction have not been adequately assessed. Indirect methods to investigate intrinsic muscle structure and performance include CT, ultrasonography, MRI, EMG, and muscle biopsy. Indirect methods often discriminate between intrinsic and extrinsic muscles, but lack the ability to measure muscle force. Conclusions There are many challenges to accurately measure intrinsic muscle strength in isolation. Most studies have measured toe flexor strength as a surrogate measure of intrinsic muscle strength. Hand-held dynamometry appears to be a promising method of estimating intrinsic muscle strength. However, the contribution of extrinsic muscles cannot be excluded from toe flexor strength measurement. Future research should clarify the relative contribution of intrinsic and extrinsic muscles during intrinsic foot muscle strength testing. PMID:23181771

  8. Nitrogen oxides in the troposphere - Global and regional budgets

    NASA Technical Reports Server (NTRS)

    Logan, J. A.

    1983-01-01

    The cycle of nitrogen oxides in the troposphere is discussed from both global and regional perspectives. Global sources for NOx are estimated to be of magnitude 50 (± 25) × 10^12 g N/yr. Nitrogen oxides are derived from combustion of fossil fuels (40 percent) and biomass burning (25 percent) with the balance from lightning and microbial activity in soils. Estimates for the rate of removal of NOx based on recent atmospheric and precipitation chemistry data are consistent with global source strengths derived here. Industrial and agricultural activities provide approximately two thirds of the global source for NOx. In North America, sources from combustion of fossil fuels exceed natural sources by a factor of 3-13. Wet deposition removes about one third of the combustion source of NOx over North America, while dry deposition removes a similar amount. The balance is exported from the continent. Deposition of nitrate in precipitation over eastern Canada and the western Atlantic is clearly influenced by sources of NOx in the eastern United States.

  9. Interpretation of the MEG-MUSIC scan in biomagnetic source localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Lewis, P.S.; Leahy, R.M.

    1993-09-01

    MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak at unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks which are less than unity. Previously we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyzed the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.

  10. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
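
    For readers unfamiliar with the regularization step discussed above, here is a minimal sketch of a Tikhonov-regularized minimum norm estimate applied to a toy problem; the lead-field matrix, data, and lambda values are random placeholders meant only to show how the choice of lambda scales the reconstructed source amplitudes.

      import numpy as np

      rng = np.random.default_rng(0)
      n_sensors, n_sources, n_times = 30, 200, 50

      L = rng.standard_normal((n_sensors, n_sources))   # toy lead-field (gain) matrix
      Y = rng.standard_normal((n_sensors, n_times))     # toy sensor time series

      def mne_inverse(L, Y, lam):
          """Tikhonov-regularized minimum norm estimate:
          X = L.T @ inv(L @ L.T + lam * I) @ Y"""
          gram = L @ L.T + lam * np.eye(L.shape[0])
          return L.T @ np.linalg.solve(gram, Y)

      for lam in (1e-2, 1e0, 1e2):   # heavier regularization -> smaller source amplitudes
          X = mne_inverse(L, Y, lam)
          print(f"lambda={lam:>6}: mean |source| = {np.abs(X).mean():.3f}")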

  11. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  12. Kalman-filtered compressive sensing for high resolution estimation of anthropogenic greenhouse gas emissions from sparse measurements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia

    2013-09-01

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely-underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily-observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study if the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
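
    Here is a minimal sketch, under simplifying assumptions, of the sparse-reconstruction idea described above: recovering a sparse emission vector from an underdetermined set of concentration measurements with an L1-penalized least-squares fit. The transport matrix and emission field are synthetic, and scikit-learn's Lasso stands in for the paper's compressive-sensing solver.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)
      n_obs, n_cells = 40, 400                  # few observations, many emission grid cells

      A = rng.standard_normal((n_obs, n_cells))               # toy transport/footprint matrix
      x_true = np.zeros(n_cells)
      x_true[rng.choice(n_cells, size=8, replace=False)] = rng.uniform(1, 5, size=8)  # sparse sources
      y = A @ x_true + 0.01 * rng.standard_normal(n_obs)      # noisy concentration data

      lasso = Lasso(alpha=0.05, max_iter=50_000)               # L1 penalty promotes sparsity
      lasso.fit(A, y)
      x_hat = lasso.coef_

      print("true nonzero cells:     ", np.flatnonzero(x_true))
      print("recovered nonzero cells:", np.flatnonzero(np.abs(x_hat) > 0.1))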

  13. Ambient ammonia and related amines in and around a mink production facility

    USDA-ARS?s Scientific Manuscript database

    In areas where ammonia is a significant air pollutant or nuisance concern, knowledge of all potential source locations and strengths is paramount. The USEPA’s 2014 National Emissions Inventory estimates that nearly 80% of the national ammonia emissions are attributable to the agricultural sector an...

  14. Improvement of isometric dorsiflexion protocol for assessment of tibialis anterior muscle strength.

    PubMed

    Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh

    2015-01-01

    It is important to accurately estimate the electromyogram (EMG)/force relationship of the triceps surae (TS) muscle for detecting strength deficits of the tibialis anterior (TA) muscle. In the literature, protocols for recording the EMG and force of dorsiflexion have been described, and the necessity of immobilizing the ankle has been explained. However, there is significant variability in the results among researchers even though they report fixation of the ankle. We have determined that toe extension can cause significant variation in the dorsiflexion force and the EMG of TS, and that this can occur despite following the current guidelines, which require immobilizing the ankle. The results also show that there was a large increase in the variability of the force and the RMS of the EMG of TS when the toes were not strapped compared with when they were strapped. Thus, with the current guidelines, where there are no instructions regarding the necessity of strapping the toes, the EMG/force relationship of TS could be incorrect and give an inaccurate assessment of dorsiflexor TA strength. In summary:
    •Current methodology to estimate dorsiflexor TA strength with respect to TS activity, emphasizing ankle immobilization, is insufficient to prevent large variability in the measurements.
    •Toe extension during dorsiflexion was found to be one source of variability in estimating TA strength.
    •It is recommended that guidelines for recording force and EMG from the TA and TS muscles should require strapping of the toes along with immobilizing the ankle.

  15. An analysis of the carbon balance of the Arctic Basin from 1997 to 2006

    USGS Publications Warehouse

    McGuire, A.D.; Hayes, D.J.; Kicklighter, D.W.; Manizza, M.; Zhuang, Q.; Chen, M.; Follows, M.J.; Gurney, K.R.; McClelland, J.W.; Melillo, J.M.; Peterson, B.J.; Prinn, R.G.

    2010-01-01

    This study used several model-based tools to analyse the dynamics of the Arctic Basin between 1997 and 2006 as a linked system of land-ocean-atmosphere C exchange. The analysis estimates that terrestrial areas of the Arctic Basin lost 62.9 Tg C yr-1 and that the Arctic Ocean gained 94.1 Tg C yr-1. Arctic lands and oceans were a net CO2 sink of 108.9 Tg C yr-1, which is within the range of uncertainty in estimates from atmospheric inversions. Although both lands and oceans of the Arctic were estimated to be CO2 sinks, the land sink diminished in strength because of increased fire disturbance compared to previous decades, while the ocean sink increased in strength because of increased biological pump activity associated with reduced sea ice cover. Terrestrial areas of the Arctic were a net source of 41.5 Tg CH4 yr-1 that increased by 0.6 Tg CH4 yr-1 during the decade of analysis, a magnitude that is comparable with an atmospheric inversion of CH4. Because the radiative forcing of the estimated CH4 emissions is much greater than the CO2 sink, the analysis suggests that the Arctic Basin is a substantial net source of greenhouse gas forcing to the climate system.

  16. Air kerma strength characterization of a GZP6 Cobalt-60 brachytherapy source

    PubMed Central

    Toossi, Mohammad Taghi Bahreyni; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Taheri, Mojtaba; Layegh, Mohsen; Makhdoumi, Yasha; Meigooni, Ali Soleimani

    2010-01-01

    Background Task group number 40 (TG-40) of the American Association of Physicists in Medicine (AAPM) has recommended calibration of any brachytherapy source before its clinical use. The GZP6 afterloading brachytherapy unit is a 60Co high dose rate (HDR) system recently being used in some of the Iranian radiotherapy centers. Aim In this study the air kerma strength (AKS) of 60Co source number three of this unit was estimated by Monte Carlo simulation and in-air measurements. Materials and methods Simulation was performed by employing the MCNP-4C Monte Carlo code. Self-absorption of the source core and its capsule were taken into account when calculating air kerma strength. In-air measurements were performed according to the multiple-distance method, where a specially designed jig and a 0.6 cm3 Farmer type ionization chamber were used for the measurements. Monte Carlo simulation, in-air measurement and GZP6 treatment planning results were compared for primary air kerma strength (as of November 8, 2005). Results The Monte Carlo calculated and in-air measured air kerma strengths were 17240.01 μGy m2 h−1 and 16991.83 μGy m2 h−1, respectively. The value provided by the GZP6 treatment planning system (TPS) was 15355 μGy m2 h−1. Conclusion The calculated and measured AKS values are in good agreement. Calculated-TPS and measured-TPS AKS values are also in agreement within the uncertainties related to our calculation, measurements and those certified by the GZP6 manufacturer. Considering the uncertainties, the TPS value for AKS is validated by our calculations and measurements; however, it carries a large uncertainty.
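
    As background on the multiple-distance idea mentioned above, here is a rough sketch of one common variant: fitting in-air chamber readings to an inverse-square model with a positional offset and a constant room-scatter term, whose fitted amplitude is proportional to the air kerma strength. The readings, offset, and scatter term are invented placeholders; this is not the authors' procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical in-air readings M(d) at several nominal source-chamber distances.
      d = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])   # cm (nominal)
      M = np.array([105.2, 47.1, 26.8, 17.3, 12.2, 9.1, 7.1])    # chamber readings, made up

      def model(d, A, c, s):
          """Inverse-square response with distance offset c and constant scatter s:
          M(d) = A / (d + c)**2 + s"""
          return A / (d + c) ** 2 + s

      (A, c, s), _ = curve_fit(model, d, M, p0=(1e4, 0.0, 0.0))
      print(f"fitted amplitude A = {A:.0f} (proportional to air kerma strength)")
      print(f"distance offset c = {c:.2f} cm, room scatter s = {s:.2f}")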

  17. Integrating Wireless Networking for Radiation Detection

    NASA Astrophysics Data System (ADS)

    Board, Jeremy; Barzilov, Alexander; Womble, Phillip; Paschal, Jon

    2006-10-01

    As wireless networking becomes more available, new applications are being developed for this technology. Our group has been studying the advantages of wireless networks of radiation detectors. With the prevalence of the IEEE 802.11 standard (``WiFi''), we have developed a wireless detector unit which is comprised of a 5 cm x 5 cm NaI(Tl) detector, amplifier and data acquisition electronics, and a WiFi transceiver. A server may communicate with the detector unit using a TCP/IP network connected to a WiFi access point. Special software on the server will perform radioactive isotope determination and estimate dose-rates. We are developing an enhanced version of the software which utilizes the receiver signal strength index (RSSI) to estimate source strengths and to create maps of radiation intensity.
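
    To make the idea of networked source-strength estimation concrete, here is a generic sketch (not the group's software; the detector positions, count rates, and background term are invented) that fits a point-source strength and location to count rates from several detectors using an inverse-square model.

      import numpy as np
      from scipy.optimize import least_squares

      # Hypothetical detector positions (m) and measured count rates (counts/s).
      det_xy = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0], [2.0, 6.0]])
      counts = np.array([520.0, 140.0, 150.0, 80.0, 60.0])
      background = 20.0                          # assumed constant background rate

      def residuals(p):
          """p = (x, y, S): source position and strength (counts/s at 1 m)."""
          x, y, S = p
          r2 = np.sum((det_xy - [x, y]) ** 2, axis=1)
          predicted = S / np.maximum(r2, 0.01) + background   # inverse-square falloff
          return predicted - counts

      fit = least_squares(residuals, x0=[2.0, 2.0, 100.0])
      x, y, S = fit.x
      print(f"estimated source at ({x:.2f}, {y:.2f}) m with strength {S:.0f} counts/s at 1 m")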

  18. Photometric redshifts for the next generation of deep radio continuum surveys - II. Gaussian processes and hybrid estimates

    NASA Astrophysics Data System (ADS)

    Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.

    2018-07-01

    Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei (AGNs) detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared (IR), X-ray, and optically selected AGNs - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGNs are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole coevolution and for cosmological studies.

  19. Gestational age estimation on United States livebirth certificates: a historical overview.

    PubMed

    Wier, Megan L; Pearl, Michelle; Kharrazi, Martin

    2007-09-01

    Gestational age on the birth certificate is the most common source of population-based gestational age data that informs public health policy and practice in the US. Last menstrual period is one of the oldest methods of gestational age estimation and has been on the US Standard Certificate of Live Birth since 1968. The 'clinical estimate of gestation', added to the standard certificate in 1989 to address missing or erroneous last menstrual period data, was replaced by the 'obstetric estimate of gestation' on the 2003 revision, which specifically precludes neonatal assessments. We discuss the strengths and weaknesses of these measures, potential research implications and challenges accompanying the transition to the obstetric estimate.

  20. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
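
    A minimal sketch of the core deconvolution step implied above, assuming for illustration that the recorded waveform is the convolution of a known Green's function with an unknown source time function; the synthetic signals and the damped least-squares solver are placeholders for the authors' full finite-difference-based inversion.

      import numpy as np

      rng = np.random.default_rng(2)
      nt = 300
      dt = 0.01                                   # s
      t = np.arange(nt) * dt

      # Synthetic Green's function and "true" source time function (illustrative only).
      green = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 8 * t)
      stf_true = np.exp(-((t - 0.3) / 0.04) ** 2)

      # Forward model: lower-triangular convolution matrix G so that data = G @ stf.
      G = np.array([[green[i - j] if 0 <= i - j < nt else 0.0 for j in range(nt)]
                    for i in range(nt)])
      data = G @ stf_true + 0.01 * rng.standard_normal(nt)

      # Damped least-squares inversion for the source time function.
      lam = 0.1
      stf_est = np.linalg.solve(G.T @ G + lam * np.eye(nt), G.T @ data)
      print(f"peak of recovered STF at t = {t[np.argmax(stf_est)]:.2f} s (true: 0.30 s)")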

  1. Waveform inversion of acoustic waves for explosion yield estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K.; Rodgers, A. J.

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.

  2. Bayesian Immunological Model Development from the Literature: Example Investigation of Recent Thymic Emigrants†

    PubMed Central

    Holmes, Tyson H.; Lewis, David B.

    2014-01-01

    Bayesian estimation techniques offer a systematic and quantitative approach for synthesizing data drawn from the literature to model immunological systems. As detailed here, the practitioner begins with a theoretical model and then sequentially draws information from source data sets and/or published findings to inform estimation of model parameters. Options are available to weigh these various sources of information differentially per objective measures of their corresponding scientific strengths. This approach is illustrated in depth through a carefully worked example for a model of decline in T-cell receptor excision circle content of peripheral T cells during development and aging. Estimates from this model indicate that 21 years of age is plausible for the developmental timing of mean age of onset of decline in T-cell receptor excision circle content of peripheral T cells. PMID:25179832

  3. Estimation of muscle strength during motion recognition using multichannel surface EMG signals.

    PubMed

    Nagata, Kentaro; Nakano, Takemi; Magatani, Kazushige; Yamada, Masafumi

    2008-01-01

    The use of kinesiological electromyography is established as an evaluation tool for various kinds of applied research, and the surface electromyogram (SEMG) has been widely used as a control source for human interfaces such as myoelectric prosthetic hands (we call these 'SEMG interfaces'). It is desirable to be able to control SEMG interfaces with the same feeling as body movement. Existing SEMG interfaces mainly focus on how to achieve accurate recognition of the intended movement. However, detecting muscular strength and reducing the number of electrodes are also important factors in controlling them. Therefore, our objective in this study is the development of an estimation method for muscular strength that maintains the accuracy of hand motion recognition, so that the measured power can be reflected in a controlled object. Although muscular strength can be evaluated by various methods, in this study a grasp force index was applied to evaluate it. To achieve our objective, we directed our attention to measuring all of the valuable information in the SEMG. This work proposes the application of two simple linear models and a method for selecting an optimal electrode configuration to use them effectively. Our system required four SEMG measurement electrodes whose locations differed for every subject depending on individual characteristics; these were selected from a 96-channel multi-electrode array using the Monte Carlo method. The experimental results for six normal subjects indicate that recognition of the four motions was perfect and that the estimated grasp force fit well with the actual measurements.

  4. Situational Strength Cues from Social Sources at Work: Relative Importance and Mediated Effects

    PubMed Central

    Alaybek, Balca; Dalal, Reeshad S.; Sheng, Zitong; Morris, Alexander G.; Tomassetti, Alan J.; Holland, Samantha J.

    2017-01-01

    Situational strength is considered one of the most important situational forces at work because it can attenuate the personality–performance relationship. Although organizational scholars have studied the consequences of situational strength, they have paid little attention to its antecedents. To address this gap, the current study focused on situational strength cues from different social sources as antecedents of overall situational strength at work. Specifically, we examined how employees combine situational strength cues emanating from three social sources (i.e., coworkers, the immediate supervisor, and top management). Based on field theory, we hypothesized that the effect of situational strength from coworkers and immediate supervisors (i.e., proximal sources of situational strength) on employees' perceptions of overall situational strength on the job would be greater than the effect of situational strength from the top management (i.e., the distal source of situational strength). We also hypothesized that the effect of situational strength from the distal source would be mediated by the effects of situational strength from the proximal sources. Data from 363 full-time employees were collected at two time points with a cross-lagged panel design. The former hypothesis was supported for one of the two situational strength facets studied. The latter hypothesis was fully supported. PMID:28928698

  5. FR II radio galaxies at low frequencies - I. Morphology, magnetic field strength and energetics.

    PubMed

    Harwood, Jeremy J; Croston, Judith H; Intema, Huib T; Stewart, Adam J; Ineson, Judith; Hardcastle, Martin J; Godfrey, Leith; Best, Philip; Brienza, Marisa; Heesen, Volker; Mahony, Elizabeth K; Morganti, Raffaella; Murgia, Matteo; Orrú, Emanuela; Röttgering, Huub; Shulevski, Aleksandar; Wise, Michael W

    2016-06-01

    Due to their steep spectra, low-frequency observations of Fanaroff-Riley type II (FR II) radio galaxies potentially provide key insights in to the morphology, energetics and spectrum of these powerful radio sources. However, limitations imposed by the previous generation of radio interferometers at metre wavelengths have meant that this region of parameter space remains largely unexplored. In this paper, the first in a series examining FR IIs at low frequencies, we use LOFAR (LOw Frequency ARray) observations between 50 and 160 MHz, along with complementary archival radio and X-ray data, to explore the properties of two FR II sources, 3C 452 and 3C 223. We find that the morphology of 3C 452 is that of a standard FR II rather than of a double-double radio galaxy as had previously been suggested, with no remnant emission being observed beyond the active lobes. We find that the low-frequency integrated spectra of both sources are much steeper than expected based on traditional assumptions and, using synchrotron/inverse-Compton model fitting, show that the total energy content of the lobes is greater than previous estimates by a factor of around 5 for 3C 452 and 2 for 3C 223. We go on to discuss possible causes of these steeper-than-expected spectra and provide revised estimates of the internal pressures and magnetic field strengths for the intrinsically steep case. We find that the ratio between the equipartition magnetic field strengths and those derived through synchrotron/inverse-Compton model fitting remains consistent with previous findings and show that the observed departure from equipartition may in some cases provide a solution to the spectral versus dynamical age disparity.

  6. Estimation of Release History of Pollutant Source and Dispersion Coefficient of Aquifer Using Trained ANN Model

    NASA Astrophysics Data System (ADS)

    Srivastava, R.; Ayaz, M.; Jain, A.

    2013-12-01

    Knowledge of the release history of a groundwater pollutant source is critical in the prediction of the future trend of the pollutant movement and in choosing an effective remediation strategy. Moreover, for source sites which have undergone an ownership change, the estimated release history can be utilized for appropriate allocation of the costs of remediation among different parties who may be responsible for the contamination. Estimation of the release history with the help of concentration data is an inverse problem that becomes ill-posed because of the irreversible nature of the dispersion process. Breakthrough curves represent the temporal variation of pollutant concentration at a particular location, and contain significant information about the source and the release history. Several methodologies have been developed to solve the inverse problem of estimating the source and/or porous medium properties using the breakthrough curves as a known input. A common problem in the use of the breakthrough curves for this purpose is that, in most field situations, we have little or no information about the time of measurement of the breakthrough curve with respect to the time when the pollutant source becomes active. We develop an Artificial Neural Network (ANN) model to estimate the release history of a groundwater pollutant source through the use of breakthrough curves. It is assumed that the source location is known but the time dependent contaminant source strength is unknown. This temporal variation of the strength of the pollutant source is the output of the ANN model that is trained using the Levenberg-Marquardt algorithm utilizing synthetically generated breakthrough curves as inputs. A single hidden layer was used in the neural network and, to utilize just sufficient information and reduce the required sampling duration, only the upper half of the curve is used as the input pattern. The second objective of this work was to identify the aquifer parameters. An ANN model was developed to estimate the longitudinal and transverse dispersion coefficients following a philosophy similar to the one used earlier. Performance of the trained ANN model is evaluated for a 3-Dimensional case, first with perfect data and then with erroneous data with an error level up to 10 percent. Since the solution is highly sensitive to the errors in the input data, instead of using the raw data, we smoothen the upper half of the erroneous breakthrough curve by approximating it with a fourth order polynomial which is used as the input pattern for the ANN model. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and, in addition to minimizing the effect of uncertainties in the tail ends of the breakthrough curve, is capable of estimating both the release history and aquifer parameters reasonably well. Results for the case with erroneous data having different error levels demonstrate the practical applicability and robustness of the ANN models. It is observed that with increase in the error level, the correlation coefficient of the training, testing and validation regressions tends to decrease, although the value stays within acceptable limits even for reasonably large error levels.
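
    As a small illustration of the pre-processing step described above, the sketch below smooths the "upper half" of a noisy synthetic breakthrough curve (taken here as the portion above half of the peak concentration) with a fourth-order polynomial before it would be fed to the network; the curve shape and noise level are arbitrary placeholders, not data from the study.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic breakthrough curve (relative concentration vs. time) with ~10% noise.
      t = np.linspace(0, 10, 200)
      c_clean = np.exp(-((t - 6.0) / 2.0) ** 2)            # illustrative bell-shaped curve
      c_noisy = c_clean * (1 + 0.10 * rng.standard_normal(t.size))

      # Keep the "upper half" of the curve (points above half of the observed peak).
      mask = c_noisy >= 0.5 * c_noisy.max()
      t_up, c_up = t[mask], c_noisy[mask]

      # Fourth-order polynomial fit used as the smoothed input pattern.
      coeffs = np.polyfit(t_up, c_up, deg=4)
      c_smooth = np.polyval(coeffs, t_up)

      print(f"RMS difference, noisy vs. smoothed upper half: {np.std(c_up - c_smooth):.3f}")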

  7. Magnetic fields in Supernova Remnants and Pulsar-Wind Nebulae: Deductions from X-ray Observations

    NASA Astrophysics Data System (ADS)

    Reynolds, S. P.

    2016-06-01

    Magnetic field strengths B in synchrotron sources are notoriously difficult to measure. Simple arguments such as equipartition of energy can give values for which the total energy is a minimum, but there is no guarantee that Nature obeys it, or even if so, what particle population (just electrons? electrons plus ions?) should have an energy density comparable to that in magnetic field. However, the operation of synchrotron losses can provide additional information, if those losses are manifested in the synchrotron spectra as steepenings of the spectral-energy distribution above some characteristic frequency often called a "break" (though it is more typically a gradual curvature). A source of known age, if it has been accelerating particles continuously, will have such a break above the energy at which particle radiative lifetimes equal the source age, and this can give B. However, in spatially resolved sources such as supernova remnants (SNRs) and pulsar-wind nebulae (PWNe), systematic advection of particles, if at a known rate, gives a second measure of particle age to compare with radiative lifetimes. In most young SNRs, synchrotron X-rays make a contribution to the X-ray spectrum, and are usually found in thin rims at the remnant edges. If the rims are thin in the radial direction due to electron energy losses, a magnetic-field strength can be estimated. I present recent modeling of this process, along with models in which rims are thin due to decay of magnetic turbulence, and apply them to the remnants of SN 1006 and Tycho. In PWNe, outflows of relativistic plasma behind the pulsar wind termination shock are likely quite inhomogeneous, so magnetic-field estimates based on source lifetimes and assuming spatial uniformity can give misleading values for B. I shall discuss inhomogeneous PWN models and the effects they can have on B estimates.
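
    As a rough order-of-magnitude illustration of the age/break-frequency argument above, one standard scaling gives a synchrotron cooling time t ≈ 1.06 × 10^9 (B/μG)^(-3/2) (ν/GHz)^(-1/2) yr; the sketch below simply inverts this for B, with an age and break frequency chosen for illustration rather than taken from any particular remnant.

      def field_from_break(nu_break_ghz, age_yr):
          """Invert t_sync ~ 1.06e9 * B_uG**-1.5 * nu_GHz**-0.5 yr for B (microgauss).
          Order-of-magnitude scaling only; assumes continuous acceleration since t = 0."""
          return (1.06e9 / (age_yr * nu_break_ghz ** 0.5)) ** (2.0 / 3.0)

      # Illustrative numbers: a ~1000-yr-old remnant with a spectral break near 1e17 Hz (soft X-rays).
      nu_break_ghz = 1e17 / 1e9
      print(f"B ~ {field_from_break(nu_break_ghz, 1000.0):.0f} microgauss")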

  8. Significance of shock structure on supersonic jet mixing noise of axisymmetric nozzles

    NASA Astrophysics Data System (ADS)

    Kim, Chan M.; Krejsa, Eugene A.; Khavaran, Abbas

    1994-09-01

    One of the key technical elements in NASA's high speed research program is reducing the noise level to meet the federal noise regulation. The dominant noise source is associated with the supersonic jet discharged from the engine exhaust system. Whereas the turbulence mixing is largely responsible for the generation of the jet noise, a broadband shock-associated noise is also generated when the nozzle operates at conditions other than its design. For both mixing and shock noise components, because the source of the noise is embedded in the jet plume, one can expect that jet noise can be predicted from the jet flowfield computation. Mani et al. developed a unified aerodynamic/acoustic prediction scheme by applying an extension of Reichardt's aerodynamic model to compute turbulent shear stresses which are utilized in estimating the strength of the noise source. Although this method produces a fast and practical estimate of the jet noise, a modification by Khavaran et al. has led to an improvement in aerodynamic solution. The most notable feature in this work is that Reichardt's model is replaced with the computational fluid dynamics (CFD) solution of Reynolds-averaged Navier-Stokes equations. The major advantage of this work is that the essential, noise-related flow quantities such as turbulence intensity and shock strength can be better predicted. The predictions were limited to a shock-free design condition and the effect of shock structure on the jet mixing noise was not addressed. The present work is aimed at investigating this issue. Under imperfectly expanded conditions the existence of the shock cell structure and its interaction with the convecting turbulence structure may not only generate a broadband shock-associated noise but also change the turbulence structure, and thus the strength of the mixing noise source. Failure in capturing shock structures properly could lead to incorrect aeroacoustic predictions.

  9. Significance of shock structure on supersonic jet mixing noise of axisymmetric nozzles

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Krejsa, Eugene A.; Khavaran, Abbas

    1994-01-01

    One of the key technical elements in NASA's high speed research program is reducing the noise level to meet the federal noise regulation. The dominant noise source is associated with the supersonic jet discharged from the engine exhaust system. Whereas the turbulence mixing is largely responsible for the generation of the jet noise, a broadband shock-associated noise is also generated when the nozzle operates at off-design conditions. For both mixing and shock noise components, because the source of the noise is embedded in the jet plume, one can expect that jet noise can be predicted from the jet flowfield computation. Mani et al. developed a unified aerodynamic/acoustic prediction scheme by applying an extension of Reichardt's aerodynamic model to compute turbulent shear stresses, which are utilized in estimating the strength of the noise source. Although this method produces a fast and practical estimate of the jet noise, a modification by Khavaran et al. has led to an improvement in the aerodynamic solution. The most notable feature in this work is that Reichardt's model is replaced with the computational fluid dynamics (CFD) solution of the Reynolds-averaged Navier-Stokes equations. The major advantage of this work is that the essential, noise-related flow quantities such as turbulence intensity and shock strength can be better predicted. The predictions were limited to a shock-free design condition, and the effect of shock structure on the jet mixing noise was not addressed. The present work is aimed at investigating this issue. Under imperfectly expanded conditions, the existence of the shock cell structure and its interaction with the convecting turbulence structure may not only generate a broadband shock-associated noise but also change the turbulence structure, and thus the strength of the mixing noise source. Failure to capture the shock structure properly could lead to incorrect aeroacoustic predictions.

  10. Observation of the 63 micron (O I) emission line in the Orion and Omega Nebulae

    NASA Technical Reports Server (NTRS)

    Melnick, G.; Gull, G. E.; Harwit, M.

    1978-01-01

    The 63 micron fine-structure transition 3P1 yields 3P2 of neutral atomic oxygen was observed during a series of flights at an altitude of approximately 13.7 km. In the Orion Nebula (M42), the observed line strength was 8 x 10 to the minus 15 power watt cm(-2), which is estimated to be approximately 0.3% of the energy radiated at all wavelengths. For the Omega Nebula (M17), the line strength was 2.4 x 10 to the minus 15 power watt cm(-2), and the fraction of the total radiated power was slightly higher. These figures refer to a 4' x 6' field of view centered on the peak of infrared emission from each source. The uncertainty in the line strength is approximately 50% and is caused by variable water vapor absorption along the flight path of the airplane. The line position estimate is 63.2 (+0.1, -0.2) micron. The prime uncertainty is due to the uncertain position of the (O I) emitting regions in the field of view.

  11. Detection, localization and classification of multiple dipole-like magnetic sources using magnetic gradient tensor data

    NASA Astrophysics Data System (ADS)

    Gang, Yin; Yingtang, Zhang; Hongbo, Fan; Zhining, Li; Guoquan, Ren

    2016-05-01

    We have developed a method for automatic detection, localization and classification (DLC) of multiple dipole sources using magnetic gradient tensor data. First, we define modified tilt angles to estimate the approximate horizontal locations of the multiple dipole-like magnetic sources simultaneously and detect the number of magnetic sources using a fixed threshold. Secondly, based on the isotropy of the normalized source strength (NSS) response of a dipole, we obtain accurate horizontal locations of the dipoles. Then the vertical locations are calculated using magnitude magnetic transforms of the magnetic gradient tensor data. Finally, we invert for the magnetic moments of the sources using the measured magnetic gradient tensor data and the forward model. Synthetic and field data sets demonstrate the effectiveness and practicality of the proposed method.
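
    A minimal illustration of the NSS step, assuming the commonly used eigenvalue-based definition of normalized source strength (the authors' exact formulation, normalization, and thresholding are not reproduced here):

```python
import numpy as np

def normalized_source_strength(G):
    """Normalized source strength (NSS) from a 3x3 magnetic gradient tensor.

    Hedged sketch: with the tensor eigenvalues ordered l1 >= l2 >= l3, a common
    definition is NSS = sqrt(-l2**2 - l1*l3).  For a point dipole this quantity
    is approximately independent of the dipole orientation and falls off as
    1/r**4, which is why its peak over a survey grid marks the horizontal
    position of the source.
    """
    Gs = 0.5 * (np.asarray(G, dtype=float) + np.asarray(G, dtype=float).T)  # symmetrize measured tensor
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(Gs))[::-1]                       # eigenvalues, descending
    return np.sqrt(max(-l2**2 - l1 * l3, 0.0))                               # clip tiny negatives from noise
```

    Scanning this quantity over the survey grid and thresholding its local maxima would give the horizontal detections; the vertical-location and moment-inversion steps described in the abstract are not reproduced here.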

  12. Observations of SO in dark and molecular clouds

    NASA Technical Reports Server (NTRS)

    Rydbeck, O. E. H.; Hjalmarson, A.; Rydbeck, G.; Ellder, J.; Kollberg, E.; Irvine, W. M.

    1980-01-01

    The 1(0)-0(1) transition of SO at 30 GHz has been observed in several sources, including the first detection of sulfur monoxide in cold dark clouds without apparent internal energy sources. The SO transition appears to be an excellent tracer of structure in dark clouds, and the data support suggestions that self-absorption is important in determining emission profiles in such regions for large line-strength transitions. Column densities estimated from a comparison of the results for the two isotopic species indicate a high fractional abundance of SO in dark clouds.

  13. Low-frequency Target Strength and Abundance of Shoaling Atlantic Herring (Clupea harengus) in the Gulf of Maine during the Ocean Acoustic Waveguide Remote Sensing 2006 Experiment

    DTIC Science & Technology

    2010-01-01

    the northern flank of Georges Bank from east to west. As a result, annual stock estimates may be highly aliased in both time and space. One of the...transmitted signals from the source array for transmission loss and source level calibrations. Two calibrated acoustic targets made of air-filled rubber...region to the north is comprised of over 70 x 10(6) individuals. Concurrent localized imaging of fish aggregations at OAWRS-directed locations was

  14. In-duct identification of a rotating sound source with high spatial resolution

    NASA Astrophysics Data System (ADS)

    Heo, Yong-Ho; Ih, Jeong-Guon; Bodén, Hans

    2015-11-01

    To understand and reduce the flow noise generation from in-duct fluid machines, it is necessary to identify the acoustic source characteristics precisely. In this work, a source identification technique, which can identify the strengths and positions of the major sound radiators in the source plane, is studied for an in-duct rotating source. A linear acoustic theory including the effects of evanescent modes and source rotation is formulated based on the modal summation method, which is the underlying theory for the inverse source reconstruction. A validation experiment is conducted on a duct system excited by a loudspeaker in static and rotating conditions, at two different speeds, in the absence of flow. Due to the source rotation, the measured pressure spectra reveal the Doppler effect, and the frequency shift corresponds to the product of the circumferential mode order and the rotation speed. Amplitudes of participating modes are estimated at the shifted frequencies in the stationary reference frame, and the modal amplitude set including the effect of source rotation is collected to investigate the source behavior in the rotating reference frame. By using the estimated modal amplitudes, the near-field pressure is re-calculated and compared with the measured pressure. The obtained maximum relative error is about -25 and -10 dB for rotation speeds of 300 and 600 rev/min, respectively. The spatial distribution of acoustic source parameters is restored from the estimated modal amplitude set. The result clearly shows that the position and magnitude of the main sound source can be identified with high spatial resolution in the rotating reference frame.
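
    The stationary-frame frequency shift described above lends itself to a one-line check; the function name and the example numbers below are illustrative assumptions, not values from the experiment:

```python
def shifted_frequency(f_rot_frame_hz, m, rpm):
    """Stationary-frame frequency of a spinning-mode component.

    Hedged sketch of the relation quoted in the abstract: a component of
    circumferential order m on a source rotating at rpm rev/min appears in the
    stationary reference frame shifted by m times the rotation rate in rev/s.
    """
    return f_rot_frame_hz + m * rpm / 60.0

# e.g. an m = 2 component at 1000 Hz on a source spinning at 600 rev/min would
# be measured near 1000 + 2 * 10 = 1020 Hz (illustrative numbers only).
print(shifted_frequency(1000.0, 2, 600))
```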

  15. Optimized spectroscopic scheme for enhanced precision CO measurements with applications to urban source attribution

    NASA Astrophysics Data System (ADS)

    Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.

    2014-12-01

    Carbon monoxide (CO) is an urban pollutant generated by internal combustion engines which contributes to the formation of ground level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40% from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high frequency temporal sampling and increases the cost and complexity of the measurement system.
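
    As an illustration of the concentration-footprint attribution described above, a least-squares sketch under assumed units and toy numbers (the footprint matrix, enhancement values, and non-negativity constraint are illustrative choices, not the authors' setup):

```python
import numpy as np
from scipy.optimize import nnls

# Each row of F holds the concentration-footprint weights (concentration per
# unit emission rate) linking candidate upwind sources to one sensor
# observation; c holds the corresponding above-background CO enhancements.
# A non-negative least-squares fit then gives the source strengths.
F = np.array([[2.0e-3, 5.0e-4],
              [8.0e-4, 3.0e-3],
              [1.5e-3, 1.5e-3]])        # 3 observations x 2 candidate sources
c = np.array([1.2e-2, 1.0e-2, 1.1e-2])  # measured enhancements

strengths, residual = nnls(F, c)        # estimated source strengths (emission units per second)
print(strengths)
```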

  16. Inventory of Data Sources for Estimating Health Care Costs in the United States

    PubMed Central

    Lund, Jennifer L.; Yabroff, K. Robin; Ibuka, Yoko; Russell, Louise B.; Barnett, Paul G.; Lipscomb, Joseph; Lawrence, William F.; Brown, Martin L.

    2011-01-01

    Objective: To develop an inventory of data sources for estimating health care costs in the United States and provide information to aid researchers in identifying appropriate data sources for their specific research questions. Methods: We identified data sources for estimating health care costs using 3 approaches: (1) a review of the 18 articles included in this supplement, (2) an evaluation of websites of federal government agencies, non-profit foundations, and related societies that support health care research or provide health care services, and (3) a systematic review of the recently published literature. Descriptive information was abstracted from each data source, including sponsor, website, lowest level of data aggregation, type of data source, population included, cross-sectional or longitudinal data capture, source of diagnosis information, and cost of obtaining the data source. Details about the cost elements available in each data source were also abstracted. Results: We identified 88 data sources that can be used to estimate health care costs in the United States. Most data sources were sponsored by government agencies, national or nationally representative, and cross-sectional. About 40% were surveys, followed by administrative or linked administrative data, fee or cost schedules, discharges, and other types of data. Diagnosis information was available in most data sources through procedure or diagnosis codes, self-report, registry, or chart review. Cost elements included inpatient hospitalizations (42.0%), physician and other outpatient services (45.5%), outpatient pharmacy or laboratory (28.4%), out-of-pocket (22.7%), patient time and other direct nonmedical costs (35.2%), and wages (13.6%). About half were freely available for downloading or available for a nominal fee, and the cost of obtaining the remaining data sources varied by the scope of the project. Conclusions: Available data sources vary in population included, type of data source, scope, and accessibility, and have different strengths and weaknesses for specific research questions. PMID:19536009

  17. Evaluation of Long-term Performance of Enhanced Anaerobic Source Zone Bioremediation using mass flux

    NASA Astrophysics Data System (ADS)

    Haluska, A.; Cho, J.; Hatzinger, P.; Annable, M. D.

    2017-12-01

    Chlorinated ethene DNAPL source zones in groundwater act as potential long-term sources of contamination as they dissolve, yielding concentrations well above MCLs, posing an on-going public health risk. Enhanced bioremediation has been applied to treat many source zones with significant promise, but long-term sustainability of this technology has not been thoroughly assessed. This study evaluated the long-term effectiveness of enhanced anaerobic source zone bioremediation at chloroethene-contaminated sites to determine if the treatment prevented contaminant rebound and removed NAPL from the source zone. Long-term performance was evaluated based on achieving MCL-based contaminant mass fluxes in parent compound concentrations during different monitoring periods. Groundwater concentration versus time data were compiled for six sites, and post-remedial contaminant mass flux data was then measured using passive flux meters at wells both within and down-gradient of the source zone. Post-remedial mass flux data was then combined with pre-remedial water quality data to estimate pre-remedial mass flux. This information was used to characterize a DNAPL dissolution source strength function, such as the Power Law Model and the Equilibrium Streamtube model. The six sites characterized for this study were (1) Former Charleston Air Force Base, Charleston, SC; (2) Dover Air Force Base, Dover, DE; (3) Treasure Island Naval Station, San Francisco, CA; (4) Former Raritan Arsenal, Edison, NJ; (5) Naval Air Station, Jacksonville, FL; and, (6) Former Naval Air Station, Alameda, CA. Contaminant mass fluxes decreased for all the sites by the end of the post-treatment monitoring period and rebound was limited within the source zone. Post-remedial source strength function estimates suggest that decreases in contaminant mass flux will continue to occur at these sites, but a mass flux based on MCL levels may never be exceeded. Thus, site clean-up goals should be evaluated as order-of-magnitude reductions. Additionally, sites may require monitoring for a minimum of 5 years in order to sufficiently evaluate remedial performance. The study shows that enhanced anaerobic source zone bioremediation contributed to a modest reduction of source zone contaminant mass discharge and appears to have mitigated rebound of chlorinated ethenes.
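
    For readers unfamiliar with the source strength functions named above, here is a hedged sketch of one common form, the power-law model relating flux-averaged concentration to remaining source mass (parameter names, the depletion step, and all numbers are illustrative assumptions, not site-specific values from the study):

```python
def power_law_source_depletion(M0, C0, Q, gamma, t_end, dt=1.0):
    """Hedged sketch of a power-law DNAPL source strength function.

    Assumes the commonly used relation C(t)/C0 = (M(t)/M0)**gamma between the
    source-zone flux-averaged concentration C and the remaining mass M, with
    mass depleted by groundwater flow Q through the source zone:
    dM/dt = -Q * C(t).  The forward-Euler step and units are illustrative.
    """
    M, C, t = M0, C0, 0.0
    history = [(t, M, C)]
    while t < t_end and M > 0:
        M = max(M - Q * C * dt, 0.0)
        C = C0 * (M / M0) ** gamma if M > 0 else 0.0
        t += dt
        history.append((t, M, C))
    return history
```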

  18. Oscillator strengths of the Si II 181 nanometer resonance multiplet

    NASA Technical Reports Server (NTRS)

    Bergeson, S. D.; Lawler, J. E.

    1993-01-01

    We report Si II experimental log (gf)-values of -2.38(4) for the 180.801 nm line, of -2.18(4) for the 181.693 nm line, and of -3.29(5) for the 181.745 nm line, where the number in parentheses is the uncertainty in the last digit. The overall uncertainties (about 10 percent) include the 1 sigma random uncertainty (about 6 percent) and an estimate of the systematic uncertainty. The oscillator strengths are determined by combining branching fractions and radiative lifetimes. The branching fractions are measured using standard spectroradiometry on an optically thin source; the radiative lifetimes are measured using time-resolved laser-induced fluorescence.
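
    For reference, the standard conversions that link these measured quantities (textbook relations, not quoted from the paper): the transition probability follows from the branching fraction and upper-level lifetime as $A_{ul} = \mathrm{BF}_{ul}/\tau_u$, the absorption oscillator strength follows from $g_l f_{lu} = 1.499\times10^{-16}\,\lambda^2[\text{\AA}]\;g_u\,A_{ul}[\mathrm{s}^{-1}]$, and the tabulated quantity is $\log(gf) \equiv \log_{10}(g_l f_{lu})$.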

  19. The Educational Consequences of Teen Childbearing

    PubMed Central

    Kane, Jennifer B.; Morgan, S. Philip; Harris, Kathleen Mullan; Guilkey, David K.

    2013-01-01

    A huge literature shows that teen mothers face a variety of detriments across the life course, including truncated educational attainment. To what extent is this association causal? The estimated effects of teen motherhood on schooling vary widely, ranging from no discernible difference to 2.6 fewer years among teen mothers. The magnitude of educational consequences is therefore uncertain, despite voluminous policy and prevention efforts that rest on the assumption of a negative and presumably causal effect. This study adjudicates between two potential sources of inconsistency in the literature—methodological differences or cohort differences—by using a single, high-quality data source: namely, The National Longitudinal Study of Adolescent Health. We replicate analyses across four different statistical strategies: ordinary least squares regression; propensity score matching; and parametric and semiparametric maximum likelihood estimation. Results demonstrate educational consequences of teen childbearing, with estimated effects between 0.7 and 1.9 fewer years of schooling among teen mothers. We select our preferred estimate (0.7), derived from semiparametric maximum likelihood estimation, on the basis of weighing the strengths and limitations of each approach. Based on the range of estimated effects observed in our study, we speculate that variable statistical methods are the likely source of inconsistency in the past. We conclude by discussing implications for future research and policy, and recommend that future studies employ a similar multimethod approach to evaluate findings. PMID:24078155

  20. Sources of interference in item and associative recognition memory.

    PubMed

    Osth, Adam F; Dennis, Simon

    2015-04-01

    A powerful theoretical framework for exploring recognition memory is the global matching framework, in which a cue's memory strength reflects the similarity of the retrieval cues being matched against the contents of memory simultaneously. Contributions at retrieval can be categorized as matches and mismatches to the item and context cues, including the self match (match on item and context), item noise (match on context, mismatch on item), context noise (match on item, mismatch on context), and background noise (mismatch on item and context). We present a model that directly parameterizes the matches and mismatches to the item and context cues, which enables estimation of the magnitude of each interference contribution (item noise, context noise, and background noise). The model was fit within a hierarchical Bayesian framework to 10 recognition memory datasets that use manipulations of strength, list length, list strength, word frequency, study-test delay, and stimulus class in item and associative recognition. Estimates of the model parameters revealed at most a small contribution of item noise that varies by stimulus class, with virtually no item noise for single words and scenes. Despite the unpopularity of background noise in recognition memory models, background noise estimates dominated at retrieval across nearly all stimulus classes with the exception of high frequency words, which exhibited equivalent levels of context noise and background noise. These parameter estimates suggest that the majority of interference in recognition memory stems from experiences acquired before the learning episode. (c) 2015 APA, all rights reserved).

  1. Using Instrumental Variable (IV) Tests to Evaluate Model Specification in Latent Variable Structural Equation Models*

    PubMed Central

    Kirby, James B.; Bollen, Kenneth A.

    2009-01-01

    Structural Equation Modeling with latent variables (SEM) is a powerful tool for social and behavioral scientists, combining many of the strengths of psychometrics and econometrics into a single framework. The most common estimator for SEM is the full-information maximum likelihood estimator (ML), but there is continuing interest in limited information estimators because of their distributional robustness and their greater resistance to structural specification errors. However, the literature discussing model fit for limited information estimators for latent variable models is sparse compared to that for full information estimators. We address this shortcoming by providing several specification tests based on the 2SLS estimator for latent variable structural equation models developed by Bollen (1996). We explain how these tests can be used to not only identify a misspecified model, but to help diagnose the source of misspecification within a model. We present and discuss results from a Monte Carlo experiment designed to evaluate the finite sample properties of these tests. Our findings suggest that the 2SLS tests successfully identify most misspecified models, even those with modest misspecification, and that they provide researchers with information that can help diagnose the source of misspecification. PMID:20419054

  2. Millisecond radio spikes from the dwarf M flare star AD Leonis

    NASA Technical Reports Server (NTRS)

    Lang, K. R.; Willson, R. F.

    1986-01-01

    Arecibo radio observations of millisec bursts of radio signals at 1415 MHz from AD Leonis are reported. The observed burst had an ellipticity of 0.95, 50-100 percent circular polarization, and a flux density maximum of 30 mJy. The 50 sec burst featured five quasi-periodic oscillations with a mean periodicity of about 3.2 sec. A second, less intense burst that occurred 20 sec later was 100 percent circularly polarized. The area emitting the bursts covered an estimated 0.005 of the radius of AD Leonis and had an electron density of 6 billion/cu cm and a longitudinal magnetic field strength of 250 gauss, if the source was an electron-cyclotron maser. A coherent plasma source would require, for the first harmonic, an electron density of 20 billion/cu cm and a magnetic field much less than 500 gauss. A second harmonic of the plasma frequency would require an electron density of 6 billion/cu cm and a field strength much less than 250 gauss. The possibility that the source was periodic oscillations in coronal loops is discussed.
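
    A hedged back-of-the-envelope check of the densities and field strength quoted above, using the usual rule-of-thumb conversions (the coefficients and the harmonic assignments below are standard approximations and one reading of the abstract, not values taken from the paper):

```python
# Rule-of-thumb conversions:
#   plasma frequency  f_p [Hz]  ~ 8980 * sqrt(n_e [cm^-3])
#   gyrofrequency     f_B [MHz] ~ 2.8  * B [gauss]
f_obs_hz = 1415e6

n_e_fundamental  = (f_obs_hz / 8980.0) ** 2        # ~2e10 cm^-3 ("20 billion"), emission at f_p
n_e_2nd_harmonic = (f_obs_hz / 2 / 8980.0) ** 2    # ~6e9  cm^-3 ("6 billion"), emission at 2*f_p
B_maser_s2       = f_obs_hz / 1e6 / (2 * 2.8)      # ~250 G if the maser emits at the 2nd cyclotron harmonic

print(f"{n_e_fundamental:.1e}  {n_e_2nd_harmonic:.1e}  {B_maser_s2:.0f}")
```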

  3. Improvement of isometric dorsiflexion protocol for assessment of tibialis anterior muscle strength☆

    PubMed Central

    Siddiqi, Ariba; Arjunan, Sridhar P.; Kumar, Dinesh

    2015-01-01

    It is important to accurately estimate the electromyogram (EMG)/force relationship of the triceps surae (TS) muscle for detecting a strength deficit of the tibialis anterior (TA) muscle. In the literature, protocols for recording the EMG and force of dorsiflexion have been described, and the necessity for immobilizing the ankle has been explained. However, there is significant variability in the results among researchers even though they report fixation of the ankle. We have determined that toe extension can cause significant variation in the dorsiflexion force and EMG of TS, and this can occur despite following the current guidelines, which require immobilizing the ankle. The results also show that there was a large increase in the variability of the force and the RMS of EMG of TS when the toes were not strapped compared with when they were strapped. Thus, with the current guidelines, where there are no instructions regarding the necessity of strapping the toes, the EMG/force relationship of TS could be incorrect and give an inaccurate assessment of the dorsiflexor TA strength. In summary: • The current methodology for estimating the dorsiflexor TA strength with respect to TS activity, which emphasizes ankle immobilization, is insufficient to prevent large variability in the measurements. • Toe extension during dorsiflexion was found to be one source of variability in estimating the TA strength. • It is recommended that guidelines for recording force and EMG from TA and TS muscles should require the strapping of the toes along with the need for immobilizing the ankle. PMID:26150978

  4. Airborne measurements of biomass burning products over Africa

    NASA Technical Reports Server (NTRS)

    Helas, Guenter; Lobert, Juergen; Goldammer, Johann; Andreae, Meinrat O.; Lacaux, J. P.; Delmas, R.

    1994-01-01

    Ozone has been observed in elevated concentrations by satellites over areas hitherto believed to be 'background'. There is meteorological evidence that these ozone 'plumes' found over the Atlantic Ocean originate from biomass fires on the African continent. Therefore, we have investigated ozone and its assumed precursor compounds over African regions. The measurements revealed large photosmog layers at altitudes between 1.5 and 4 km. Here we will focus on some results of ozone mixing ratios obtained during the DECAFE 91/FOS experiment and estimate the relevance of biomass burning as a source by comparing the strength of this source to stratospheric input.

  5. Mathematical Fluid Dynamics of Store and Stage Separation

    DTIC Science & Technology

    2005-05-01

    coordinates; r = stretched inner radius; S(x) = effective source strength; Re = transition Reynolds number; t = time; r = reflection coefficient; T = temperature...wave drag due to lift integral has the same form as that due to thickness, the source strength of the equivalent body depends on streamwise derivatives...revolution in which the source strength S(x) is proportional to the x rate of change of cross-sectional area, the source strength depends on the streamwise

  6. Anthropometric Source Book. Volume 3: Annotated Bibliography of Anthropometry

    DTIC Science & Technology

    1978-07-01

    on Isometric Strength and Endurance, Blood Flow, and the Blood Pressure and Heart Rate Response to Isometric Exercise. TR 75 0086, Air Force Office... somatotype. In this report the subgroup statistics were combined to yield summary statistics arranged into more convenient tabulations for the...devices and techniques developed under the auspices of NASA for use in measuring and estimating human responses under zero-gravity conditions

  7. How to Detect the Location and Time of a Covert Chemical Attack: A Bayesian Approach

    DTIC Science & Technology

    2009-12-01

    Inverse Problems, Design and Optimization Symposium 2004. Rio de Janeiro, Brazil. Chan, R., and Yee, E. (1997). A simple model for the probability...sensor interpretation applications and has been successfully applied, for example, to estimate the source strength of pollutant releases in multi...coagulation, and second-order pollutant diffusion in sorption-desorption, are not linear. Furthermore, wide uncertainty bounds exist for several of

  8. The 26 December 2004 tsunami source estimated from satellite radar altimetry and seismic waves

    NASA Technical Reports Server (NTRS)

    Song, Tony Y.; Ji, Chen; Fu, L. -L.; Zlotnicki, Victor; Shum, C. K.; Yi, Yuchan; Hjorleifsdottir, Vala

    2005-01-01

    The 26 December 2004 Indian Ocean tsunami was the first earthquake tsunami of its magnitude to occur since the advent of both digital seismometry and satellite radar altimetry. Both have independently recorded the event from different physical aspects. The seismic data has then been used to estimate the earthquake fault parameters, and a three-dimensional ocean-general-circulation-model (OGCM) coupled with the fault information has been used to simulate the satellite-observed tsunami waves. Here we show that these two datasets consistently provide the tsunami source using independent methodologies of seismic waveform inversion and ocean modeling. Cross-examining the two independent results confirms that the slip function is the most important condition controlling the tsunami strength, while the geometry and the rupture velocity of the tectonic plane determine the spatial patterns of the tsunami.

  9. Adaptive Environmental Source Localization and Tracking with Unknown Permittivity and Path Loss Coefficients †

    PubMed Central

    Fidan, Barış; Umay, Ilknur

    2015-01-01

    Accurate signal-source and signal-reflector target localization tasks via mobile sensory units and wireless sensor networks (WSNs), including those for environmental monitoring via sensory UAVs, require precise knowledge of specific signal propagation properties of the environment, which are permittivity and path loss coefficients for the electromagnetic signal case. Thus, accurate estimation of these coefficients has significant importance for the accuracy of location estimates. In this paper, we propose a geometric cooperative technique to instantaneously estimate such coefficients, with details provided for received signal strength (RSS) and time-of-flight (TOF)-based range sensors. The proposed technique is integrated into a recursive least squares (RLS)-based adaptive localization scheme and an adaptive motion control law, to construct adaptive target localization and adaptive target tracking algorithms, respectively, that are robust to uncertainties in the aforementioned environmental signal propagation coefficients. The efficiency of the proposed adaptive localization and tracking techniques is both mathematically analysed and verified via simulation experiments. PMID:26690441
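
    A minimal sketch of the kind of propagation model whose coefficient the RSS case estimates on-line, assuming the common log-distance path-loss form (the reference distance, reference power, and all numbers are illustrative, not the authors' exact model):

```python
import numpy as np

# Assumed model: RSS(d) = RSS(d0) - 10 * n * log10(d / d0).  Given RSS readings
# at known ranges, the path-loss coefficient n follows from a least-squares
# fit; range estimates then invert the same relation.
def fit_path_loss_exponent(d, rss, d0=1.0, rss0=-40.0):
    x = -10.0 * np.log10(np.asarray(d) / d0)
    return float(np.linalg.lstsq(x[:, None], np.asarray(rss) - rss0, rcond=None)[0][0])

def range_from_rss(rss, n, d0=1.0, rss0=-40.0):
    return d0 * 10 ** ((rss0 - rss) / (10.0 * n))

n_hat = fit_path_loss_exponent([2.0, 5.0, 10.0], [-46.1, -54.0, -60.2])  # toy readings
print(n_hat, range_from_rss(-52.0, n_hat))                               # ~2.0 and ~4 m
```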

  10. Estimates of velocity structure and source depth using multiple P waves from aftershocks of the 1987 Elmore Ranch and Superstition Hills, California, earthquakes

    USGS Publications Warehouse

    Mori, J.

    1991-01-01

    Event record sections, which are constructed by plotting seismograms from many closely spaced earthquakes recorded at a few stations, show multiple free-surface reflections (PP, PPP, PPPP) of the P wave in the Imperial Valley. The relative timing of these arrivals is used to estimate the strength of the P-wave velocity gradient within the upper 5 km of the sediment layer. Consistent with previous studies, a velocity model with a value of 1.8 km/sec at the surface increasing linearly to 5.8 km/sec at a depth of 5.5 km fits the data well. The relative amplitudes of the P and PP arrivals are used to estimate the source depth for the aftershock distributions of the Elmore Ranch and Superstition Hills main shocks. Although the depth determination has large uncertainties, both the Elmore Ranch and Superstition Hills aftershock sequences appear to have similar depth distributions in the range of 4 to 10 km. -Author

  11. Dose rate estimation around a 60Co gamma-ray irradiation source by means of 115mIn photoactivation.

    PubMed

    Murataka, Ayanori; Endo, Satoru; Kojima, Yasuaki; Shizuma, Kiyoshi

    2010-01-01

    Photoactivation of the nuclear isomer (115m)In, with a half-life of 4.48 h, occurs under (60)Co gamma-ray irradiation. This is because resonance gamma-ray absorption occurs at the 1078 keV level of stable (115)In, and gamma-rays of that energy are produced by Compton scattering of (60)Co primary gamma-rays. In this work, photoactivation of (115m)In was applied to estimate the dose rate distribution around a (60)Co irradiation source, utilizing a standard dose rate obtained with an alanine dosimeter. The (115m)In photoactivation was measured at 10 to 160 cm from the (60)Co source. The derived dose rate distribution shows a good agreement with both alanine dosimeter data and Monte Carlo simulation. It is found that the angular distribution of the dose rate along a circumference at a radius of 2.8 cm from the central axis shows +/- 10% periodic variation reflecting the radioactive strength of the source rods, but a less periodic distribution at radii of 10 and 20 cm. The (115m)In photoactivation along the vertical direction in the central irradiation port strongly depends on the height and radius, as indicated by Monte Carlo simulation. It is demonstrated that (115m)In photoactivation is a convenient method to estimate the dose rate distribution around a (60)Co source.

  12. Limitations of quantitative analysis of deep crustal seismic reflection data: Examples from GLIMPCE

    USGS Publications Warehouse

    Lee, Myung W.; Hutchinson, Deborah R.

    1992-01-01

    Amplitude preservation in seismic reflection data can be obtained by a relative true amplitude (RTA) processing technique in which the relative strength of reflection amplitudes is preserved vertically as well as horizontally, after compensating for amplitude distortion by near-surface effects and propagation effects. Quantitative analysis of relative true amplitudes of the Great Lakes International Multidisciplinary Program on Crustal Evolution seismic data is hampered by large uncertainties in estimates of the water bottom reflection coefficient and the vertical amplitude correction and by inadequate noise suppression. Processing techniques such as deconvolution, F-K filtering, and migration significantly change the overall shape of amplitude curves and hence calculation of reflection coefficients and average reflectance. Thus lithological interpretation of deep crustal seismic data based on the absolute value of estimated reflection strength alone is meaningless. The relative strength of individual events, however, is preserved on curves generated at different stages in the processing. We suggest that qualitative comparisons of relative strength, if used carefully, provide a meaningful measure of variations in reflectivity. Simple theoretical models indicate that peg-leg multiples rather than water bottom multiples are the most severe source of noise contamination. These multiples are extremely difficult to remove when the water bottom reflection coefficient is large (>0.6), a condition that exists beneath parts of Lake Superior and most of Lake Huron.

  13. Gluten and Aluminum Content in Synthroid® (Levothyroxine Sodium Tablets).

    PubMed

    Espaillat, Ramon; Jarvis, Michael F; Torkelson, Cory; Sinclair, Brent

    2017-07-01

    Inquiries from healthcare providers and patients about the gluten and aluminum content of Synthroid ® (levothyroxine sodium tablets) have increased. The objective of this study was to measure and evaluate the gluten content of the raw materials used in the manufacturing of Synthroid. Additionally, this study determined the aluminum content in different strengths of Synthroid tablets by estimating the amount of aluminum in the raw materials used in the manufacturing of Synthroid. Gluten levels of three lots of the active pharmaceutical ingredient (API) and one lot of each excipient from different vendors were examined. The ingredients in all current Synthroid formulations (strengths) were evaluated for their quantity of aluminum. Gluten concentrations were below the lowest limit of detection (<3.0 ppm) for all tested lots of the API and excipients of Synthroid tablets. Aluminum content varied across tablet strengths (range 19-137 µg/tablet). Gluten levels of the API and excipients were found to be below the lowest level of detection and are considered gluten-free based on the US Food and Drug Administration (FDA) definition for food products. Across the various tablet strengths of Synthroid, the maximum aluminum levels were well below the FDA-determined minimal risk level for chronic oral aluminum exposure (1 mg/kg/day). These data demonstrate that Synthroid tablets are not a source for dietary gluten and are a minimal source of aluminum. AbbVie Inc.

  14. Revised SNAP III Training Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, Calvin Elroy; Gonzales, Samuel M.; Myers, William L.

    The Shielded Neutron Assay Probe (SNAP) technique was developed to determine the leakage neutron source strength of a radioactive object. The original system consisted of an Eberline TM Mini-scaler and discrete neutron detector. The system was operated by obtaining the count rate with the Eberline TM instrument, determining the absolute efficiency from a graph, and calculating the neutron source strength by hand. In 2003 the SNAP III, shown in Figure 1, was designed and built. It required the operator to position the SNAP, and then measure the source-to-detector and detector-to-reflector distances. Next the operator entered the distance measurements and started the data acquisition. The SNAP acquired the required count rate and then calculated and displayed the leakage neutron source strength (NSS). The original design of the SNAP III is described in SNAP III Training Manual (ER-TRN-PLN-0258, Rev. 0, January 2004, prepared by William Baird). This report describes some changes that have been made to the SNAP III. One important change is the addition of a LEMO connector to provide neutron detection output pulses for input to the MC-15. This feature is useful in active interrogation with a neutron generator because the MC-15 has the capability to record data only when it is not gated off by a pulse from the neutron generator. This avoids recording a large amount of non-useful data during the generator pulses. Another change was the replacement of the infrared RS-232 serial communication output by a similar output via a 4-pin LEMO connector. The current document includes a more complete explanation of how to estimate the amount of moderation around a neutron-emitting source.
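
    A minimal sketch of the hand calculation the original SNAP procedure describes (count rate divided by an absolute efficiency read off a calibration curve); the interpolation, calibration arrays, and numbers are illustrative assumptions, not values from the manual:

```python
import numpy as np

def leakage_neutron_source_strength(net_count_rate, distance_cm,
                                    calib_distances_cm, calib_abs_efficiency):
    """Estimate leakage NSS as net count rate over absolute efficiency.

    The calibration arrays stand in for the efficiency-versus-distance 'graph'
    mentioned above; any reflector or geometry corrections are omitted.
    """
    eff = np.interp(distance_cm, calib_distances_cm, calib_abs_efficiency)
    return net_count_rate / eff   # neutrons per second leaking from the object

print(leakage_neutron_source_strength(120.0, 50.0,
                                      [25.0, 50.0, 100.0],
                                      [4.0e-4, 1.1e-4, 2.9e-5]))
```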

  15. SU-E-T-155: Calibration of Variable Longitudinal Strength 103Pd Brachytherapy Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J; Radtke, J; Micka, J

    Purpose: Brachytherapy sources with variable longitudinal strength (VLS) allow for a customized intensity along the length of the source. These have applications in focal brachytherapy treatments of prostate cancer where dose boosting can be achieved through modulation of intra-source strengths. This work focused on development of a calibration methodology for VLS sources based on measurements and Monte Carlo (MC) simulations of five 1 cm (103)Pd sources, each containing four regions of variable (103)Pd strength. Methods: The air-kerma strengths of the sources were measured with a variable-aperture free-air chamber (VAFAC). Source strengths were also measured using a well chamber. The in-air azimuthal and polar anisotropy of the sources were measured by rotating them in front of a NaI scintillation detector and were calculated with MC simulations. Azimuthal anisotropy results were normalized to their mean intensity values. Polar anisotropy results were normalized to their average transverse axis intensity values. The relative longitudinal strengths of the sources were measured via on-contact irradiations with radiochromic film, and were calculated with MC simulations. Results: The variable (103)Pd loading of the sources was validated by VAFAC and well chamber measurements. Ratios of VAFAC air-kerma strengths and well chamber responses were within ±1.3% for all sources. Azimuthal anisotropy results indicated that ≥95% of the normalized values for all sources were within ±1.7% of the mean values. Polar anisotropy results indicated variations within ±0.3% for a ±7.6° angular region with respect to the source transverse axis. Locations and intensities of the (103)Pd regions were validated by radiochromic film measurements and MC simulations. Conclusion: The calibration methodology developed in this work confirms that the VLS sources investigated have a high level of polar uniformity, and that the strength and longitudinal intensity can be verified experimentally and through MC simulations. (103)Pd sources were provided by CivaTech Oncology, Inc.

  16. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
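
    A minimal sketch of the pseudo-inverse step described above, assuming free-field monopole propagation (geometry, frequency, and the absence of regularization are illustrative choices, not the paper's exact setup):

```python
import numpy as np

def monopole_transfer_matrix(mic_xyz, src_xyz, freq_hz, c=343.0):
    """Free-field monopole Green's functions, microphones x candidate sources."""
    mic_xyz = np.asarray(mic_xyz, dtype=float)
    src_xyz = np.asarray(src_xyz, dtype=float)
    k = 2 * np.pi * freq_hz / c
    r = np.linalg.norm(mic_xyz[:, None, :] - src_xyz[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def equivalent_source_strengths(pressures, mic_xyz, src_xyz, freq_hz):
    """Least-squares (Moore-Penrose) estimate of complex monopole strengths."""
    G = monopole_transfer_matrix(mic_xyz, src_xyz, freq_hz)
    return np.linalg.pinv(G) @ np.asarray(pressures)
```

    The dual-LP deconvolution variant, which instead returns uncorrelated auto-power source strengths, is not reproduced here.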

  17. The influence of cooling forearm/hand and gender on estimation of handgrip strength.

    PubMed

    Cheng, Chih-Chan; Shih, Yuh-Chuan; Tsai, Yue-Jin; Chi, Chia-Fen

    2014-01-01

    Handgrip strength is essential in manual operations and activities of daily life, but the influence of forearm/hand skin temperature on estimation of handgrip strength is not well documented. Therefore, the present study intended to investigate the effect of local cooling of the forearm/hand on estimation of handgrip strength at various target force levels (TFLs, in percentage of MVC) for both genders. A cold pressor test was used to lower and maintain the hand skin temperature at 14°C for comparison with the uncooled condition. A total of 10 male and 10 female participants were recruited. The results indicated that females had greater absolute estimation deviations. In addition, both genders had greater absolute deviations in the middle range of TFLs. Cooling caused an underestimation of grip strength. Furthermore, a power function is recommended for establishing the relationship between actual and estimated handgrip force. Statement of relevance: Manipulation with grip strength is essential in daily life and the workplace, so it is important to understand the influence of lowering the forearm/hand skin temperature on grip-strength estimation. Females and the middle range of TFL had greater deviations. Cooling the forearm/hand tended to cause underestimation, and a power function is recommended for establishing the relationship between actual and estimated handgrip force. Practitioner Summary: It is important to understand the effect of lowering the forearm/hand skin temperature on grip-strength estimation. A cold pressor was used to cool the hand. The cooling caused underestimation, and a power function is recommended for establishing the relationship between actual and estimated handgrip force.

  18. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

    Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine the listener's abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.

  19. Using ARM Observations to Evaluate Climate Model Simulations of Land-Atmosphere Coupling on the U.S. Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Thomas J.; Klein, Stephen A.; Ma, Hsi-Yen

    Several independent measurements of warm-season soil moisture and surface atmospheric variables recorded at the ARM Southern Great Plains (SGP) research facility are used to estimate the terrestrial component of land-atmosphere coupling (LAC) strength and its regional uncertainty. The observations reveal substantial variation in coupling strength, as estimated from three soil moisture measurements at a single site, as well as across six other sites having varied soil and land cover types. The observational estimates then serve as references for evaluating SGP terrestrial coupling strength in the Community Atmospheric Model coupled to the Community Land Model. These coupled model components are operated in both a free-running mode and in a controlled configuration, where the atmospheric and land states are reinitialized daily, so that they do not drift very far from observations. Although the controlled simulation deviates less from the observed surface climate than its free-running counterpart, the terrestrial LAC in both configurations is much stronger and displays less spatial variability than the SGP observational estimates. Preliminary investigation of vegetation leaf area index (LAI) substituted for soil moisture suggests that the overly strong coupling between model soil moisture and surface atmospheric variables is associated with too much evaporation from bare ground and too little from the vegetation cover. Lastly, these results imply that model surface characteristics such as LAI, as well as the physical parameterizations involved in the coupling of the land and atmospheric components, are likely to be important sources of the problematical LAC behaviors.

  20. Using ARM Observations to Evaluate Climate Model Simulations of Land-Atmosphere Coupling on the U.S. Southern Great Plains

    DOE PAGES

    Phillips, Thomas J.; Klein, Stephen A.; Ma, Hsi-Yen; ...

    2017-10-13

    Several independent measurements of warm-season soil moisture and surface atmospheric variables recorded at the ARM Southern Great Plains (SGP) research facility are used to estimate the terrestrial component of land-atmosphere coupling (LAC) strength and its regional uncertainty. The observations reveal substantial variation in coupling strength, as estimated from three soil moisture measurements at a single site, as well as across six other sites having varied soil and land cover types. The observational estimates then serve as references for evaluating SGP terrestrial coupling strength in the Community Atmospheric Model coupled to the Community Land Model. These coupled model components are operated in both a free-running mode and in a controlled configuration, where the atmospheric and land states are reinitialized daily, so that they do not drift very far from observations. Although the controlled simulation deviates less from the observed surface climate than its free-running counterpart, the terrestrial LAC in both configurations is much stronger and displays less spatial variability than the SGP observational estimates. Preliminary investigation of vegetation leaf area index (LAI) substituted for soil moisture suggests that the overly strong coupling between model soil moisture and surface atmospheric variables is associated with too much evaporation from bare ground and too little from the vegetation cover. Lastly, these results imply that model surface characteristics such as LAI, as well as the physical parameterizations involved in the coupling of the land and atmospheric components, are likely to be important sources of the problematical LAC behaviors.

  1. Rapidly locating and characterizing pollutant releases in buildings.

    PubMed

    Sohn, Michael D; Reynolds, Pamela; Singh, Navtej; Gadgil, Ashok J

    2002-12-01

    Releases of airborne contaminants in or near a building can lead to significant human exposures unless prompt response measures are taken. However, possible responses can include conflicting strategies, such as shutting the ventilation system off versus running it in a purge mode or having occupants evacuate versus sheltering in place. The proper choice depends in part on knowing the source locations, the amounts released, and the likely future dispersion routes of the pollutants. We present an approach that estimates this information in real time. It applies Bayesian statistics to interpret measurements of airborne pollutant concentrations from multiple sensors placed in the building and computes best estimates and uncertainties of the release conditions. The algorithm is fast, capable of continuously updating the estimates as measurements stream in from sensors. We demonstrate the approach using a hypothetical pollutant release in a five-room building. Unknowns to the interpretation algorithm include location, duration, and strength of the source, and some building and weather conditions. Two sensor sampling plans and three levels of data quality are examined. Data interpretation in all examples is rapid; however, locating and characterizing the source with high probability depends on the amount and quality of data and the sampling plan.
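
    A minimal sketch of the Bayesian updating idea described above, assuming a Gaussian measurement-error model over a discrete set of candidate release scenarios (the scenario library, error model, and numbers are illustrative, not the authors' configuration):

```python
import numpy as np

def update_posterior(prior, predicted, measured, sigma=0.1):
    """Update scenario probabilities from one batch of sensor readings.

    prior:     (n_scenarios,) current probabilities of each candidate release
    predicted: (n_scenarios, n_sensors) pre-computed dispersion predictions
    measured:  (n_sensors,) observed concentrations
    """
    log_like = -0.5 * np.sum((predicted - measured) ** 2, axis=1) / sigma**2
    post = prior * np.exp(log_like - log_like.max())   # subtract max for numerical stability
    return post / post.sum()

prior = np.full(3, 1.0 / 3.0)                                # three candidate (location, strength) scenarios
predicted = np.array([[0.9, 0.1], [0.4, 0.5], [0.1, 0.9]])   # modeled concentrations at two sensors
posterior = update_posterior(prior, predicted, np.array([0.85, 0.15]))
print(posterior)                                             # probability mass concentrates on scenario 0
```

    Streaming sensor data would simply reuse the returned posterior as the prior for the next batch, which is what makes the continuous updating described above fast.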

  2. Sources of strength-training information and strength-training behavior among Japanese older adults.

    PubMed

    Harada, Kazuhiro; Shibata, Ai; Lee, Euna; Oka, Koichiro; Nakamura, Yoshio

    2016-03-01

    The promotion of strength training is now recognized as an important component of public health initiatives for older adults. To develop successful communication strategies to increase strength-training behavior among older adults, the identification of effective communication channels to reach older adults is necessary. This study aimed to identify the information sources about strength training that were associated with strength-training behaviors among Japanese older adults. The participants were 1144 adults (60-74 years old) randomly sampled from the registry of residential addresses. A cross-sectional questionnaire survey was conducted. The independent variables were sources of strength-training information (healthcare providers, friends, families, radio, television, newspapers, newsletters, posters, books, magazines, booklets, the Internet, lectures, other sources), and the dependent variable was regular strength-training behavior. Logistic regression analysis was used to identify potential relationships. After adjusting for demographic factors and all other information sources, strength-training information from healthcare providers, friends, books and the Internet were positively related to regular strength-training behavior. The findings of the present study contribute to a better understanding of strength-training behavior and the means of successful communication directed at increasing strength training among older adults. The results suggest that healthcare providers, friends, books and the Internet are effective methods of communication for increasing strength-training behaviors among older adults. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Source counting in MEG neuroimaging

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.

    2009-02-01

    Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, which include the inverse problem, MUltiple SIgnal Classification (MUSIC), Beamforming (BF), and Independent Component Analysis (ICA) methods. A key problem with the inverse problem, MUSIC, and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space on a point-by-point basis, the selection of peaks as sources is ultimately made by subjective thresholding. In practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach for source number detection in MEG neuroimaging. By sorting eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into the signal and noise subspaces. The partition is implemented by utilizing information theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not refer to any model or hypothesis, hence it is an entirely data-led operation. It possesses a clear physical interpretation and an efficient computation procedure. The theoretical derivation of this method and the results obtained by using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
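
    A minimal sketch of eigenvalue-based source counting with an information-theoretic criterion, here the textbook Wax-Kailath MDL form (the paper's exact criterion and preprocessing may differ):

```python
import numpy as np

def estimate_source_count_mdl(cov, n_snapshots):
    """Estimate the number of sources from a sensor sample covariance matrix.

    cov:         (p, p) Hermitian sample covariance of the multichannel data
    n_snapshots: number of time samples used to form cov
    The MDL criterion compares the geometric and arithmetic means of the
    smallest eigenvalues; the k minimizing it is the signal-subspace order.
    """
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]       # eigenvalues, descending
    p = lam.size
    mdl = []
    for k in range(p):
        tail = lam[k:]
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)
        mdl.append(-n_snapshots * (p - k) * np.log(ratio)
                   + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(mdl))                          # estimated number of sources
```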

  4. Reconnaissance Report, Section 205 Chattooga River Trion, Georgia, Chattooga County

    DTIC Science & Technology

    1991-07-01

    magnitude, mb, of 7.5, at a distance of about 118 km, in the New Madrid source zone. The earthquake motions estimated to occur at Barkley from an...4: Liquefaction Susceptibility Evaluation and Post-Earthquake Strength Determination; Volume 5: Stability Evaluation of Geotechnical Structures. The...contributions from ORN. Messrs. Ronald E. Wahl of Soil and Rock Mechanics Division, Richard S. Olsen, and Dr. M. E. Hynes of the Earthquake Engineering and

  5. Quantifying the source-sink balance and carbohydrate content in three tomato cultivars.

    PubMed

    Li, Tao; Heuvelink, Ep; Marcelis, Leo F M

    2015-01-01

    Supplementary lighting is frequently applied in the winter season for crop production in greenhouses. The effect of supplementary lighting on plant growth depends on the balance between assimilate production in source leaves and the overall capacity of the plants to use assimilates. This study aims at quantifying the source-sink balance and carbohydrate content of three tomato cultivars differing in fruit size, and to investigate to what extent the source/sink ratio correlates with the potential fruit size. Cultivars Komeet (large size), Capricia (medium size), and Sunstream (small size, cherry tomato) were grown from 16 August to 21 November, at similar crop management as in commercial practice. Supplementary lighting (High Pressure Sodium lamps, photosynthetic active radiation at 1 m below lamps was 162 μmol photons m(-2) s(-1); maximum 10 h per day depending on solar irradiance level) was applied from 19 September onward. Source strength was estimated from total plant growth rate using periodic destructive plant harvests in combination with the crop growth model TOMSIM. Sink strength was estimated from potential fruit growth rate which was determined from non-destructively measuring the fruit growth rate at non-limiting assimilate supply, growing only one fruit on each truss. Carbohydrate content in leaves and stems were periodically determined. During the early growth stage, 'Komeet' and 'Capricia' showed sink limitation and 'Sunstream' was close to sink limitation. During this stage reproductive organs had hardly formed or were still small and natural irradiance was high (early September) compared to winter months. Subsequently, during the fully fruiting stage all three cultivars were strongly source-limited as indicated by the low source/sink ratio (average source/sink ratio from 50 days after planting onward was 0.17, 0.22, and 0.33 for 'Komeet,' 'Capricia,' and 'Sunstream,' respectively). This was further confirmed by the fact that pruning half of the fruits hardly influenced net leaf photosynthesis rates. Carbohydrate content in leaves and stems increased linearly with the source/sink ratio. We conclude that during the early growth stage under high irradiance, tomato plants are sink-limited and that the level of sink limitation differs between cultivars but it is not correlated with their potential fruit size. During the fully fruiting stage tomato plants are source-limited and the extent of source limitation of a cultivar is positively correlated with its potential fruit size.

  6. Mapping of electrical potential distribution with charged particle beams. [using an X-ray source

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.

    1979-01-01

    Potentials were measured using a beam of soft X-rays in air at 2 x 10(exp -5) Torr. Ions were detected by a continuous-dynode electron multiplier after they passed through a retarding field. The ultimate resolution depends on the diameter of the X-ray beam, which was 3 mm. When the fields in the region of interest were such as to disperse the ions, only a small fraction were detected and the method of measurement was not very reliable. Reasonable data could nevertheless be collected if the ions traveled in parallel paths toward the detector. Development should concentrate on increasing the aperture of the detector from the pinhole which was used to something measured in centimeters. Increasing the strength of the source would also provide a stronger signal and more reliable data. Measurements were made at an estimated ion current of 10(exp -15) A from a 10 cm length of the X-ray beam, this current being several orders of magnitude below what would have a perturbing effect on the region to be measured. Consequently, the source strength can be increased and the prospects for this method of measurement are good.

  7. A study of the sensitivity of an imaging telescope (GRITS) for high energy gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Yearian, Mason R.

    1990-01-01

    When a gamma-ray telescope is placed in Earth orbit, it is bombarded by a flux of cosmic protons much greater than the flux of interesting gammas. These protons can interact in the telescope's thermal shielding to produce detectable gamma rays, most of which are vetoed. Since the proton flux is so high, the unvetoed gamma rays constitute a significant background relative to some weak sources. This background increases the observing time required to pinpoint some sources and entirely obscures other sources. Although recent telescopes have been designed to minimize this background, its strength and spectral characteristics were not previously calculated in detail. Monte Carlo calculations are presented which characterize the strength, spectrum and other features of the cosmic proton background using FLUKA, a hadronic cascade program. Several gamma-ray telescopes, including SAS-2, EGRET and the Gamma Ray Imaging Telescope System (GRITS), are analyzed, and their proton-induced backgrounds are characterized. In all cases, the backgrounds are either shown to be low relative to interesting signals or suggestions are made which would reduce the background sufficiently to leave the telescope unimpaired. In addition, several limiting cases are examined for comparison to previous estimates and calibration measurements.

  8. Using Self-reports or Claims to Assess Disease Prevalence: It's Complicated.

    PubMed

    St Clair, Patricia; Gaudette, Étienne; Zhao, Henu; Tysinger, Bryan; Seyedin, Roxanna; Goldman, Dana P

    2017-08-01

    Two common ways of measuring disease prevalence are: (1) using self-reported disease diagnosis from survey responses; and (2) using disease-specific diagnosis codes found in administrative data. Because they do not suffer from self-report biases, claims are often assumed to be more objective. However, it is not clear that claims always produce better prevalence estimates. We assess discrepancies between self-report and claims-based measures for 2 diseases in the US elderly to investigate definition, selection, and measurement error issues which may help explain divergence between claims and self-report estimates of prevalence. Self-reported data from 3 sources are included: the Health and Retirement Study, the Medicare Current Beneficiary Survey, and the National Health and Nutrition Examination Survey. Claims-based disease measurements are provided from Medicare claims linked to Health and Retirement Study and Medicare Current Beneficiary Survey participants, comprehensive claims data from a 20% random sample of Medicare enrollees, and private health insurance claims from Humana Inc. The prevalence of diagnosed disease in the US elderly is computed and compared across sources. Two medical conditions are considered: diabetes and heart attack. Comparisons of diagnosed diabetes and heart attack prevalence show similar trends by source, but claims differ from self-reports with regard to levels. Selection into insurance plans, disease definitions, and the reference period used by algorithms are identified as sources contributing to differences. Claims and self-reports both have strengths and weaknesses, which researchers need to consider when interpreting estimates of prevalence from these 2 sources.

  9. New insights to the use of ethanol in automotive fuels: a stable isotopic tracer for fossil- and bio-fuel combustion inputs to the atmosphere.

    PubMed

    Giebel, Brian M; Swart, Peter K; Riemer, Daniel D

    2011-08-01

    Ethanol is currently receiving increased attention because of its use as a biofuel or fuel additive and because of its influence on air quality. We used stable isotopic ratio measurements of (13)C/(12)C in ethanol emitted from vehicles and a small group of tropical plants to establish ethanol's δ(13)C end-member signatures. Ethanol emitted in exhaust is distinctly different from that emitted by tropical plants and can serve as a unique stable isotopic tracer for transportation-related inputs to the atmosphere. Ethanol's unique isotopic signature in fuel is related to corn, a C4 plant and the primary source of ethanol in the U.S. We estimated a kinetic isotope effect (KIE) for ethanol's oxidative loss in the atmosphere and used previous assumptions with respect to the fractionation that may occur during wet and dry deposition. A small number of interpretive model calculations were used for source apportionment of ethanol and to understand the associated effects resulting from atmospheric removal. The models incorporated our end-member signatures and ambient measurements of ethanol, known or estimated source strengths and removal magnitudes, and estimated KIEs associated with atmospheric removal processes for ethanol. We compared transportation-related ethanol signatures to those from biogenic sources and used a set of ambient measurements to apportion each source contribution in Miami, Florida, a moderately polluted but well-ventilated urban location.

  10. The 4-Corners methane hotspot: Mapping CH4 plumes at 60km through 1m resolution using space- and airborne spectrometers

    NASA Astrophysics Data System (ADS)

    Frankenberg, C.; Thorpe, A. K.; Hook, S. J.; Green, R. O.; Thompson, D. R.; Kort, E. A.; Hulley, G. C.; Vance, N.; Bue, B. D.; Aubrey, A. D.

    2015-12-01

    The SCIAMACHY instrument onboard the European research satellite ENVISAT detected a large methane hotspot in the 4-Corners area, specifically in New Mexico and Colorado. Total methane emissions in this region were estimated to be on the order of 0.5 Tg/yr, presumably related to coal-bed methane exploration. Here, we report on NASA efforts to augment the TOPDOWN campaign, intended to enable regional methane source inversions and identify source types in this area. The Jet Propulsion Laboratory was funded to fly two airborne imaging spectrometers, viz. AVIRIS-NG and HyTES. In April 2015, we used both instruments to continuously map about 2000 km2 in the 4-Corners area at 1-5 m spatial resolution, with special focus on the most enhanced areas as observed from space. During our weeklong campaign, we detected more than 50 isolated and strongly enhanced methane plumes, ranging from coal mine venting shafts and gas processing facilities to individual well pads, pipeline leaks, and outcrops. Results could be shared immediately with ground-based teams and TOPDOWN aircraft, so that ground validation and identification were feasible for a number of sources. We will provide a general overview of the JPL-led mapping campaign efforts, show individual results, derive source strength estimates, and discuss how the results fit in with space-borne estimates.

  11. Copper(II) binding by dissolved organic matter: Importance of the copper-to-dissolved organic matter ratio and implications for the Biotic Ligand Model

    USGS Publications Warehouse

    Craven, Alison M.; Aiken, George R.; Ryan, Joseph N.

    2012-01-01

    The ratio of copper to dissolved organic matter (DOM) is known to affect the strength of copper binding by DOM, but previous methods to determine the Cu2+–DOM binding strength have generally not measured binding constants over the same Cu:DOM ratios. In this study, we used a competitive ligand exchange–solid-phase extraction (CLE-SPE) method to determine conditional stability constants for Cu2+–DOM binding at pH 6.6 and 0.01 M ionic strength over a range of Cu:DOM ratios that bridge the detection windows of copper-ion-selective electrode and voltammetry measurements. As the Cu:DOM ratio increased from 0.0005 to 0.1 mg of Cu/mg of DOM, the measured conditional binding constant (cKCuDOM) decreased from 1011.5 to 105.6 M–1. A comparison of the binding constants measured by CLE-SPE with those measured by copper-ion-selective electrode and voltammetry demonstrates that the Cu:DOM ratio is an important factor controlling Cu2+–DOM binding strength even for DOM isolates of different types and different sources and for whole water samples. The results were modeled with Visual MINTEQ and compared to results from the biotic ligand model (BLM). The BLM was found to over-estimate Cu2+ at low total copper concentrations and under-estimate Cu2+ at high total copper concentrations.

  12. Characteristics of carbonyls: Concentrations and source strengths for indoor and outdoor residential microenvironments in China

    NASA Astrophysics Data System (ADS)

    Wang, B.; Lee, S. C.; Ho, K. F.

    Indoor and outdoor carbonyl concentrations were measured simultaneously in 12 urban dwellings in Beijing, Shanghai, Guangzhou, and Xi'an, China in summer (from July to September 2004) and winter (from December 2004 to February 2005). Formaldehyde was the most abundant indoor carbonyl species, while formaldehyde, acetaldehyde and acetone were the most abundant outdoor carbonyl species. The average formaldehyde concentrations in summer indoor air varied widely between cities, ranging from a low of 19.3 μg m -3 in Xi'an to a high of 92.8 μg m -3 in Beijing. The results showed that dwellings with tobacco smoke, incense burning or poor ventilation had significantly higher indoor concentrations of certain carbonyls. Although half of the dwellings in this study were fitted with low-emission building materials or furniture, their carbonyl levels were still significantly high. It was also noted that in winter both the indoor and outdoor acetone concentrations in two dwellings in Guangzhou were significantly high; this was mainly caused by the use of acetone as an industrial solvent in the many paint-manufacturing and other industries located around Guangzhou, and by the relatively longer lifetime of acetone with respect to removal by photolysis and OH reaction compared with other carbonyl species. The indoor carbonyl levels in the Chinese dwellings were higher than those in dwellings in other countries. The levels of indoor and ambient carbonyls showed large seasonal differences. Indoor source strengths were estimated for six carbonyl species. Formaldehyde had the largest indoor source strength, with an average of 5.25 mg h -1 in summer and 1.98 mg h -1 in winter, whereas propionaldehyde, crotonaldehyde and benzaldehyde had the weakest indoor sources.

  13. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of source parameters of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gained from indirect concentration measurements in identifying unknown source parameters such as the release time, strength, and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and estimation steps, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. [Figure captions: (1) Contours of the expected information gain; the optimal observation location corresponds to the maximum value. (2) Posterior marginal probability densities of the unknown parameters; the thick solid black lines are for the designed location, the other seven lines are for randomly chosen locations, and the true values are denoted by vertical lines. The unknown parameters are estimated better with the designed location.]
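
    To make the design criterion concrete, the sketch below estimates the expected information gain (expected relative entropy between posterior and prior) for a few candidate sampling locations with a nested Monte Carlo estimator. The toy forward model, the prior ranges, the noise level, and all names are assumptions for illustration; the actual study uses a contaminant transport solver and an adaptive sparse-grid surrogate.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(theta, x_obs, t_obs=10.0):
    """Toy 1-D transport surrogate (hypothetical stand-in for the real solver):
    a Gaussian plume whose centre, spread, and amplitude depend on the unknown
    source parameters theta = (release_strength, source_location)."""
    strength, x_src = theta
    velocity, dispersion = 1.0, 0.5            # assumed known flow parameters
    centre = x_src + velocity * t_obs
    width2 = 4.0 * dispersion * t_obs
    return strength / np.sqrt(np.pi * width2) * \
           np.exp(-(x_obs - centre) ** 2 / width2)

def expected_information_gain(x_obs, n_outer=400, n_inner=400, sigma=0.02):
    """Nested Monte Carlo estimate of E_y[ KL(posterior || prior) ]."""
    prior = lambda n: np.column_stack([rng.uniform(0.5, 2.0, n),   # strength
                                       rng.uniform(0.0, 5.0, n)])  # location
    theta_out, theta_in = prior(n_outer), prior(n_inner)
    g_out = np.array([forward_model(t, x_obs) for t in theta_out])
    g_in = np.array([forward_model(t, x_obs) for t in theta_in])
    y = g_out + sigma * rng.standard_normal(n_outer)   # simulated data
    log_lik = -0.5 * ((y - g_out) / sigma) ** 2        # log p(y_i | theta_i)
    # log evidence log p(y_i) approximated by averaging the likelihood over
    # fresh prior draws (the Gaussian constant cancels in the difference).
    log_evid = np.array([np.log(np.mean(np.exp(-0.5 * ((yi - g_in) / sigma) ** 2)))
                         for yi in y])
    return np.mean(log_lik - log_evid)

# Rank a few candidate sampling locations by expected information gain.
for x in [8.0, 10.0, 12.0, 14.0]:
    print(x, round(expected_information_gain(x), 3))
```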

  14. Sources of Life Strengths Appraisal Scale: A Multidimensional Approach to Assessing Older Adults' Perceived Sources of Life Strengths

    PubMed Central

    Fry, Prem S.; Debats, Dominique L.

    2014-01-01

    Both cognitive and psychosocial theories of adult development stress the fundamental role of older adults' appraisals of the diverse sources of cognitive and social-emotional strengths. This study reports the development of a new self-appraisal measure that incorporates key theoretical dimensions of internal and external sources of life strengths, as identified in the gerontological literature. Using a pilot study sample and three other independent samples to examine older adults' appraisals of the sources of life strengths that helped them in their daily functioning and in combating life challenges, adversity, and losses, a psychometric instrument with appropriate reliability and validity properties was developed. A 24-month follow-up of a randomly selected sample confirmed that the nine-scale appraisal measure (SLSAS) is a promising instrument for appraising older adults' sources of life strengths in dealing with the stresses of daily functioning, as well as a robust measure for predicting outcomes of resilience, autonomy, and well-being for this age group. A unique strength of the appraisal instrument is its brevity, simplicity of language, and ease of administration to frail older adults. Dedicated to the memory of Shanta Khurana, whose assistance in the pilot work for the study was invaluable. PMID:24772352

  15. Band-limited Green's Functions for Quantitative Evaluation of Acoustic Emission Using the Finite Element Method

    NASA Technical Reports Server (NTRS)

    Leser, William P.; Yuan, Fuh-Gwo; Leser, William P.

    2013-01-01

    A method of numerically estimating dynamic Green's functions using the finite element method is proposed. These Green's functions are accurate in a limited frequency range dependent on the mesh size used to generate them. This range can often match or exceed the frequency sensitivity of the traditional acoustic emission sensors. An algorithm is also developed to characterize an acoustic emission source by obtaining information about its strength and temporal dependence. This information can then be used to reproduce the source in a finite element model for further analysis. Numerical examples are presented that demonstrate the ability of the band-limited Green's functions approach to determine the moment tensor coefficients of several reference signals to within seven percent, as well as accurately reproduce the source-time function.

  16. Bone strength estimates relative to vertical ground reaction force discriminates women runners with stress fracture history.

    PubMed

    Popp, Kristin L; McDermott, William; Hughes, Julie M; Baxter, Stephanie A; Stovitz, Steven D; Petit, Moira A

    2017-01-01

    To determine differences in bone geometry, estimates of bone strength, muscle size and bone strength relative to load, in women runners with and without a history of stress fracture. We recruited 32 competitive distance runners aged 18-35, with (SFX, n=16) or without (NSFX, n=16) a history of stress fracture for this case-control study. Peripheral quantitative computed tomography (pQCT) was used to assess volumetric bone mineral density (vBMD, mg/mm(3)), total (ToA) and cortical (CtA) bone areas (mm(2)), and estimated compressive bone strength (bone strength index; BSI, mg/mm(4)) at the distal tibia. ToA, CtA, cortical vBMD, and estimated strength (section modulus; Zp, mm(3) and strength strain index; SSIp, mm(3)) were measured at six cortical sites along the tibia. Mean active peak vertical (pkZ) ground reaction forces (GRFs), assessed from a fatigue run on an instrumented treadmill, were used in conjunction with pQCT measurements to estimate bone strength relative to load (mm(2)/N∗kg(-1)) at all cortical sites. SSIp and Zp were 9-11% lower in the SFX group at the mid-shaft of the tibia, while ToA and vBMD did not differ between groups at any measurement site. The SFX group had 11-17% lower bone strength relative to mean pkZ GRFs (p<0.05). These findings indicate that estimated bone strength at the mid-tibia and mean pkZ GRFs are lower in runners with a history of stress fracture. Bone strength relative to load is also lower in this same region, suggesting that strength deficits in the middle 1/3 of the tibia and altered gait biomechanics may predispose an individual to stress fracture. Copyright © 2016. Published by Elsevier Inc.

  17. A 3-D model analysis of the slowdown and interannual variability in the methane growth rate from 1988 to 1997

    NASA Astrophysics Data System (ADS)

    Wang, James S.; Logan, Jennifer A.; McElroy, Michael B.; Duncan, Bryan N.; Megretskaia, Inna A.; Yantosca, Robert M.

    2004-09-01

    Methane has exhibited significant interannual variability with a slowdown in its growth rate beginning in the 1980s. We use a 3-D chemical transport model accounting for interannually varying emissions, transport, and sinks to analyze trends in CH4 from 1988 to 1997. Variations in CH4 sources were based on meteorological and country-level socioeconomic data. An inverse method was used to optimize the strengths of sources and sinks for a base year, 1994. We present a best-guess budget along with sensitivity tests. The analysis suggests that the sum of emissions from animals, fossil fuels, landfills, and wastewater estimated using Intergovernmental Panel on Climate Change default methodology is too high. Recent bottom-up estimates of the source from rice paddies appear to be too low. Previous top-down estimates of emissions from wetlands may be a factor of 2 higher than bottom-up estimates because of possible overestimates of OH. The model captures the general decrease in the CH4 growth rate observed from 1988 to 1997 and the anomalously low growth rates during 1992-1993. The slowdown in the growth rate is attributed to a combination of slower growth of sources and increases in OH. The economic downturn in the former Soviet Union and Eastern Europe made a significant contribution to the decrease in the growth rate of emissions. The 1992-1993 anomaly can be explained by fluctuations in wetland emissions and OH after the eruption of Mount Pinatubo. The results suggest that the recent slowdown of CH4 may be temporary.

  18. Estimation of Dynamical Parameters in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark O.

    2004-01-01

    In this study a new technique is used to derive dynamical parameters from atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants, or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g. multiple trace gas) image sequences and to provide solutions to the extended aperture problem. In this study, sensitivity studies have been performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.
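
    The sketch below illustrates the basic structure tensor idea for one of the parameters named above, motion: the eigenvector belonging to the smallest eigenvalue of the spatiotemporal structure tensor gives the direction of constant brightness, from which a velocity follows. A 1-D space plus time sequence is used to keep the example short; the synthetic data and function names are assumptions, not the author's implementation.

```python
import numpy as np

def structure_tensor_velocity(seq):
    """Estimate a single translation velocity (pixels per frame) from a
    1-D space + time image sequence: the eigenvector of the smallest
    eigenvalue of J = sum(grad * grad^T) satisfies Ix*v + It = 0."""
    ix = np.gradient(seq, axis=1)            # spatial derivative
    it = np.gradient(seq, axis=0)            # temporal derivative
    j = np.array([[np.sum(ix * ix), np.sum(ix * it)],
                  [np.sum(ix * it), np.sum(it * it)]])
    eigvals, eigvecs = np.linalg.eigh(j)
    e = eigvecs[:, 0]                        # eigenvector of smallest eigenvalue
    return e[0] / e[1]                       # velocity v

# Synthetic test: a Gaussian blob drifting at 0.8 pixels per frame.
x = np.arange(200.0)
frames = np.stack([np.exp(-((x - 60.0 - 0.8 * t) ** 2) / 50.0)
                   for t in range(40)])
print(structure_tensor_velocity(frames))     # approximately 0.8
```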

  19. Application of an improved spectral decomposition method to examine earthquake source scaling in Southern California

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel T.; Shearer, Peter M.

    2017-04-01

    Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
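
    As a toy illustration of the decomposition idea, the sketch below iteratively partitions observed log spectral amplitudes into source and receiver terms by alternately averaging residuals; path/distance terms, uncertainty estimates, and the stacking details of the actual method are omitted, and the synthetic data and names are assumptions rather than the authors' algorithm.

```python
import numpy as np

def decompose_spectra(log_amp, src_idx, rcv_idx, n_iter=50):
    """Iteratively partition observations log_amp[k] ~ source[src_idx[k]]
    + receiver[rcv_idx[k]] by alternately averaging residuals."""
    n_src, n_rcv = src_idx.max() + 1, rcv_idx.max() + 1
    src, rcv = np.zeros(n_src), np.zeros(n_rcv)
    for _ in range(n_iter):
        # update source terms from residuals after removing receiver terms
        resid = log_amp - rcv[rcv_idx]
        src = np.bincount(src_idx, resid, minlength=n_src) / \
              np.bincount(src_idx, minlength=n_src)
        # update receiver terms from residuals after removing source terms
        resid = log_amp - src[src_idx]
        rcv = np.bincount(rcv_idx, resid, minlength=n_rcv) / \
              np.bincount(rcv_idx, minlength=n_rcv)
    # remove the additive trade-off by forcing zero-mean receiver terms
    src, rcv = src + rcv.mean(), rcv - rcv.mean()
    return src, rcv

# Synthetic example: 500 observations from 40 sources and 15 receivers.
rng = np.random.default_rng(2)
true_src, true_rcv = rng.normal(0, 1, 40), rng.normal(0, 0.3, 15)
si, ri = rng.integers(0, 40, 500), rng.integers(0, 15, 500)
obs = true_src[si] + true_rcv[ri] + 0.1 * rng.standard_normal(500)
est_src, _ = decompose_spectra(obs, si, ri)
print(np.corrcoef(est_src, true_src)[0, 1])   # close to 1
```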

  20. Effect of Small Numbers of Test Results on Accuracy of Hoek-Brown Strength Parameter Estimations: A Statistical Simulation Study

    NASA Astrophysics Data System (ADS)

    Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.

    2017-12-01

    The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact-rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should correspond to some acceptable level of uncertainty in the estimates. A comparison of the results from our analysis with actual rock strength data also shows that the probability of obtaining reliable strength parameter estimates using small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
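
    The sketch below reproduces the flavour of such a Monte Carlo study: small synthetic triaxial data sets are generated from the intact-rock Hoek-Brown criterion σ1 = σ3 + σc·sqrt(m·σ3/σc + 1), perturbed with strength scatter, and refitted many times to show how widely the recovered (σc, m) pairs spread. The parameter values, noise model, and sample sizes are assumptions, not those of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def hoek_brown(sigma3, sigma_c, m):
    """Intact-rock Hoek-Brown criterion (s = 1)."""
    return sigma3 + sigma_c * np.sqrt(m * sigma3 / sigma_c + 1.0)

def simulate_fits(sigma_c=100.0, m=10.0, n_samples=5, cov=0.10, n_trials=1000):
    """Fit the criterion to many small synthetic triaxial data sets and
    return the fitted (sigma_c, m) pairs, to study estimation uncertainty."""
    rng = np.random.default_rng(3)
    fits = []
    for _ in range(n_trials):
        sigma3 = rng.uniform(0.0, 0.3 * sigma_c, n_samples)    # confinements
        sigma1 = hoek_brown(sigma3, sigma_c, m)
        sigma1 *= 1.0 + cov * rng.standard_normal(n_samples)   # strength scatter
        try:
            popt, _ = curve_fit(hoek_brown, sigma3, sigma1, p0=(80.0, 5.0),
                                bounds=([1.0, 0.1], [1000.0, 50.0]))
            fits.append(popt)
        except RuntimeError:
            continue                                           # fit failed
    return np.array(fits)

fits = simulate_fits()
print("sigma_c: mean %.1f, std %.1f" % (fits[:, 0].mean(), fits[:, 0].std()))
print("m:       mean %.1f, std %.1f" % (fits[:, 1].mean(), fits[:, 1].std()))
```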

  1. Validation of attenuation, beam blockage, and calibration estimation methods using two dual polarization X band weather radars

    NASA Astrophysics Data System (ADS)

    Diederich, M.; Ryzhkov, A.; Simmer, C.; Mühlbauer, K.

    2011-12-01

    The amplitude of a radar wave reflected by meteorological targets can be misjudged due to several factors. At X band wavelengths, attenuation of the radar beam by hydrometeors reduces the signal strength enough to be a significant source of error for quantitative precipitation estimation. Depending on the surrounding orography, the radar beam may be partially blocked when scanning at low elevation angles, and knowledge of the exact amount of signal loss through beam blockage becomes necessary. The phase shift between the radar signals at horizontal and vertical polarizations is affected by the hydrometeors that the beam travels through, but remains unaffected by variations in signal strength. This has allowed for several ways of compensating for the attenuation of the signal, and for consistency checks between these variables. In this study, we make use of several weather radars and a rain gauge network measuring in the same area to examine the effectiveness of several methods of attenuation and beam blockage correction. The methods include consistency checks of radar reflectivity and specific differential phase, calculation of beam blockage using a topography map, estimation of attenuation using the differential propagation phase, and the ZPHI method proposed by Testud et al. in 2000. Results show the high effectiveness of differential phase in estimating attenuation, and the potential of the ZPHI method to compensate for attenuation, beam blockage, and calibration errors.
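
    The simplest of the differential-phase approaches mentioned above takes the path-integrated attenuation as proportional to the accumulated differential phase shift along a ray. The sketch below applies that linear correction; the coefficient of 0.28 dB per degree is a value often quoted for X band and, like the array names and toy data, is an assumption rather than a value from the study, which uses more elaborate schemes such as ZPHI.

```python
import numpy as np

def correct_attenuation(z_measured_dbz, phi_dp_deg, alpha=0.28):
    """Linear differential-phase attenuation correction for one radar ray:
    add the two-way path-integrated attenuation, estimated as
    alpha * (Phi_dp(r) - Phi_dp(0)), to the measured reflectivity.
    alpha is in dB per degree (0.28 dB/deg assumed here for X band)."""
    pia = alpha * (phi_dp_deg - phi_dp_deg[0])     # dB, grows along the ray
    return z_measured_dbz + pia

# Toy ray: reflectivity attenuated through a rain cell around gates 40-60.
gates = np.arange(100)
phi_dp = np.cumsum(np.where((gates > 40) & (gates < 60), 1.5, 0.02))  # degrees
z_true = np.full(100, 35.0)                        # dBZ
z_meas = z_true - 0.28 * (phi_dp - phi_dp[0])      # simulated attenuation
print(np.allclose(correct_attenuation(z_meas, phi_dp), z_true))        # True
```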

  2. Redox Disproportionation of Glucose as a Major Biosynthetic Energy Source

    NASA Technical Reports Server (NTRS)

    Weber, Arthur L.

    1996-01-01

    Previous studies have concluded that very little if any energy is required for the microbial biosynthesis of amino acids and lipids from glucose -- processes that yield almost as much ATP (adenosine triphosphate) as they consume. However, these studies did not establish either the strength or the nature of the energy source driving these biological transformations. To identify and estimate the strength of the energy source behind these processes, we calculated the free energy change due to the redox disproportionation of substrate carbon for (a) 26 redox-balanced fermentation reactions, and (b) the biosynthesis of amino acids, lipids, and nucleotides of E. coli from glucose. A plot of the negative free energy of these reactions per mmol of carbon as a function of the number of disproportionative electron transfers per mmol of carbon showed that the energy yields of these fermentations and biosyntheses were directly proportional to the degree of redox disproportionation of carbon. Since this linear relationship showed that redox disproportionation was the dominant energy source of these reactions, we were able to establish that amino acid and lipid biosynthesis obtain most of their energy from redox disproportionation (greater than 94%). In contrast, nucleotide biosynthesis was not driven by redox disproportionation of carbon and consequently depended completely on ATP for energy. This crucial and previously unrecognized role of sugars as an energy source for biosynthesis suggests that sugars were involved at the earliest stage in the origin of anabolic metabolism.

  3. Contribution of indoor and outdoor nitrogen dioxide to indoor air quality of wayside shops.

    PubMed

    Shuai, Jianfei; Yang, Wonho; Ahn, Hogi; Kim, Sunshin; Lee, Seokyong; Yoon, Sung-Uk

    2013-06-01

    Indoor nitrogen dioxide (NO₂) concentration is an important factor for personal exposure despite the wide distribution of its sources. Exposure to NO₂ may produce adverse health effects. The aims of this study were to characterize the indoor air quality of wayside shops using multiple NO₂ measurements, and to estimate the contribution of outdoor NO₂ sources, such as vehicle emissions, to indoor air quality. Daily indoor and outdoor NO₂ concentrations were measured for 21 consecutive days in wayside shops (5 convenience stores, 5 coffee shops, and 5 restaurants). Contributions of outdoor NO₂ sources to indoor air quality were calculated from penetration factors and source strength factors using an indoor mass balance model in winter and summer, respectively. Most wayside shops showed significant differences between indoor and outdoor NO₂ concentrations in both winter and summer. Indoor NO₂ concentrations in restaurants were more than twice those in convenience stores and coffee shops in winter. While the outdoor NO₂ contribution dominated indoors in convenience stores and coffee shops, the indoor NO₂ contribution dominated in restaurants. This can be explained by indoor NO₂ sources such as gas ranges and smoking mainly affecting indoor concentrations, compared with outdoor sources such as vehicle emissions. The indoor mass balance model based on multiple measurements suggests that the quantitative contribution of outdoor air to indoor air quality can be estimated without measurements of ventilation, indoor generation, and decay rates.
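
    A steady-state single-zone mass balance of the kind referred to above can be written down in a few lines; the sketch below backs out an indoor source strength and an outdoor-air contribution from paired indoor/outdoor concentrations. The penetration factor, deposition rate, air exchange rate, and room volume used here are placeholders for illustration, not values from the study.

```python
def indoor_source_strength(c_in, c_out, air_exchange, penetration=1.0,
                           deposition=0.2, volume=50.0):
    """Steady-state single-zone mass balance:
        0 = P*a*C_out - (a + k)*C_in + S/V
    solved for the indoor source strength S (micrograms per hour).
    Concentrations in ug/m3, rates in 1/h, volume in m3."""
    return ((air_exchange + deposition) * c_in
            - penetration * air_exchange * c_out) * volume

def outdoor_contribution(c_in, c_out, air_exchange, penetration=1.0,
                         deposition=0.2):
    """Fraction of the indoor concentration attributable to outdoor air."""
    c_in_from_out = penetration * air_exchange * c_out / (air_exchange + deposition)
    return min(c_in_from_out / c_in, 1.0)

# Example: a shop with indoor 45 ug/m3, outdoor 30 ug/m3, 1.5 air changes/h.
print(indoor_source_strength(45.0, 30.0, 1.5))   # ug/h from indoor sources
print(outdoor_contribution(45.0, 30.0, 1.5))     # fraction from outdoor air
```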

  4. Io's Sodium Corona and Spatially Extended Cloud: A Consistent Flux Speed Distribution

    NASA Technical Reports Server (NTRS)

    Smyth, William H.; Combi, Michael R.

    1997-01-01

    For Io neutral cloud calculations, an SO2 source strength of approximately 4x10(exp 27) molecules/sec was determined by successfully matching the SO2(+) density profile near the satellite deduced from magnetometer data acquired by the Galileo spacecraft during its close flyby on December 7, 1995. The incomplete collision source velocity distribution for SO2 is the same as recently determined for the trace species atomic sodium by Smyth and Combi (1997). Estimates for the total energy loss rate (i.e. power) of O and S atoms escaping Io were also determined and imply a significant pickup current and a significant reduction in the local planetary magnetic field near Io.

  5. Euphausiid distribution along the Western Antarctic Peninsula—Part A: Development of robust multi-frequency acoustic techniques to identify euphausiid aggregations and quantify euphausiid size, abundance, and biomass

    NASA Astrophysics Data System (ADS)

    Lawson, Gareth L.; Wiebe, Peter H.; Stanton, Timothy K.; Ashjian, Carin J.

    2008-02-01

    Methods were refined and tested for identifying the aggregations of Antarctic euphausiids (Euphausia spp.) and then estimating euphausiid size, abundance, and biomass, based on multi-frequency acoustic survey data. A threshold level of volume backscattering strength for distinguishing euphausiid aggregations from other zooplankton was derived on the basis of published measurements of euphausiid visual acuity and estimates of the minimum density of animals over which an individual can maintain visual contact with its nearest neighbor. Differences in mean volume backscattering strength at 120 and 43 kHz further served to distinguish euphausiids from other sources of scattering. An inversion method was then developed to estimate simultaneously the mean length and density of euphausiids in these acoustically identified aggregations based on measurements of mean volume backscattering strength at four frequencies (43, 120, 200, and 420 kHz). The methods were tested at certain locations within an acoustically surveyed continental shelf region in and around Marguerite Bay, west of the Antarctic Peninsula, where independent evidence was also available from net and video systems. Inversion results at these test sites were similar to net samples for estimated length, but acoustic estimates of euphausiid density exceeded those from nets by one to two orders of magnitude, likely due primarily to avoidance and to a lesser extent to differences in the volumes sampled by the two systems. In a companion study, these methods were applied to the full acoustic survey data in order to examine the distribution of euphausiids in relation to aspects of the physical and biological environment (Lawson, G.L., Wiebe, P.H., Ashjian, C.J., Stanton, T.K., 2008. Euphausiid distribution along the Western Antarctic Peninsula—Part B: Distribution of euphausiid aggregations and biomass, and associations with environmental features. Deep-Sea Research II, this issue, doi:10.1016/j.dsr2.2007.11.014).

  6. Use of MODIS-Derived Fire Radiative Energy to Estimate Smoke Aerosol Emissions over Different Ecosystems

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Kaufman, Yoram J.

    2003-01-01

    Biomass burning is the main source of smoke aerosols and certain trace gases in the atmosphere. However, estimates of the rates of biomass consumption and of the emission of aerosols and trace gases from fires have not attained adequate reliability thus far. Traditional methods for deriving emission rates employ emission factors e(sub x) (in g of species x per kg of biomass burned), which are difficult to measure from satellites. In this era of environmental monitoring from space, fire characterization was not a major consideration in the design of the early satellite-borne remote sensing instruments, such as AVHRR. Although these instruments are able to provide fire location information, they are not adequately sensitive to variations in fire strength or size, because their thermal bands used for fire detection saturate at the lower end of the fire radiative temperature range. As a result, satellite-based emission estimates have hitherto employed proxy techniques using satellite-derived fire pixel counts (which do not express the fire strength or rate of biomass consumption) or burned areas (which can only be obtained after the fire is over). The MODIS sensors, launched into orbit aboard the EOS Terra (1999) and Aqua (2002) satellites, have a much higher saturation level and can not only detect fire locations four times daily but also measure the at-satellite fire radiative energy (a measure of the fire strength) based on the 4 micron channel temperature. MODIS also measures the optical thickness of smoke and other aerosols. Preliminary analysis shows appreciable correlation between the MODIS-derived rates of emission of fire radiative energy and smoke over different regions across the globe. These relationships hold great promise for deriving emission coefficients, which can be used for estimating smoke aerosol emissions from MODIS active fire products. This procedure has the potential to provide more accurate emission estimates in near real time, providing opportunities for various disaster management applications such as alerts, evacuation, and smoke dispersion forecasting.
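
    The emission-coefficient idea can be illustrated in a few lines: fit a linear coefficient between fire radiative power and smoke emission rate on a calibration set, then apply it to new fire detections. The coefficient value, data, and names below are synthetic placeholders, not results from the MODIS analysis.

```python
import numpy as np

# Hypothetical calibration: fire radiative power (MW = MJ/s) versus
# "observed" smoke aerosol emission rate (kg/s), with scatter.
rng = np.random.default_rng(4)
frp_mw = rng.uniform(10, 500, 200)                       # fire radiative power
true_ce = 0.08                                           # kg of smoke per MJ (assumed)
smoke_kg_s = true_ce * frp_mw + rng.normal(0, 1.0, 200)

ce = np.polyfit(frp_mw, smoke_kg_s, 1)[0]                # emission coefficient
new_fire_frp = np.array([120.0, 340.0])                  # MW, new fire detections
print(ce, ce * new_fire_frp)                             # estimated smoke, kg/s
```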

  7. Optimal estimation of the optomechanical coupling strength

    NASA Astrophysics Data System (ADS)

    Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André

    2018-06-01

    We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.

  8. Modeling the Residual Strength of a Fibrous Composite Using the Residual Daniels Function

    NASA Astrophysics Data System (ADS)

    Paramonov, Yu.; Cimanis, V.; Varickis, S.; Kleinhofs, M.

    2016-09-01

    The concept of a residual Daniels function (RDF) is introduced. Together with the concept of a Daniels sequence, the RDF is used for estimating the residual (after some preliminary fatigue loading) static strength of a unidirectional fibrous composite (UFC) and its S-N curve on the basis of test data. Usually, the residual strength is analyzed on the basis of a known S-N curve. In our work, an inverse approach is used: the S-N curve is derived from an analysis of the residual strength. This approach gives a good qualitative description of the process of decreasing residual strength and explains the existence of the fatigue limit. The estimates of the parameters of the corresponding regression model can be interpreted as estimates of the parameters of the local strength of components of the UFC. In order to approach the quantitative experimental estimates of the fatigue life, some ideas based on the mathematics of semi-Markovian processes are employed. Satisfactory results are obtained in processing experimental data on the fatigue life and residual strength of glass/epoxy laminates.
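
    For context, the classical Daniels construction that the residual Daniels function builds on takes the expected strength per fiber of a large bundle of parallel fibers as the maximum of D(x) = x(1 - F(x)), where F is the fiber strength distribution. The sketch below evaluates that maximum for assumed Weibull-distributed fiber strengths; it illustrates the underlying quantity only, not the residual Daniels function or the fatigue model of the paper.

```python
import numpy as np

def daniels_bundle_strength(shape=5.0, scale=1.0, n_grid=10000):
    """Classical Daniels estimate of the strength per fiber of a large
    bundle of parallel fibers with Weibull(shape, scale) strengths:
    maximize D(x) = x * (1 - F(x)) over the applied stress x."""
    x = np.linspace(0.0, 3.0 * scale, n_grid)
    survival = np.exp(-(x / scale) ** shape)      # 1 - F(x) for the Weibull
    d = x * survival                              # Daniels function
    i = np.argmax(d)
    return x[i], d[i]                             # optimal stress, bundle strength

x_star, sigma_bundle = daniels_bundle_strength()
print(f"load per fiber at failure ~ {sigma_bundle:.3f} (peak at x = {x_star:.3f})")
```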

  9. Infiltration of ambient PM 2.5 and levels of indoor generated non-ETS PM 2.5 in residences of four European cities

    NASA Astrophysics Data System (ADS)

    Hänninen, O. O.; Lebret, E.; Ilacqua, V.; Katsouyanni, K.; Künzli, N.; Srám, R. J.; Jantunen, M.

    Ambient fine particle (PM 2.5) concentrations are associated with premature mortality and other health effects. Urban populations spend a majority of their time in indoor environments, and thus exposures are modified by building envelopes. Ambient particles have been found to penetrate indoors very efficiently (penetration efficiency P≈1.0), where they are slowly removed by deposition, adsorption, and other mechanisms. Other particles are generated indoors, even in buildings with no obvious sources such as combustion devices, cooking, or the use of aerosol products. The health effects of indoor-generated particles are currently not well understood, and studying them requires information on concentrations and exposure levels. The current work apportions residential PM 2.5 concentrations measured in the EXPOLIS study into ambient and non-ambient fractions. The results show that the mean infiltration efficiency of PM 2.5 particles is similar in all four cities included in the analysis, ranging from 0.59 in Helsinki to 0.70 in Athens, with Basle and Prague in between. Mean residential indoor concentrations of ambient particles range from 7 (Helsinki) to 21 μg m -3 (Athens). Based on PM 2.5 decay rates estimated in the US, estimates of air exchange rates and indoor source strengths were calculated. The mean air exchange rate was highest in Athens and lowest in Prague. Indoor source strengths were similar in Athens, Basle and Prague, but lower in Helsinki. Some suggestions of possible determinants of indoor-generated non-ETS PM 2.5 were obtained using regression analysis. Building materials and other building and family characteristics were associated with the indoor-generated particle levels. A significant fraction of the indoor concentrations remained unexplained.

  10. Estimation of fatigue strength enhancement for carburized and shot-peened gears

    NASA Astrophysics Data System (ADS)

    Inoue, Katsumi; Kato, Masana

    1994-05-01

    An experimental formula has been proposed to estimate the bending fatigue strength of carburized gears from the hardness and the residual stress. The derivation of the formula is briefly reviewed, and the effectiveness of the formula is demonstrated in this article. Comparison with many test results for carburized and shot-peened gears verifies that the formula is effective for the approximate estimation of the fatigue strength. The formula quantitatively indicates a way of enhancing the fatigue strength, namely increasing the hardness and residual stress at the fillet. The strength is enhanced by about 300 MPa by appropriate shot peening, and it can be improved still further by surface removal by electropolishing.

  11. Simulation of fruit-set and trophic competition and optimization of yield advantages in six Capsicum cultivars using functional-structural plant modelling.

    PubMed

    Ma, Y T; Wubs, A M; Mathieu, A; Heuvelink, E; Zhu, J Y; Hu, B G; Cournède, P H; de Reffye, P

    2011-04-01

    Many indeterminate plants can show wide fluctuations in the pattern of fruit-set and harvest. Fruit-set in these types of plants depends largely on the balance between source (assimilate supply) and sink strength (assimilate demand) within the plant. This study aims to evaluate the ability of functional-structural plant models to simulate different fruit-set patterns among Capsicum cultivars through source-sink relationships. A greenhouse experiment with six Capsicum cultivars characterized by different fruit weight and fruit-set was conducted. Fruit-set patterns and potential fruit sink strength were determined through measurement. The source and sink strengths of other organs were determined via the GREENLAB model, with a description of plant organ weight and dimensions according to the plant topological structure, established from the measured data, as inputs. Parameter optimization was performed using a generalized least squares method for the entire growth cycle. Fruit sink strength differed among cultivars. Vegetative sink strength was generally lower for large-fruited cultivars than for small-fruited ones. The larger the fruit size, the larger the variation in fruit-set and fruit yield. Large-fruited cultivars need a higher source-sink ratio for fruit-set, which means a higher demand for assimilates. Temporal heterogeneity of fruit-set affected both the number and the yield of fruit. The simulation study showed that reduced heterogeneity of fruit-set could be obtained by different approaches: for example, increasing source strength; decreasing vegetative sink strength, the source-sink ratio for fruit-set, or the flower appearance rate; and harvesting individual fruits earlier, before full ripeness. Simulation results showed that, when source strength was increased or vegetative sink strength decreased, fruit-set and fruit weight increased. However, no significant differences were found between large-fruited and small-fruited groups of cultivars regarding the effects of source and vegetative sink strength on fruit-set and fruit weight. When the source-sink ratio at fruit-set decreased, the number of fruits retained on the plant increased, raising competition for assimilates with vegetative organs; therefore, total plant and vegetative dry weights decreased, especially for large-fruited cultivars. The optimization study predicted that temporal heterogeneity of fruit-set and ripening would be reduced when fruits were harvested earlier. Furthermore, there was a 20% increase in the number of extra fruit set.

  12. Variability of Springtime Transpacific Pollution Transport During 2000-2006: The INTEX-B Mission in the Context of Previous Years

    NASA Technical Reports Server (NTRS)

    Pfister, G. G.; Emmons, L. K.; Edwards, D. P.; Arellano, A.; Sachse, G.; Campos, T.

    2010-01-01

    We analyze the transport of pollution across the Pacific during the NASA INTEX-B (Intercontinental Chemical Transport Experiment Part B) campaign in spring 2006 and examine how this year compares to the period from 2000 through 2006. In addition to aircraft measurements of carbon monoxide (CO) collected during INTEX-B, we include in this study multi-year satellite retrievals of CO from the Measurements of Pollution in the Troposphere (MOPITT) instrument and simulations from the chemistry transport model MOZART-4. Model tracers are used to examine the contributions of different source regions and source types to pollution levels over the Pacific. Additional modeling studies are performed to separate the impacts of inter-annual variability in meteorology and dynamics from changes in source strength. Inter-annual variability in the tropospheric CO burden over the Pacific and the US as estimated from the MOPITT data ranges up to 7%, and a somewhat smaller estimate (5%) is derived from the model. When the emissions in the model are kept constant between years, the year-to-year changes are reduced (2%), but they show that, in addition to changes in emissions, variable meteorological conditions also affect transpacific pollution transport. We estimate that about one third of the variability in the tropospheric CO loading over the contiguous US is explained by changes in emissions and about two thirds by changes in meteorology and transport. Biomass burning sources are found to be a larger driver of inter-annual variability in the CO loading than fossil and biofuel sources or photochemical CO production, even though their absolute contributions are smaller. Source contribution analysis shows that the aircraft sampling during INTEX-B was fairly representative of the larger-scale region, but with a slight bias towards higher influence from Asian contributions.

  13. On the Transportability of Ms Versus Yield Relationships

    NASA Astrophysics Data System (ADS)

    Patton, H. J.; Randall, G. E.

    2014-12-01

    A physical basis for transporting magnitude (M) versus yield (W) relationships between test sites is essential for improved yield estimation. A case in point is an Ms relationship transported from the Nevada Test Site, which gives W estimates for North Korean tests roughly a factor of two larger than mb-based estimates. In order to test the performance of this relation, we transport it to Semipalatinsk (STS), where W and source media information are available. The transported Ms - W relation was developed for water-saturated tuff/rhyolite, and Rayleigh-wave generation was corrected for the effects of source medium compaction due to spall slapdown. Coupling variations with burial depth and the effects of compaction, both functions of W in tuff/rhyolite, are mitigated for shots in hard rock. As such, it is satisfying that Ms for STS shots are seen to scale similarly to the transported relation, ~0.8log[W]. However, they are offset downward by 0.4 - 0.5 magnitude units. A negative offset is consistent with the effects of tectonic release, but research has shown the inadequacy of double-couple (DC) mechanisms to improve correlations of moment magnitude Mw - W relations. Source medium properties are not a factor because larger-amplitude Green's functions in weak rock trade off with reduced source strength relative to explosions in hard rock. In this paper, the role of late-time damage due to non-linear, free-surface interactions, modeled with an Mzz source, is explored. Combining this source with DC mechanisms, we show the non-uniqueness of models satisfying long-period surface-wave observations, and investigate overcoming this difficulty with full waveform modeling of Borovoye seismograms.

  14. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.
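
    The surrogate idea described above can be sketched compactly: train a small feed-forward neural network on (damage parameters → residual strength) pairs produced offline by high-fidelity fracture runs, then query it in real time. The feature names, network size, and the synthetic "simulation" table below are assumptions for illustration, not the authors' design of experiments or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a design-of-experiments table of fracture runs:
# hypothetical discrete-source damage parameters (crack length, crack angle,
# distance to nearest stiffener) versus residual strength from the
# (here faked) high-fidelity simulation.
rng = np.random.default_rng(5)
n_runs = 300
crack_len = rng.uniform(10, 150, n_runs)          # mm
crack_ang = rng.uniform(0, 90, n_runs)            # degrees
stiff_dist = rng.uniform(5, 100, n_runs)          # mm
strength = (400 - 1.5 * crack_len + 0.4 * stiff_dist
            + 20 * np.cos(np.radians(crack_ang))
            + rng.normal(0, 5, n_runs))           # kN, synthetic "truth"

X = np.column_stack([crack_len, crack_ang, stiff_dist])
surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32),
                                       max_iter=5000, random_state=0))
surrogate.fit(X, strength)

# Real-time query for a newly observed damage state.
print(surrogate.predict([[80.0, 30.0, 40.0]]))    # predicted residual strength, kN
```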

  15. Cross-sectional association between muscle strength and self-reported physical function in 195 hip osteoarthritis patients.

    PubMed

    Hall, Michelle; Wrigley, Tim V; Kasza, Jessica; Dobson, Fiona; Pua, Yong Hao; Metcalf, Ben R; Bennell, Kim L

    2017-02-01

    This study aimed to evaluate associations between the strength of selected hip and knee muscles and self-reported physical function, and their clinical relevance, in men and women with hip osteoarthritis (OA). Cross-sectional data from 195 participants with symptomatic hip OA were used. Peak isometric torque of the hip extensors, flexors, and abductors, and the knee extensors was measured, along with physical function using the Western Ontario and McMaster Universities Osteoarthritis Index questionnaire. Separate linear regressions in men and women were used to determine the association between strength and physical function, accounting for age, pain, and radiographic disease severity. Subsequently, the magnitudes of strength associated with estimates of minimal clinically important improvement (MCII) in physical function were estimated according to the severity of difficulty with physical function. For men, greater strength of the hip extensors, hip flexors, and knee extensors was associated with better physical function. For women, greater strength of all tested muscles was associated with better physical function. For men and women, increases in muscle strength of 17-32%, 133-223%, and 151-284% may be associated with estimates of MCII in physical function for those with mild, moderate, and severe physical dysfunction, respectively. Greater isometric strength of specific hip and thigh muscle groups may be associated with better self-reported physical function in men and women. In people with mild physical dysfunction, an estimate of MCII in physical function may be associated with attainable increases in strength. However, in patients with more severe dysfunction, greater and perhaps unattainable strength increases may be associated with an estimate of MCII in physical function. Longitudinal studies are required to validate these observations. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Development of estimation system of knee extension strength using image features in ultrasound images of rectus femoris

    NASA Astrophysics Data System (ADS)

    Murakami, Hiroki; Watanabe, Tsuneo; Fukuoka, Daisuke; Terabayashi, Nobuo; Hara, Takeshi; Muramatsu, Chisako; Fujita, Hiroshi

    2016-04-01

    The word "Locomotive syndrome" has been proposed to describe the state of requiring care by musculoskeletal disorders and its high-risk condition. Reduction of the knee extension strength is cited as one of the risk factors, and the accurate measurement of the strength is needed for the evaluation. The measurement of knee extension strength using a dynamometer is one of the most direct and quantitative methods. This study aims to develop a system for measuring the knee extension strength using the ultrasound images of the rectus femoris muscles obtained with non-invasive ultrasonic diagnostic equipment. First, we extract the muscle area from the ultrasound images and determine the image features, such as the thickness of the muscle. We combine these features and physical features, such as the patient's height, and build a regression model of the knee extension strength from training data. We have developed a system for estimating the knee extension strength by applying the regression model to the features obtained from test data. Using the test data of 168 cases, correlation coefficient value between the measured values and estimated values was 0.82. This result suggests that this system can estimate knee extension strength with high accuracy.

  17. CADDIS Volume 2. Sources, Stressors and Responses: Ionic Strength

    EPA Pesticide Factsheets

    Introduction to the ionic strength module, when to list ionic strength as a candidate cause, ways to measure ionic strength, simple and detailed conceptual diagrams for ionic strength, ionic strength module references and literature reviews.

  18. New Methods For Interpretation Of Magnetic Gradient Tensor Data Using Eigenanalysis And The Normalized Source Strength

    NASA Astrophysics Data System (ADS)

    Clark, D.

    2012-12-01

    In the future, acquisition of magnetic gradient tensor data is likely to become routine. New methods developed for the analysis of magnetic gradient tensor data can also be applied to high-quality conventional TMI surveys that have been processed, using Fourier filtering techniques or otherwise, to calculate magnetic vector and tensor components. This approach is, in fact, the only practical way at present to analyze vector component data, as measurements of vector components are seriously afflicted by motion noise, which is not as serious a problem for gradient components. In many circumstances, an optimal approach to extracting maximum information from magnetic surveys would be to combine analysis of measured gradient tensor data with vector components calculated from TMI measurements. New methods for inverting gradient tensor surveys to obtain source parameters have been developed for a number of elementary, but useful, models. These include point dipole (sphere), vertical line of dipoles (narrow vertical pipe), line of dipoles (horizontal cylinder), thin dipping sheet, horizontal line current, and contact models. A key simplification is the use of the eigenvalues and associated eigenvectors of the tensor. The normalized source strength (NSS), calculated from the eigenvalues, is a particularly useful rotational invariant that peaks directly over 3D compact sources, 2D compact sources, thin sheets, and contacts, and is independent of magnetization direction for these sources (and only very weakly dependent on magnetization direction in general). In combination, the NSS and its vector gradient enable estimation of the Euler structural index, thereby constraining source geometry, and determine source locations uniquely. NSS analysis can be extended to other useful models, such as vertical pipes, by calculating eigenvalues of the vertical derivative of the gradient tensor. Once source locations are determined, information on source magnetizations can be obtained by simple linear inversion of measured or calculated vector and/or tensor data. Inversions based on the vector gradient of the NSS over the Tallawang magnetite deposit in central New South Wales yielded good agreement between the inferred geometry of the tabular magnetite skarn body and drill hole intersections. Inverted magnetizations are consistent with magnetic property measurements on drill core samples from this deposit. Similarly, inversions of calculated tensor data over the Mount Leyshold gold-mineralized porphyry system in Queensland yield good estimates of the centroid location, total magnetic moment, and magnetization direction of the magnetite-bearing potassic alteration zone that are consistent with geological and petrophysical information.
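
    To make the NSS concrete, the sketch below evaluates one commonly used form of it, NSS = sqrt(-λ2² - λ1·λ3) with λ1 ≥ λ2 ≥ λ3 the eigenvalues of the (symmetric, traceless) magnetic gradient tensor, along a profile over a synthetic buried dipole; the NSS peaks directly over the source. The dipole parameters, the finite-difference evaluation of the tensor, and the names are assumptions for illustration, not taken from the abstract.

```python
import numpy as np

MU0_4PI = 1e-7  # T*m/A

def dipole_field(r, moment):
    """Magnetic field of a point dipole with moment (A*m^2) at offset r (m)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0_4PI * (3.0 * np.dot(moment, rhat) * rhat - moment) / rn ** 3

def gradient_tensor(r, moment, h=1e-3):
    """Magnetic gradient tensor at r by central finite differences."""
    g = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        g[:, j] = (dipole_field(r + dr, moment) - dipole_field(r - dr, moment)) / (2 * h)
    return 0.5 * (g + g.T)          # symmetrize (analytically symmetric)

def normalized_source_strength(g):
    """NSS from the ordered eigenvalues l1 >= l2 >= l3 of the gradient tensor."""
    l3, l2, l1 = np.linalg.eigvalsh(g)       # eigvalsh returns ascending order
    return np.sqrt(max(-l2 ** 2 - l1 * l3, 0.0))

# NSS along a profile above a buried dipole: it peaks over the source at x = 0.
moment = np.array([0.0, 0.0, 1.0e6])         # A*m^2, arbitrary direction
depth = 50.0                                 # m
for x in np.linspace(-200.0, 200.0, 9):
    g = gradient_tensor(np.array([x, 0.0, depth]), moment)
    print(f"x = {x:6.1f} m   NSS = {normalized_source_strength(g):.3e}")
```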

  19. In-air calibration of an HDR 192Ir brachytherapy source using therapy ion chambers.

    PubMed

    Patel, Narayan Prasad; Majumdar, Bishnu; Vijiyan, V; Hota, Pradeep K

    2005-01-01

    The Gammamed Plus 192Ir high dose rate brachytherapy sources were calibrated using therapy-level ionization chambers (0.1 and 0.6 cc) and a well-type chamber. The aim of the present study was to assess the accuracy and suitability of therapy-level chambers for the in-air calibration of brachytherapy sources in routine clinical practice. In the calibration procedure using therapy ion chambers, the air kerma was measured at several distances from the source in a specially designed jig. The room scatter correction factor was determined by the superimposition method based on the inverse square law. Various other correction factors were applied to the measured air kerma values at multiple distances, and the mean value was taken to determine the air kerma strength of the source. For the four sources, the overall mean deviation between the measured source strength and that quoted by the manufacturer was -2.04% (N = 18) for the well-type chamber. The mean deviation for the 0.6 cc chamber was -1.48% (N = 19) with the buildup cap and 0.11% (N = 22) without it. The mean deviation for the 0.1 cc chamber was -0.24% (N = 27). The results suggest that the excess ionization for the 0.6 cc therapy ion chamber without buildup cap was about 2.74% and 1.99% at 10 and 20 cm from the source, respectively. Scattered radiation measured by the 0.1 cc and 0.6 cc chambers at a 10 cm measurement distance was about 1.1% and 0.33% of the primary radiation, respectively. The study concludes that the results obtained with therapy-level ionization chambers were highly reproducible and in good agreement with both the well-type ionization chamber results and the source strength quoted by the supplier. The calibration procedure with therapy ionization chambers is equally competent and suitable for routine calibration of brachytherapy sources.
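
    The multi-distance, inverse-square superimposition idea can be sketched as a two-parameter fit: model the corrected chamber reading as a primary term falling off as 1/d² plus an approximately distance-independent room-scatter term, and read the air kerma strength off the primary component. The readings, the scatter level, and the unit handling below are placeholders for illustration, not values or correction factors from the paper.

```python
import numpy as np

# Hypothetical corrected chamber readings M(d) (air kerma rate, uGy/h)
# at several source-to-chamber distances d (cm).
d = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
readings = np.array([4.01e6, 1.79e6, 1.01e6, 6.50e5, 4.54e5])

# Superimposition / inverse-square model: M(d) = Sk / d^2 + M_scatter,
# where Sk is the air kerma strength and M_scatter an approximately
# distance-independent room-scatter term.
A = np.column_stack([1.0 / d ** 2, np.ones_like(d)])
(sk_cm2, m_scatter), *_ = np.linalg.lstsq(A, readings, rcond=None)

sk = sk_cm2 * 1e-4          # convert uGy*cm^2/h to uGy*m^2/h (= U)
print(f"air kerma strength ~ {sk:.0f} U, room scatter ~ {m_scatter:.0f} uGy/h")
```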

  20. Satellite lidar and radar: Key components of the future climate observing system

    NASA Astrophysics Data System (ADS)

    Winker, D. M.

    2017-12-01

    Cloud feedbacks represent the dominant source of uncertainty in estimates of climate sensitivity, and aerosols represent the largest source of uncertainty in climate forcing. Both observation of long-term changes and observational constraints on the processes responsible for those changes are necessary. However, the existing 30-year record of passive satellite observations has not yet provided constraints sufficient to significantly reduce these uncertainties. We now have more than a decade of experience with active sensors flying in the A-Train. These new observations have demonstrated the strengths of active sensors and the benefits of continued and more advanced active sensors. This talk will discuss the multiple roles of active sensors as an essential component of a global climate observing system.

  1. A comprehensive experimental characterization of the iPIX gamma imager

    NASA Astrophysics Data System (ADS)

    Amgarou, K.; Paradiso, V.; Patoz, A.; Bonnet, F.; Handley, J.; Couturier, P.; Becker, F.; Menaa, N.

    2016-08-01

    The results of more than 280 different experiments aimed at exploring the main features and performance of a newly developed gamma imager, called iPIX, are summarized in this paper. iPIX is designed to quickly localize radioactive sources while estimating the ambient dose equivalent rate at the measurement point. It integrates a 1 mm thick CdTe detector directly bump-bonded to a Timepix chip, a tungsten coded-aperture mask, and a mini RGB camera. It also represents a major technological breakthrough in terms of lightness, compactness, usability, response sensitivity, and angular resolution. As an example of its key strengths, an 241Am source with a dose rate of only a few nSv/h can be localized in less than one minute.

  2. Positron radiography of ignition-relevant ICF capsules

    NASA Astrophysics Data System (ADS)

    Williams, G. J.; Chen, Hui; Field, J. E.; Landen, O. L.; Strozzi, D. J.

    2017-12-01

    Laser-generated positrons are evaluated as a probe source to radiograph in-flight ignition-relevant inertial confinement fusion capsules. Current ultraintense laser facilities are capable of producing 2 × 1012 relativistic positrons in a narrow energy bandwidth and short time duration. Monte Carlo simulations suggest that the unique characteristics of such positrons allow for the reconstruction of both capsule shell radius and areal density between 0.002 and 2 g/cm2. The energy-downshifted positron spectrum and angular scattering of the source particles are sufficient to constrain the conditions of the capsule between preshot and stagnation. We evaluate the effects of magnetic fields near the capsule surface using analytic estimates where it is shown that this diagnostic can tolerate line integrated field strengths of 100 T mm.

  3. CADDIS Volume 2. Sources, Stressors and Responses: Ionic Strength - Simple Conceptual Diagram

    EPA Pesticide Factsheets

    Introduction to the ionic strength module, when to list ionic strength as a candidate cause, ways to measure ionic strength, simple and detailed conceptual diagrams for ionic strength, ionic strength module references and literature reviews.

  4. CADDIS Volume 2. Sources, Stressors and Responses: Ionic Strength - Detailed Conceptual Diagram

    EPA Pesticide Factsheets

    Introduction to the ionic strength module, when to list ionic strength as a candidate cause, ways to measure ionic strength, simple and detailed conceptual diagrams for ionic strength, ionic strength module references and literature reviews.

  5. The biomass burning contribution to climate-carbon-cycle feedback

    NASA Astrophysics Data System (ADS)

    Harrison, Sandy P.; Bartlein, Patrick J.; Brovkin, Victor; Houweling, Sander; Kloster, Silvia; Prentice, I. Colin

    2018-05-01

    Temperature exerts strong controls on the incidence and severity of fire. All else equal, warming is expected to increase fire-related carbon emissions, and thereby atmospheric CO2. But the magnitude of this feedback is very poorly known. We use a single-box model of the land biosphere to quantify this positive feedback from satellite-based estimates of biomass burning emissions for 2000-2014 CE and from sedimentary charcoal records for the millennium before the industrial period. We derive an estimate of the centennial-scale feedback strength of 6.5 ± 3.4 ppm CO2 per degree of land temperature increase, based on the satellite data. However, this estimate is poorly constrained, and is largely driven by the well-documented dependence of tropical deforestation and peat fires (primarily anthropogenic) on climate variability patterns linked to the El Niño-Southern Oscillation. Palaeo-data from pre-industrial times provide the opportunity to assess the fire-related climate-carbon-cycle feedback over a longer period, with less pervasive human impacts. Past biomass burning can be quantified based on variations in either the concentration and isotopic composition of methane in ice cores (with assumptions about the isotopic signatures of different methane sources) or the abundances of charcoal preserved in sediments, which reflect landscape-scale changes in burnt biomass. These two data sources are shown here to be coherent with one another. The more numerous data from sedimentary charcoal, expressed as normalized anomalies (fractional deviations from the long-term mean), are then used - together with an estimate of mean biomass burning derived from methane isotope data - to infer a feedback strength of 5.6 ± 3.2 ppm CO2 per degree of land temperature and (for a climate sensitivity of 2.8 K) a gain of 0.09 ± 0.05. This finding indicates that the positive carbon cycle feedback from increased fire provides a substantial contribution to the overall climate-carbon-cycle feedback on centennial timescales. Although the feedback estimates from palaeo- and satellite-era data are in agreement, this is likely fortuitous because of the pervasive influence of human activities on fire regimes during recent decades.
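
    As a back-of-envelope check of the quoted gain, one can combine the palaeo feedback strength with a simple logarithmic CO2-temperature relation. The functional form g = γ·dT/dC and the 280 ppm reference concentration are assumptions made here for illustration, not taken from the paper.

    ```python
    import math

    # gamma: fire carbon-cycle sensitivity (ppm CO2 per K of land warming);
    # S: climate sensitivity (K per CO2 doubling); C: assumed reference CO2 (ppm).
    gamma = 5.6
    S = 2.8
    C = 280.0

    dT_dC = S / (C * math.log(2.0))   # ~0.014 K per ppm near C
    gain = gamma * dT_dC
    print(round(gain, 2))             # ~0.08, close to the quoted 0.09 +/- 0.05
    ```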

  6. Train-borne Measurements of Enhanced Wet Season Methane Emissions in Northern Australia - Implications for Australian Tropical Wetland Emissions

    NASA Astrophysics Data System (ADS)

    Deutscher, N. M.; Griffith, D. W.; Paton-Walsh, C.

    2008-12-01

    We present the first transect measurements of CH4, CO2, CO and N2O taken on the Ghan railway travelling on a N-S transect of the Australian continent between Adelaide (34.9°S, 138.6°E) and Darwin (12.5°S, 130.9°E). The Ghan crosses Australia from the mainly agricultural mid-latitude south through the arid interior to the wet-dry tropical savannah south of and around Darwin. In the 2008 wet season (February) we observed a significant latitudinal gradient of CH4 increasing towards the north. The same pattern was observed in the late 2008 wet season (March-April), with a smaller latitudinal gradient. These will be compared with a dry season transect, to be undertaken in September/October 2008. The Air Pollution Model (TAPM), a regional scale prognostic meteorological model, is used to estimate the surface methane source strength required to explain the observed latitudinal gradient in CH4 in the wet season, and to investigate the source type. Fluxes from cattle and termites together contribute up to 25% of the enhancements seen, leaving wetlands as the major source of wet season methane in the Australian tropics. Wetlands are the largest natural source of methane to the atmosphere, and tropical wetlands are responsible for the majority of the interannual variation in methane source strength. We attempt to quantify the annual methane flux contributed by anaerobic organic breakdown due to wet-season flooding in the tropical Northern Territory.

  7. Fatigue Strength Estimation Based on Local Mechanical Properties for Aluminum Alloy FSW Joints

    PubMed Central

    Sillapasa, Kittima; Mutoh, Yoshiharu; Miyashita, Yukio; Seo, Nobushiro

    2017-01-01

    Overall fatigue strengths and hardness distributions of the aluminum alloy similar and dissimilar friction stir welding (FSW) joints were determined. The local fatigue strengths as well as local tensile strengths were also obtained by using small round bar specimens extracted from specific locations, such as the stir zone, heat affected zone, and base metal. It was found from the results that fatigue fracture of the FSW joint plate specimen occurred at the location of the lowest local fatigue strength as well as the lowest hardness, regardless of microstructural evolution. To estimate the fatigue strengths of aluminum alloy FSW joints from the hardness measurements, the relationship between fatigue strength and hardness for aluminum alloys was investigated based on the present experimental results and a wide range of data available in the literature. The relationship was found to be σa (R = −1) = 1.68 HV (σa is in MPa and HV has no unit). It was also confirmed that the estimated fatigue strengths were in good agreement with the experimental results for aluminum alloy FSW joints. PMID:28772543

  8. Fatigue Strength Estimation Based on Local Mechanical Properties for Aluminum Alloy FSW Joints.

    PubMed

    Sillapasa, Kittima; Mutoh, Yoshiharu; Miyashita, Yukio; Seo, Nobushiro

    2017-02-15

    Overall fatigue strengths and hardness distributions of the aluminum alloy similar and dissimilar friction stir welding (FSW) joints were determined. The local fatigue strengths as well as local tensile strengths were also obtained by using small round bar specimens extracted from specific locations, such as the stir zone, heat affected zone, and base metal. It was found from the results that fatigue fracture of the FSW joint plate specimen occurred at the location of the lowest local fatigue strength as well as the lowest hardness, regardless of microstructural evolution. To estimate the fatigue strengths of aluminum alloy FSW joints from the hardness measurements, the relationship between fatigue strength and hardness for aluminum alloys was investigated based on the present experimental results and a wide range of data available in the literature. The relationship was found to be σa (R = -1) = 1.68 HV (σa is in MPa and HV has no unit). It was also confirmed that the estimated fatigue strengths were in good agreement with the experimental results for aluminum alloy FSW joints.
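
    The reported correlation is simple enough to apply directly; a minimal helper using only the relation quoted above is sketched below, with a hypothetical hardness value as input.

    ```python
    def fatigue_strength_from_hardness(hv):
        """Estimate the fully reversed (R = -1) fatigue strength in MPa of an
        aluminium-alloy FSW joint region from its Vickers hardness HV, using the
        empirical relation sigma_a = 1.68 * HV reported above."""
        return 1.68 * hv

    # Example: a hypothetical heat-affected-zone hardness of 60 HV
    print(fatigue_strength_from_hardness(60.0))   # ~100.8 MPa
    ```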

  9. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with a design of experiments, an artificial neural network is developed that takes discrete-source damage parameters as input and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving the accuracy of the residual strength training data does, in turn, improve the accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.

  10. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective: Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods: A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results: The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions: MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
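
    A minimal generative sketch of the dual-variability-source idea is shown below: one noise term perturbs the input of a sigmoidal recruitment curve and a second perturbs its output. The sigmoid form, noise magnitudes, and log-amplitude scale are illustrative assumptions, not the authors' fitted model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def io_sigmoid(x, low=-5.0, high=0.5, x50=1.0, slope=8.0):
        """Log10 MEP amplitude vs. stimulation strength: a generic sigmoid."""
        return low + (high - low) / (1.0 + np.exp(-slope * (x - x50)))

    def simulate_meps(stim, sd_in=0.03, sd_out=0.15):
        """Dual-variability-source sketch: one noise term acts on the input side
        of the recruitment nonlinearity (excitability fluctuations), one on the
        output side (e.g. measurement/peripheral variability). Parameter values
        are illustrative only."""
        x_eff = stim + rng.normal(0.0, sd_in, size=stim.shape)    # input-side noise
        log_amp = io_sigmoid(x_eff) + rng.normal(0.0, sd_out, size=stim.shape)
        return 10.0 ** log_amp                                     # MEP amplitude, mV

    stim = np.repeat(np.linspace(0.8, 1.3, 11), 20)   # normalized stimulator output
    meps = simulate_meps(stim)
    # The spread and skewness of the simulated MEPs change along the IO curve,
    # mirroring the behaviour the dual-source model is designed to capture.
    ```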

  11. Correlated flux densities from VLBI observations with the DSN

    NASA Technical Reports Server (NTRS)

    Coker, R. F.

    1992-01-01

    Correlated flux densities of extragalactic radio sources in the very long baseline interferometry (VLBI) astrometric catalog are required for the VLBI tracking of Galileo, Mars Observer, and future missions. A system to produce correlated and total flux density catalogs was developed to meet these requirements. A correlated flux density catalog of 274 sources, accurate to about 20 percent, was derived from more than 5000 DSN VLBI observations at 2.3 GHz (S-band) and 8.4 GHz (X-band) using 43 VLBI radio reference frame experiments during the period 1989-1992. Various consistency checks were carried out to ensure the accuracy of the correlated flux densities. All observations were made on the California-Spain and California-Australia DSN baselines using the Mark 3 wideband data acquisition system. A total flux density catalog, accurate to about 20 percent, with data on 150 sources, was also created. Together, these catalogs can be used to predict source strengths to assist in the scheduling of VLBI tracking passes. In addition, for those sources with sufficient observations, a rough estimate of source structure parameters can be made.

  12. Constraints on Smoke Injection Height, Source Strength, and Transports from MISR and MODIS

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph A.; Petrenko, Mariya; Val Martin, Maria; Chin, Mian

    2014-01-01

    The motivation for the AeroCom BB (Biomass Burning) Experiment AOD (Aerosol Optical Depth) study: we have a substantial set of satellite wildfire plume AOD snapshots and injection heights to help calibrate model/inventory performance. We are (1) adding more fire source-strength cases, (2) using MISR to improve the AOD constraints, and (3) adding 2008 global injection heights. We selected GFED3-daily due to its good overall source-strength performance, but any inventory can be tested. This is a joint effort to test multiple global models in order to draw robust BB injection height and emission strength conclusions. We provide satellite-based injection height and smoke plume AOD climatologies.

  13. Estimation of Confined Peak Strength of Crack-Damaged Rocks

    NASA Astrophysics Data System (ADS)

    Bahrani, Navid; Kaiser, Peter K.

    2017-02-01

    It is known that the unconfined compressive strength of rock decreases with increasing density of geological features such as micro-cracks, fractures, and veins both at the laboratory specimen and rock block scales. This article deals with the confined peak strength of laboratory-scale rock specimens containing grain-scale strength dominating features such as micro-cracks. A grain-based distinct element model, whereby the rock is simulated with grains that are allowed to deform and break, is used to investigate the influence of the density of cracks on the rock strength under unconfined and confined conditions. A grain-based specimen calibrated to the unconfined and confined strengths of intact and heat-treated Wombeyan marble is used to simulate rock specimens with varying crack densities. It is demonstrated how such cracks affect the peak strength, stress-strain curve and failure mode with increasing confinement. The results of numerical simulations in terms of unconfined and confined peak strengths are used to develop semi-empirical relations that relate the difference in strength between the intact and crack-damaged rocks to the confining pressure. It is shown how these relations can be used to estimate the confined peak strength of a rock with micro-cracks when the unconfined and confined strengths of the intact rock and the unconfined strength of the crack-damaged rock are known. This approach for estimating the confined strength of crack-damaged rock specimens, called strength degradation approach, is then verified by application to published laboratory triaxial test data.

  14. Sound transmission in ducts containing nearly choked flows

    NASA Technical Reports Server (NTRS)

    Callegari, A. J.; Myers, M. K.

    1979-01-01

    The nonlinear theory previously developed by the authors (1977, 1978) is used to obtain numerical results for sound transmission through a nearly choked throat in a variable-area duct. Parametric studies are performed for different source locations, strengths and frequencies. It is shown that the nonlinear interactions in the throat region generate superharmonics of the fundamental (source) frequency throughout the duct. The amplitudes of these superharmonics increase as the source parameters (frequency and strength) are increased toward values leading to acoustic shocks. For a downstream source, superharmonics carry about 20% of the total acoustic power as shocking conditions are approached. For the source strength levels and frequencies considered, streaming effects are negligible.

  15. Statistical methods for thermonuclear reaction rates and nucleosynthesis simulations

    NASA Astrophysics Data System (ADS)

    Iliadis, Christian; Longland, Richard; Coc, Alain; Timmes, F. X.; Champagne, Art E.

    2015-03-01

    Rigorous statistical methods for estimating thermonuclear reaction rates and nucleosynthesis are becoming increasingly established in nuclear astrophysics. The main challenge being faced is that experimental reaction rates are highly complex quantities derived from a multitude of different measured nuclear parameters (e.g., astrophysical S-factors, resonance energies and strengths, particle and γ-ray partial widths). We discuss the application of the Monte Carlo method to two distinct, but related, questions. First, given a set of measured nuclear parameters, how can one best estimate the resulting thermonuclear reaction rates and associated uncertainties? Second, given a set of appropriate reaction rates, how can one best estimate the abundances from nucleosynthesis (i.e., reaction network) calculations? The techniques described here provide probability density functions that can be used to derive statistically meaningful reaction rates and final abundances for any desired coverage probability. Examples are given for applications to s-process neutron sources, core-collapse supernovae, classical novae, and Big Bang nucleosynthesis.
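
    The Monte Carlo idea can be illustrated with the standard narrow-resonance rate expression: sample the resonance strengths from lognormal distributions and read off percentiles of the resulting rate. The resonance energies, strengths, and factor uncertainties below are invented for illustration and do not correspond to any real reaction.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def narrow_resonance_rate(T9, E_r, omega_gamma, mu):
        """N_A<sigma v> (cm^3 mol^-1 s^-1) from narrow resonances, using the
        standard expression with resonance energies E_r and strengths
        omega_gamma in MeV and reduced mass mu in amu."""
        pref = 1.5399e11 / (mu * T9) ** 1.5
        return pref * np.sum(omega_gamma * np.exp(-11.605 * E_r / T9), axis=-1)

    # Hypothetical two-resonance rate with lognormal strength uncertainties
    # (factor uncertainties f applied to the central values); all numbers invented.
    E_r = np.array([0.151, 0.323])            # MeV
    wg_central = np.array([5.0e-9, 3.0e-7])   # MeV
    f = np.array([1.4, 1.2])                  # factor uncertainties

    n = 10_000
    samples = wg_central * rng.lognormal(0.0, np.log(f), size=(n, 2))
    rates = narrow_resonance_rate(T9=0.1, E_r=E_r, omega_gamma=samples, mu=0.97)
    low, med, high = np.percentile(rates, [16, 50, 84])
    print(f"rate at T9=0.1: {med:.3e} (+{high-med:.2e} / -{med-low:.2e})")
    ```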

  16. Quantifying root-reinforcement of river bank soils by four Australian tree species

    NASA Astrophysics Data System (ADS)

    Docker, B. B.; Hubble, T. C. T.

    2008-08-01

    The increased shear resistance of soil due to root-reinforcement by four common Australian riparian trees, Casuarina glauca, Eucalyptus amplifolia, Eucalyptus elata and Acacia floribunda, was determined in-situ with a field shear-box. Root pull-out strengths and root tensile-strengths were also measured and used to evaluate the utility of the root-reinforcement estimation models that assume simultaneous failure of all roots at the shear plane. Field shear-box results indicate that tree roots fail progressively rather than simultaneously. Shear-strengths calculated for root-reinforced soil assuming simultaneous root failure yielded values between 50% and 215% higher than directly measured shear-strengths. The magnitude of the overestimate varies among species and probably results from differences in both the geometry of the root-system and the tensile strengths of the root material. Soil blocks under A. floribunda, which presents many well-spread, highly branched fine roots with relatively high tensile strength, conformed most closely with root-model estimates, whereas E. amplifolia, which presents a few large, unbranched vertical roots concentrated directly beneath the tree stem and of relatively low tensile strength, deviated furthest from model-estimated shear-strengths. These results suggest that considerable caution be exercised when applying estimates of increased shear-strength due to root-reinforcement in riverbank stability modelling. Nevertheless, increased soil shear strength provided by tree roots can be calculated from knowledge of the Root Area Ratio (RAR) at the shear plane. At equivalent RAR values, A. floribunda demonstrated the greatest earth reinforcement potential of the four species studied.
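
    For reference, the simultaneous-failure estimate that the field data are compared against is commonly written as ΔS = 1.2·Σ Tᵢ·(aᵢ/A) (a Wu/Waldron-type model). The sketch below applies this form with invented numbers and, per the results above, should be read as an upper bound on the measured reinforcement.

    ```python
    def wu_root_reinforcement(tensile_strengths_kpa, root_areas_m2, shear_area_m2):
        """Simultaneous-failure (Wu/Waldron-type) estimate of the added soil shear
        strength from roots crossing the shear plane:
            dS = 1.2 * sum(T_i * a_i / A)
        The field results above show this can overestimate measured reinforcement
        by ~50-215%, so treat the output as an upper bound."""
        return 1.2 * sum(t * a / shear_area_m2
                         for t, a in zip(tensile_strengths_kpa, root_areas_m2))

    # Illustrative numbers only: three roots crossing a 0.25 m^2 shear plane.
    T = [18e3, 12e3, 25e3]          # root tensile strengths, kPa
    a = [3e-5, 8e-5, 1e-5]          # root cross-sectional areas, m^2
    print(wu_root_reinforcement(T, a, 0.25), "kPa of added shear strength")
    ```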

  17. Relationships between muscular strength and the level of energy sources in the muscle.

    PubMed

    Wit, A; Juskiak, R; Wit, B; Zieliński, J R

    1978-01-01

    Relationships between muscular strength and the level of energy sources in the muscle. Acta Physiol. Pol., 1978, 29 (2): 139--151. An attempt was made to establish a relationship between the post-exercise changes in the level of anaerobic energy sources and changes in muscular strength. The gastrocnemius muscle of Wistar rats was examined. The muscle strength was measured by resistance tensometry. In muscle specimens, ATP, CP and glycogen contents were determined. It was demonstrated that changes in the post-exercise muscle response to an electric stimulus have a phasic character resembling the overcompensation curve. The percentage changes in the content of anaerobic energy sources in the muscle after contractions of varying duration also suggest overcompensation of these substances in the muscle. The parallelism between the time of appearance of the peak overcompensation phase in muscle strength and in the post-exercise levels of muscular ATP, CP and glycogen contents suggests a causal relationship between these changes.

  18. Approach to identifying pollutant source and matching flow field

    NASA Astrophysics Data System (ADS)

    Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang

    2013-07-01

    Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. However, this identification is a difficult inverse problem. This paper carries out some studies on this issue. An approach using single-sensor information with noise was developed to identify a sudden, continuously emitting trace pollutant source in a steady velocity field. The approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical measured concentration sequences at the sensor position, which are obtained from multiple hypotheses on the three source parameters. Source identification is then achieved by a global search for the optimal values, with maximum location probability as the objective function. Considering the large computational load resulting from this global search, a local fine-mesh source search method based on a priori coarse-mesh location probabilities is further used to improve the efficiency of identification. Studies have shown that the flow field has a very important influence on source identification. Therefore, we also discuss the impact on identification of non-matching flow fields with estimation deviations. Based on this analysis, a method for matching an accurate flow field is presented to improve the accuracy of identification. In order to verify the practical application of the above method, an experimental system simulating a sudden pollution process in a steady flow field was set up and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters of the pollutant source in the experiment (position, emission strength and initial emission time) can be estimated by using the flow-field matching and source identification method.

  19. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    PubMed

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

    The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The magnetometer system was included because it is expected to be more sensitive to brain stem sources than a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strengths, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.

  20. Astronautic Structures Manual, Volume 3

    NASA Technical Reports Server (NTRS)

    1975-01-01

    This document (Volumes I, II, and III) presents a compilation of industry-wide methods in aerospace strength analysis that can be carried out by hand, that are general enough in scope to cover most structures encountered, and that are sophisticated enough to give accurate estimates of the actual strength expected. It provides analysis techniques for the elastic and inelastic stress ranges. It serves not only as a catalog of methods not usually available, but also as a reference source for the background of the methods themselves. An overview of the manual is as follows: Section A is a general introduction of methods used and includes sections on loads, combined stresses, and interaction curves; Section B is devoted to methods of strength analysis; Section C is devoted to the topic of structural stability; Section D is on thermal stresses; Section E is on fatigue and fracture mechanics; Section F is on composites; Section G is on rotating machinery; and Section H is on statistics. These three volumes supersede Volumes I and II, NASA TM X-60041 and NASA TM X-60042, respectively.

  1. Assessment of Carrying Capacity of Timber Element Using SBRA Method

    NASA Astrophysics Data System (ADS)

    Kraus, Michal

    2017-10-01

    Wood as a building material has a significant perspective in the context of non-renewable energy sources and the production of greenhouse gas emissions. The subject of this paper is to verify the carrying capacity of a timber element using the probabilistic Simulation Based Reliability Assessment (SBRA) method. The simulation is performed for one million cycles. Key factors decreasing the strength of the wooden material over time include the duration of loads and combinations thereof. Humidity is a further, not inconsiderable, factor affecting the strength of wood. A continuous beam with three spans (length 15 m, glued laminated timber, strength class GL 36 according to DIN EN 1194) is placed in an environment with a thermal-humidity regime of the 2nd class according to EC 5. The average service life of the load-bearing timber structure is estimated to be 50 years. The simulation results show that there is no risk of failure of the wood during the first year. The probability of failure becomes appreciable within about 10 years of service; thereafter, the wooden element meets only a reduced level of reliability.

  2. Impact Delivery of Reduced Greenhouse Gases on Early MARS

    NASA Technical Reports Server (NTRS)

    Haberle, R. M.; Zahnle, K.

    2017-01-01

    While there is abundant evidence for flowing liquid water on the ancient Martian surface, a widely accepted greenhouse mechanism for explaining this in the presence of a faint young sun has yet to emerge. Warming agents such as NH3, SO2, CH4, clouds, and CO2 alone have sustainability issues or limited greenhouse power. Recently, Ramirez et al. proposed that CO2-H2 atmospheres, through collision induced absorptions (CIA), could solve the problem if large amounts are present (1.3-4 bars of CO2, 5-20% H2). However, they had to estimate the strength of the H2-CO2 interaction from the measured strength of the H2-N2 interaction. Recent ab initio calculations show that the strength of CO2-H2 CIA is greater than Ramirez et al. assumed. Wordsworth et al. also calculated the absorption coefficients for CO2-CH4 CIA and show that on early Mars a 0.5 bar CO2 atmosphere with percent levels of H2 or CH4 can raise mean annual temperatures by tens of degrees Kelvin. Freezing temperatures can be reached in atmospheres containing 1-2 bars of CO2 and 2-10% H2 and CH4. The new work demonstrates that less CO2 and smaller amounts of reduced gases are needed than Ramirez et al. originally proposed, which improves prospects for their hypothesis. If thick, weakly reducing atmospheres are the solution to the faint young sun paradox, then plausible mechanisms must be found to generate and sustain the required concentrations of H2 and CH4. Possible sources of reducing gases include volcanic outgassing, serpentinization, and impact delivery; sinks include photolysis, oxidation, and hydrogen escape. The viability of the reduced greenhouse hypothesis depends, therefore, on the strength of these sources and sinks.

  3. Radiating dipole model of interference induced in spacecraft circuitry by surface discharges

    NASA Technical Reports Server (NTRS)

    Metz, R. N.

    1984-01-01

    Spacecraft in geosynchronous orbit can be charged electrically to high voltages by interaction with the space plasma. Differential charging of spacecraft surfaces leads to arc and blowoff discharging. The discharges are thought to upset interior, computer-level circuitry. In addition to capacitive or electrostatic effects, significant inductive and less significant radiative effects of these discharges exist and can be modeled in a dipole approximation. Flight measurements suggest source frequencies of 5 to 50 MHz. Laboratory tests indicate source current strengths of several amperes. Electrical and magnetic fields at distances of many centimeters from such sources can be as large as tens of volts per meter and meter squared, respectively. Estimates of field attenuation by spacecraft walls and structures suggest that interior fields may be appreciable if electromagnetic shielding is much thinner than about 0.025 mm (1 mil). Pickup of such fields by wires and cables interconnecting circuit components could be a source of interference signals of several volts amplitude.

  4. An optimized inverse modelling method for determining the location and strength of a point source releasing airborne material in urban environment

    NASA Astrophysics Data System (ADS)

    Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos

    2017-12-01

    An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. At first only the source coordinates are analysed using a correlation function of measured and calculated concentrations. In the second step the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is firstly attained by applying successive refinements of the computational mesh and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method depends also on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a "satisfactory" determination of the source. The probability of obtaining a satisfactory solution - according to specified criteria - by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
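
    A minimal sketch of the two-step segregated idea follows: pick the candidate location whose unit-rate model prediction correlates best with the sensor readings, then exploit the linearity of passive dispersion in the emission rate to fit the rate by least squares. The data structures and toy numbers are hypothetical, and the paper's actual cost functions differ in detail.

    ```python
    import numpy as np

    def estimate_source(measured, unit_rate_fields):
        """Two-step segregated sketch of the approach described above.

        measured: (n_sensors,) observed concentrations.
        unit_rate_fields: dict mapping candidate source location -> (n_sensors,)
            concentrations predicted by the CFD model for a unit emission rate.

        Step 1: choose the location maximizing the correlation between measured
        and predicted sensor concentrations. Step 2: fit the emission rate by
        least squares, using linearity of passive dispersion in the rate."""
        best_loc, best_r = None, -np.inf
        for loc, c_unit in unit_rate_fields.items():
            r = np.corrcoef(measured, c_unit)[0, 1]
            if r > best_r:
                best_loc, best_r = loc, r
        c_unit = unit_rate_fields[best_loc]
        q = float(c_unit @ measured) / float(c_unit @ c_unit)   # least-squares rate
        return best_loc, q

    # Toy example with three candidate locations and four sensors.
    fields = {"A": np.array([1.0, 0.5, 0.2, 0.1]),
              "B": np.array([0.1, 0.3, 0.9, 0.6]),
              "C": np.array([0.4, 0.4, 0.4, 0.4])}
    obs = 2.0 * fields["B"] + np.array([0.02, -0.01, 0.03, 0.0])   # noisy data
    print(estimate_source(obs, fields))    # -> ('B', ~2.0)
    ```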

  5. How implicitly activated and explicitly acquired knowledge contribute to the effectiveness of retrieval cues.

    PubMed

    Nelson, Douglas L; Fisher, Serena L; Akirmak, Umit

    2007-12-01

    The extralist cued recall task simulates everyday reminding because a memory is encoded on the fly and retrieved later by an unexpected cue. Target words are studied individually, and recall is cued by associatively related words having preexisting forward links to them. In Experiments 1 and 2, forward cue-to-target and backward target-to-cue strengths were varied over an extended range in order to determine how these two sources of strength are related and which source has a greater effect. Forward and backward strengths had additive effects on recall, with forward strength having a consistently larger effect. The PIER2 model accurately predicted these findings, but a plausible generation-recognition version of the model, called PIER.GR, could not. In Experiment 3, forward and backward strengths, level of processing, and study time were varied in order to determine how preexisting lexical knowledge is related to knowledge acquired during the study episode. The main finding indicates that preexisting knowledge and episodic knowledge have additive effects on extralist cued recall. PIER2 can explain these findings because it assumes that these sources of strength contribute independently to recall, whereas the eSAM model cannot explain the findings because it assumes that the sources of strength are multiplicatively related.

  6. Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting

    PubMed Central

    2017-01-01

    Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, the high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been observed that the positioning error can be very large in a few cases, which may prevent its use in applications with high-accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and inertial sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities. PMID:29186921

  7. THE SEARCH FOR CELESTIAL POSITRONIUM VIA THE RECOMBINATION SPECTRUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, S. C.; Bland-Hawthorn, J., E-mail: sce@physics.usyd.edu.a, E-mail: jbh@physics.usyd.edu.a

    2009-12-10

    Positronium is a short-lived atom consisting of a bound electron-positron pair. In the triplet state, when the spins of both particles are parallel, radiative recombination lines will be emitted prior to annihilation. The existence of celestial positronium is revealed through gamma-ray observations of its annihilation products. These observations, however, have intrinsically low angular resolution. In this paper, we examine the prospects for detecting the positronium recombination spectrum. Such observations have the potential to reveal discrete sources of e+ for the first time and will allow the acuity of optical telescopes and instrumentation to be applied to observations of high-energy phenomena. We review the theory of the positronium recombination spectrum and provide formulae to calculate expected line strengths from the e+ production rate and for different conditions in the interstellar medium. We estimate the positronium emission line strengths for several classes of Galactic and extragalactic sources. These are compared to current observational limits and to current and future sensitivities of optical and infrared instrumentation. We find that observations of the Psα line should soon be possible due to recent advances in NIR spectroscopy.

  8. Indoor Location Sensing with Invariant Wi-Fi Received Signal Strength Fingerprinting

    PubMed Central

    Husen, Mohd Nizam; Lee, Sukhan

    2016-01-01

    A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability due to varying channel disturbances in time by introducing the concept of invariant RSS statistics. The invariant RSS statistics represent here the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances in time. The invariant RSS statistics thus collected serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required to be available for any given number of calibration locations under a certain level of random spatiotemporal disturbances. Experimental results show that the proposed method not only provides 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is shown finer by 40% with the execution time more than an order of magnitude faster than the conventional methods. These results are also backed up by theoretical analysis. PMID:27845711

  9. Indoor Location Sensing with Invariant Wi-Fi Received Signal Strength Fingerprinting.

    PubMed

    Husen, Mohd Nizam; Lee, Sukhan

    2016-11-11

    A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability due to varying channel disturbances in time by introducing the concept of invariant RSS statistics. The invariant RSS statistics represent here the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances in time. The invariant RSS statistics thus collected serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required to be available for any given number of calibration locations under a certain level of random spatiotemporal disturbances. Experimental results show that the proposed method not only provides 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is shown finer by 40% with the execution time more than an order of magnitude faster than the conventional methods. These results are also backed up by theoretical analysis.

  10. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.

  11. N2O Source Strength of Tropical Rain Forests: From the Site to the Global Scale

    NASA Astrophysics Data System (ADS)

    Kiese, R.; Werner, C.; Butterbach-Bahl, K.

    2006-12-01

    In contrast to the significant importance of tropical rain forest ecosystems as one of the major single sources within the global atmospheric N2O budget (2.2-3.7 Tg N yr-1), regional and global estimates of their N2O source strength are still limited and highly uncertain. However, accurate quantification of sources and sinks of greenhouse gases like CO2, N2O and CH4 for natural, agricultural and forest ecosystems is crucial to our understanding of land use change effects on global climate change. At present, up-scaling approaches which link detailed geographic information systems (GIS) to mechanistic biogeochemical models are seen as a promising tool to contribute towards more reliable estimates of biogenic sources of N2O, e.g. from tropical rain forest ecosystems. In our study we further developed and tested the PnET-N-DNDC model using Bayesian calibration techniques based on detailed N2O emission data from two recently conducted field campaigns in African (Kenya) and Asian (SE-China) tropical forest ecosystems and on additional datasets from our earlier field campaigns or from the literature. For global upscaling of N2O emissions an extensive GIS database was constructed holding all necessary parameters (climate: ECMWF ERA-40; soil: FAO; vegetation: LPJ-DGVM simulation) in spatial and temporal resolution for initializing and driving the further developed biogeochemical model at a grid size of 0.25°x0.25°. We calculated global N2O emission inventories for the years 1991 to 2001, and found general agreement of the simulated flux ranges with reported N2O emissions from tropical forest ecosystems worldwide. According to our simulations, tropical rainforest soils are indeed a significant source of atmospheric N2O, ranging from 1.1 to 2.2 Tg, depending on the simulated year. Notably, owing to differences in environmental conditions, N2O emissions varied considerably within the tropical belt. Furthermore, our simulations revealed a pronounced inter-annual variability of N2O emissions, mainly driven by differences in weather conditions (e.g. distribution and total amount of rainfall) across years, which may be mirrored in atmospheric N2O concentrations.

  12. A basal magma ocean dynamo to explain the early lunar magnetic field

    NASA Astrophysics Data System (ADS)

    Scheinberg, Aaron L.; Soderlund, Krista M.; Elkins-Tanton, Linda T.

    2018-06-01

    The source of the ancient lunar magnetic field is an unsolved problem in the Moon's evolution. Theoretical work invoking a core dynamo has been unable to explain the magnitude of the observed field, falling instead one to two orders of magnitude below it. Since surface magnetic field strength is highly sensitive to the depth and size of the dynamo region, we instead hypothesize that the early lunar dynamo was driven by convection in a basal magma ocean formed from the final stages of an early lunar magma ocean; this material is expected to be dense, radioactive, and metalliferous. Here we use numerical convection models to predict the longevity and heat flow of such a basal magma ocean and use scaling laws to estimate the resulting magnetic field strength. We show that, if sufficiently electrically conducting, a magma ocean could have produced an early dynamo with surface fields consistent with the paleomagnetic observations.

  13. Statistics of equivalent width data and new oscillator strengths for Si II, Fe II, and Mn II. [in interstellar medium

    NASA Technical Reports Server (NTRS)

    Van Buren, Dave

    1986-01-01

    Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
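
    If "exponential" errors are read as a two-sided exponential (Laplace) distribution of residuals, maximum likelihood reduces to minimizing the sum of absolute residuals rather than the sum of squares. The sketch below shows this for a toy straight-line fit; the paper's joint fit of oscillator strengths and line-of-sight column densities is more elaborate, and the synthetic data here are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 1.0, 40)
    y = 2.0 + 3.0 * x + rng.laplace(0.0, 0.2, size=x.size)   # synthetic data

    def neg_log_likelihood(params):
        a, b = params
        # Laplace negative log-likelihood, up to an additive constant and scale
        return np.sum(np.abs(y - (a + b * x)))

    fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
    print(fit.x)    # close to (2, 3), and more robust to heavy tails than least squares
    ```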

  14. Wireless Concrete Strength Monitoring of Wind Turbine Foundations.

    PubMed

    Perry, Marcus; Fusiek, Grzegorz; Niewczas, Pawel; Rubert, Tim; McAlorum, Jack

    2017-12-16

    Wind turbine foundations are typically cast in place, leaving the concrete to mature under environmental conditions that vary in time and space. As a result, there is uncertainty around the concrete's initial performance, and this can encourage both costly over-design and inaccurate prognoses of structural health. Here, we demonstrate the field application of a dense, wireless thermocouple network to monitor the strength development of an onshore, reinforced-concrete wind turbine foundation. Up-to-date methods in fly ash concrete strength and maturity modelling are used to estimate the distribution and evolution of foundation strength over 29 days of curing. Strength estimates are verified by core samples, extracted from the foundation base. In addition, an artificial neural network, trained using temperature data, is exploited to demonstrate that distributed concrete strengths can be estimated for foundations using only sparse thermocouple data. Our techniques provide a practical alternative to computational models, and could assist site operators in making more informed decisions about foundation design, construction, operation and maintenance.
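
    The generic maturity-method logic behind such strength estimates can be sketched with the classic Nurse-Saul temperature-time factor and a logarithmic strength-maturity curve. The paper uses more up-to-date fly-ash-specific models, and the coefficients below are placeholders that would normally come from laboratory calibration.

    ```python
    import math

    def nurse_saul_maturity(temps_c, dt_hours, t_datum_c=0.0):
        """Temperature-time factor M = sum((T - T0) * dt), in degC-hours, from a
        thermocouple temperature history (Nurse-Saul form)."""
        return sum(max(t - t_datum_c, 0.0) * dt_hours for t in temps_c)

    def strength_from_maturity(m, a=-20.0, b=8.5):
        """Generic logarithmic strength-maturity relation S = a + b*ln(M), in MPa.
        The coefficients are placeholders; in practice they are calibrated against
        cylinder or cube tests of the actual fly-ash mix."""
        return a + b * math.log(m)

    # Hourly temperatures from one thermocouple over the first two days (made up).
    temps = [15.0] * 6 + [25.0] * 18 + [35.0] * 24
    m = nurse_saul_maturity(temps, dt_hours=1.0)
    print(f"maturity = {m:.0f} degC-h, estimated strength ~ {strength_from_maturity(m):.1f} MPa")
    ```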

  15. Wireless Concrete Strength Monitoring of Wind Turbine Foundations

    PubMed Central

    Niewczas, Pawel; Rubert, Tim

    2017-01-01

    Wind turbine foundations are typically cast in place, leaving the concrete to mature under environmental conditions that vary in time and space. As a result, there is uncertainty around the concrete’s initial performance, and this can encourage both costly over-design and inaccurate prognoses of structural health. Here, we demonstrate the field application of a dense, wireless thermocouple network to monitor the strength development of an onshore, reinforced-concrete wind turbine foundation. Up-to-date methods in fly ash concrete strength and maturity modelling are used to estimate the distribution and evolution of foundation strength over 29 days of curing. Strength estimates are verified by core samples, extracted from the foundation base. In addition, an artificial neural network, trained using temperature data, is exploited to demonstrate that distributed concrete strengths can be estimated for foundations using only sparse thermocouple data. Our techniques provide a practical alternative to computational models, and could assist site operators in making more informed decisions about foundation design, construction, operation and maintenance. PMID:29258176

  16. DETERMINING PARTICLE EMISSION SOURCE STRENGTHS FOR COMMON RESIDENTIAL INDOOR SOURCES USING REAL-TIME MEASUREMENTS AND PIECEWISE-CONTINUOUS SOLUTIONS TO THE MASS BALANCE EQUATION

    EPA Science Inventory

    A variety of common activities in the home, such as smoking and cooking, generate indoor particle concentrations. Mathematical indoor air quality models permit predictions of indoor pollutant concentrations in homes, provided that parameter values such as source strengths and ...
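
    A crude sketch of how a source strength can be backed out of real-time concentration data and a well-mixed single-zone mass balance is given below; the EPA work fits piecewise-continuous analytical solutions rather than the finite-difference shortcut used here, and all symbols and numbers are placeholders.

    ```python
    import numpy as np

    def source_strength_from_rise(C, t_hours, V_m3, loss_rate_per_h, C_bg=0.0):
        """Back out an average emission rate S (mass/h) from a measured indoor
        concentration time series during a source event, using the well-mixed
        single-zone balance  dC/dt = S/V - L*(C - C_bg)  with total loss rate
        L = air exchange + deposition."""
        C = np.asarray(C, dtype=float)
        dCdt = np.gradient(C, t_hours)
        S_per_V = dCdt + loss_rate_per_h * (C - C_bg)
        return float(np.mean(S_per_V) * V_m3)

    # Made-up 15-min PM2.5 data (ug/m^3) during 1 h of cooking in a 300 m^3 home.
    C = [10.0, 60.0, 105.0, 145.0, 180.0]
    t = np.arange(5) * 0.25
    print(source_strength_from_rise(C, t, V_m3=300.0, loss_rate_per_h=1.5, C_bg=10.0), "ug/h")
    ```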

  17. A mathematical model of extremely low frequency ocean induced electromagnetic noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dautta, Manik, E-mail: manik.dautta@anyeshan.com; Faruque, Rumana Binte, E-mail: rumana.faruque@anyeshan.com; Islam, Rakibul, E-mail: rakibul.islam@anyeshan.com

    2016-07-12

    Magnetic Anomaly Detection (MAD) systems use the principle that ferromagnetic objects disturb the magnetic lines of force of the earth. These lines of force are able to pass through both water and air in similar manners. A MAD system, usually mounted on an aerial vehicle, is thus often employed to confirm the detection and accomplish localization of large ferromagnetic objects submerged in a sea-water environment. However, the total magnetic signal encountered by a MAD system includes contributions from a myriad of low to Extremely Low Frequency (ELF) sources. The goal of the MAD system is to detect small anomaly signals in the midst of these low-frequency interfering signals. Both the Range of Detection (Rd) and the Probability of Detection (Pd) are limited by the ratio of anomaly signal strength to the interfering magnetic noise. In this paper, we report a generic mathematical model to estimate the signal-to-noise ratio or SNR. Since time-variant electromagnetic signals are affected by conduction losses due to sea-water conductivity and the presence of the air-water interface, we employ the general formulation of dipole-induced electromagnetic field propagation in stratified media [1]. As a first step we employ a volumetric distribution of isolated elementary magnetic dipoles, each having its own dipole strength and orientation, to estimate the magnetic noise observed by a MAD system. Numerical results are presented for a few realizations out of an ensemble of possible realizations of elementary dipole source distributions.

  18. A multimodal assessment of balance in elderly and young adults.

    PubMed

    King, Gregory W; Abreu, Eduardo L; Cheng, An-Lin; Chertoff, Keyna K; Brotto, Leticia; Kelly, Patricia J; Brotto, Marco

    2016-03-22

    Falling is a significant health issue among elderly adults. Given the multifactorial nature of falls, effective balance and fall risk assessment must take into account factors from multiple sources. Here we investigate the relationship between fall risk and a diverse set of biochemical and biomechanical variables including: skeletal muscle-specific troponin T (sTnT), maximal strength measures derived from isometric grip and leg extension tasks, and postural sway captured from a force platform during a quiet stance task. These measures were performed in eight young and eleven elderly adults, along with estimates of fall risk derived from the Tinetti Balance Assessment. We observed age-related effects in all measurements, including a trend toward increased sTnT levels, increased postural sway, reduced upper and lower extremity strength, and reduced balance scores. We observed a negative correlation between balance scores and sTnT levels, suggesting its use as a biomarker for fall risk. We observed a significant positive correlation between balance scores and strength measures, adding support to the notion that muscle strength plays a significant role in postural control. We observed a significant negative correlation between balance scores and postural sway, suggesting that fall risk is associated with more loosely controlled center of mass regulation.

  19. A multimodal assessment of balance in elderly and young adults

    PubMed Central

    King, Gregory W.; Abreu, Eduardo L.; Cheng, An-Lin; Chertoff, Keyna K.; Brotto, Leticia; Kelly, Patricia J.; Brotto, Marco

    2016-01-01

    Falling is a significant health issue among elderly adults. Given the multifactorial nature of falls, effective balance and fall risk assessment must take into account factors from multiple sources. Here we investigate the relationship between fall risk and a diverse set of biochemical and biomechanical variables including: skeletal muscle-specific troponin T (sTnT), maximal strength measures derived from isometric grip and leg extension tasks, and postural sway captured from a force platform during a quiet stance task. These measures were performed in eight young and eleven elderly adults, along with estimates of fall risk derived from the Tinetti Balance Assessment. We observed age-related effects in all measurements, including a trend toward increased sTnT levels, increased postural sway, reduced upper and lower extremity strength, and reduced balance scores. We observed a negative correlation between balance scores and sTnT levels, suggesting its use as a biomarker for fall risk. We observed a significant positive correlation between balance scores and strength measures, adding support to the notion that muscle strength plays a significant role in postural control. We observed a significant negative correlation between balance scores and postural sway, suggesting that fall risk is associated with more loosely controlled center of mass regulation. PMID:26934319

  20. Do oceanic emissions account for the missing source of atmospheric carbonyl sulfide?

    NASA Astrophysics Data System (ADS)

    Lennartz, Sinikka; Marandino, Christa A.; von Hobe, Marc; Cortés, Pau; Simó, Rafel; Booge, Dennis; Quack, Birgit; Röttgers, Rüdiger; Ksionzek, Kerstin; Koch, Boris P.; Bracher, Astrid; Krüger, Kirstin

    2016-04-01

    Carbonyl sulfide (OCS) has a large potential to constrain terrestrial gross primary production (GPP), one of the largest carbon fluxes in the carbon cycle, as it is taken up by plants in a similar way as CO2. To estimate GPP in a global approach, the magnitude and seasonality of sources and sinks of atmospheric OCS have to be well understood, in order to distinguish between seasonal variation caused by vegetation uptake and that caused by other sources or sinks. However, the atmospheric budget is currently highly uncertain, and the oceanic source strength in particular is debated. Recent top-down studies suggest that a missing source of several hundred Gg of sulfur per year is located in the tropical ocean. Here, we present highly resolved OCS measurements from two cruises to the tropical Pacific and Indian Ocean as a bottom-up approach. The results from these cruises show that, contrary to the assumed ocean source, direct emissions of OCS from the tropical ocean are unlikely to account for the missing source. To reduce uncertainty in the global oceanic emission estimate, our understanding of the production and consumption processes of OCS and its precursors, dimethylsulfide (DMS) and carbon disulphide (CS2), needs improvement. Therefore, we investigate the influence of dissolved organic matter (DOM) on the photochemical production of OCS in seawater by analyzing the composition of DOM from the two cruises. Additionally, we discuss the potential of oceanic emissions of DMS and CS2 to close the atmospheric OCS budget. The production and consumption processes of CS2 in the surface ocean are especially poorly known, so we evaluate possible photochemical or biological sources by analyzing its covariation with biological and photochemical parameters.

  1. Statistical Evaluation of Biometric Evidence in Forensic Automatic Speaker Recognition

    NASA Astrophysics Data System (ADS)

    Drygajlo, Andrzej

    Forensic speaker recognition is the process of determining if a specific individual (suspected speaker) is the source of a questioned voice recording (trace). This paper aims at presenting forensic automatic speaker recognition (FASR) methods that provide a coherent way of quantifying and presenting recorded voice as biometric evidence. In such methods, the biometric evidence consists of the quantified degree of similarity between speaker-dependent features extracted from the trace and speaker-dependent features extracted from recorded speech of a suspect. The interpretation of recorded voice as evidence in the forensic context presents particular challenges, including within-speaker (within-source) variability and between-speakers (between-sources) variability. Consequently, FASR methods must provide a statistical evaluation which gives the court an indication of the strength of the evidence given the estimated within-source and between-sources variabilities. This paper reports on the first ENFSI evaluation campaign through a fake case, organized by the Netherlands Forensic Institute (NFI), as an example, where an automatic method using Gaussian mixture models (GMMs) and the Bayesian interpretation (BI) framework was implemented for the forensic speaker recognition task.
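
    In the Bayesian interpretation framework, the strength of evidence is reported as a likelihood ratio: the probability of the trace features under the within-source (suspect) model divided by their probability under the between-sources (relevant population) model. The sketch below illustrates that ratio with scikit-learn Gaussian mixtures; the two-dimensional features, component counts, and data are purely illustrative stand-ins, not the ENFSI campaign's actual front end or GMM configuration.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)

      # Illustrative 2-D "features": suspect speech, relevant population, and the trace.
      suspect_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
      population_feats = rng.normal(loc=1.5, scale=1.5, size=(2000, 2))
      trace_feats = rng.normal(loc=0.1, scale=1.0, size=(200, 2))

      # Within-source model (suspect) and between-sources model (population).
      gmm_suspect = GaussianMixture(n_components=4, random_state=0).fit(suspect_feats)
      gmm_population = GaussianMixture(n_components=8, random_state=0).fit(population_feats)

      # Strength of evidence: log-likelihood ratio of the trace under the two models.
      log_lr = gmm_suspect.score(trace_feats) - gmm_population.score(trace_feats)
      print(f"mean log-likelihood ratio per frame: {log_lr:.2f}")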

  2. Declines in Strength and Mortality Risk Among Older Mexican Americans: Joint Modeling of Survival and Longitudinal Data.

    PubMed

    Peterson, Mark D; Zhang, Peng; Duchowny, Kate A; Markides, Kyriakos S; Ottenbacher, Kenneth J; Snih, Soham Al

    2016-12-01

    Grip strength is a noninvasive method of risk stratification; however, the association between changes in strength and mortality is unknown. The purposes of this study were to examine the association between grip strength and mortality among older Mexican Americans and to determine the ability of changes in strength to predict mortality. Longitudinal data were included from 3,050 participants in the Hispanic Established Population for the Epidemiological Study of the Elderly. Strength was assessed using a hand-held dynamometer and normalized to body mass. Conditional inference tree analyses were used to identify sex- and age-specific weakness thresholds, and the Kaplan-Meier estimator was used to determine survival estimates across various strata. We also evaluated survival with traditional Cox proportional hazard regression for baseline strength, as well as with joint modeling of survival and longitudinal strength change trajectories. Survival estimates were lower among women who were weak at baseline for only 65- to 74-year-olds (11.93 vs 16.69 years). Survival estimates were also lower among men who were weak at baseline for only ≥75-year-olds (5.80 vs 7.39 years). Lower strength at baseline (per 0.1 decrement) was significantly associated with mortality (hazard ratio [HR]: 1.10; 95% confidence interval [CI]: 1.01-1.19) for women only. There was a strong independent, longitudinal association between strength decline and early mortality, such that each 0.10 decrease in strength, within participants over time, resulted in a HR of 1.12 (95% CI: 1.00-1.25) for women and a HR of 1.15 (95% CI: 1.04-1.28) for men. Longitudinal declines in strength are significantly associated with all-cause mortality in older Mexican Americans. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Positron radiography of ignition-relevant ICF capsules

    DOE PAGES

    Williams, G. J.; Chen, Hui; Field, J. E.; ...

    2017-12-11

    Laser-generated positrons are evaluated as a probe source to radiograph in-flight ignition-relevant inertial confinement fusion capsules. Current ultraintense laser facilities are capable of producing 2 × 10^12 relativistic positrons in a narrow energy bandwidth and short time duration. Monte Carlo simulations suggest that the unique characteristics of such positrons allow for the reconstruction of both capsule shell radius and areal density between 0.002 and 2 g/cm^2. The energy-downshifted positron spectrum and angular scattering of the source particles are sufficient to constrain the conditions of the capsule between preshot and stagnation. Here, we evaluate the effects of magnetic fields near the capsule surface using analytic estimates where it is shown that this diagnostic can tolerate line integrated field strengths of 100 T mm.

  4. Positron radiography of ignition-relevant ICF capsules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, G. J.; Chen, Hui; Field, J. E.

    Laser-generated positrons are evaluated as a probe source to radiograph in-flight ignition-relevant inertial confinement fusion capsules. Current ultraintense laser facilities are capable of producing 2 × 10^12 relativistic positrons in a narrow energy bandwidth and short time duration. Monte Carlo simulations suggest that the unique characteristics of such positrons allow for the reconstruction of both capsule shell radius and areal density between 0.002 and 2 g/cm^2. The energy-downshifted positron spectrum and angular scattering of the source particles are sufficient to constrain the conditions of the capsule between preshot and stagnation. Here, we evaluate the effects of magnetic fields near the capsule surface using analytic estimates where it is shown that this diagnostic can tolerate line integrated field strengths of 100 T mm.

  5. Signature of inverse Compton emission from blazars

    NASA Astrophysics Data System (ADS)

    Gaur, Haritma; Mohan, Prashanth; Wierzcholska, Alicja; Gu, Minfeng

    2018-01-01

    Blazars are classified into high-, intermediate- and low-energy-peaked sources based on the location of their synchrotron peak. This lies in infra-red/optical to ultra-violet bands for low- and intermediate-peaked blazars. The transition from synchrotron to inverse Compton emission falls in the X-ray bands for such sources. We present the spectral and timing analysis of 14 low- and intermediate-energy-peaked blazars observed with XMM-Newton spanning 31 epochs. Parametric fits to X-ray spectra help constrain the possible location of transition from the high-energy end of the synchrotron to the low-energy end of the inverse Compton emission. In seven sources in our sample, we infer such a transition and constrain the break energy in the range 0.6-10 keV. The Lomb-Scargle periodogram is used to estimate the power spectral density (PSD) shape. It is well described by a power law in a majority of light curves, the index being flatter compared to the general expectation for active galactic nuclei, ranging here between 0.01 and 1.12, possibly due to short observation durations resulting in an absence of long-term trends. A toy model involving synchrotron self-Compton and external Compton (EC; disc, broad line region, torus) mechanisms is used to estimate magnetic field strengths ≤0.03-0.88 G in sources displaying the energy break and to infer a prominent EC contribution. A variability time-scale shorter than the synchrotron cooling time implies steeper PSD slopes, which are inferred in these sources.
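
    The PSD estimation step described above can be sketched with astropy's Lomb-Scargle implementation: compute the periodogram of an unevenly sampled light curve, then fit a power-law index by linear regression in log-log space. The synthetic light curve and normalization choices below are placeholders, not the XMM-Newton data.

      import numpy as np
      from astropy.timeseries import LombScargle

      rng = np.random.default_rng(2)

      # Unevenly sampled light curve (placeholder data, times in seconds).
      t = np.sort(rng.uniform(0, 8.0e4, 400))
      flux = np.sin(2 * np.pi * t / 2.0e4) + rng.normal(scale=0.5, size=t.size)

      freq, power = LombScargle(t, flux).autopower()

      # Fit a power law P(f) ~ f^(-alpha) by linear regression in log-log space.
      mask = power > 0
      slope, _ = np.polyfit(np.log10(freq[mask]), np.log10(power[mask]), 1)
      print(f"PSD power-law index alpha ~ {-slope:.2f}")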

  6. Measurement of Phased Array Point Spread Functions for Use with Beamforming

    NASA Technical Reports Server (NTRS)

    Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis

    2011-01-01

    Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented seeks to investigate direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers and model presence. Results show that experimental measurements trend with theory with regard to source offset. The source shows expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
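
    The key relation the record exploits is that the measured beam map is approximately the true source distribution convolved with the array point spread function, so a measured PSF can be substituted for a theoretical one in deconvolution. The sketch below illustrates that relation and a naive regularized Fourier deconvolution; the grid, the Gaussian stand-in PSF, and the regularization constant are placeholders, not the actual array processing used in the study.

      import numpy as np
      from scipy.signal import fftconvolve

      # True source map: two point sources of different strengths (placeholder grid).
      true_map = np.zeros((64, 64))
      true_map[20, 30] = 1.0
      true_map[40, 25] = 0.5

      # Stand-in PSF (a Gaussian here; in the study it is measured with a laser point source).
      y, x = np.mgrid[-10:11, -10:11]
      psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
      psf /= psf.sum()

      # The beam map is (approximately) the source map convolved with the PSF.
      beam_map = fftconvolve(true_map, psf, mode="same")

      # Naive regularized Fourier deconvolution using the (measured) PSF.
      psf_full = np.zeros_like(true_map)
      psf_full[:21, :21] = psf
      psf_full = np.roll(psf_full, shift=(-10, -10), axis=(0, 1))  # center PSF at (0, 0)
      H = np.fft.rfft2(psf_full)
      B = np.fft.rfft2(beam_map)
      eps = 1e-3  # regularization against near-zero PSF frequencies
      recovered = np.fft.irfft2(B * np.conj(H) / (np.abs(H)**2 + eps), s=true_map.shape)
      print("strongest recovered source near:", np.unravel_index(recovered.argmax(), recovered.shape))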

  7. Rectal Dose and Source Strength of the High-Dose-Rate Iridium-192 Both Affect Late Rectal Bleeding After Intracavitary Radiation Therapy for Uterine Cervical Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isohashi, Fumiaki, E-mail: isohashi@radonc.med.osaka-u.ac.j; Yoshioka, Yasuo; Koizumi, Masahiko

    2010-07-01

    Purpose: The purpose of this study was to reconfirm our previous findings that the rectal dose and source strength both affect late rectal bleeding after high-dose-rate intracavitary brachytherapy (HDR-ICBT), by using a rectal dose calculated in accordance with the definitions of the International Commission on Radiation Units and Measurements Report 38 (ICRU(RP)) or of dose-volume histogram (DVH) parameters by the Groupe Europeen de Curietherapie of the European Society for Therapeutic Radiology and Oncology. Methods and Materials: Sixty-two patients who underwent HDR-ICBT and were followed up for 1 year or more were studied. The rectal dose for ICBT was calculated by using the ICRU(RP) based on orthogonal radiographs or the DVH parameters based on computed tomography (CT). The total dose was calculated as the biologically equivalent dose expressed in 2-Gy fractions (EQD(2)). The relationship between averaged source strength or the EQD(2) and late rectal bleeding was then analyzed. Results: When patients were divided into four groups according to rectal EQD(2) (at or above vs. below the threshold dose) and source strength (at or above vs. below 2.4 cGy·m^2·h^-1), the group with both a high EQD(2) and a high source strength showed a significantly greater probability of rectal bleeding for ICRU(RP), D(2cc), and D(1cc). The patients with a median rectal dose above the threshold level did not show a greater frequency of rectal bleeding unless the source strength exceeded 2.4 cGy·m^2·h^-1. Conclusions: Our results obtained with data based on ICRU(RP) and CT-based DVH parameters indicate that rectal dose and source strength both affect rectal bleeding after HDR-ICBT.
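
    The dose-summation quantity used here, the equivalent dose in 2-Gy fractions (EQD(2)), follows from the standard linear-quadratic model. The sketch below shows that conversion for a single fractionation scheme; the alpha/beta value and the per-fraction dose are placeholders, and the study's actual summation of external-beam and HDR-ICBT contributions is not reproduced.

      def eqd2(dose_per_fraction, n_fractions, alpha_beta=3.0):
          """Equivalent dose in 2-Gy fractions under the linear-quadratic model (Gy)."""
          total_dose = dose_per_fraction * n_fractions
          return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

      # Example: a rectal D(2cc) of 6 Gy per HDR fraction over 4 fractions, alpha/beta = 3 Gy.
      print(f"EQD2 = {eqd2(6.0, 4):.1f} Gy")  # 43.2 Gy for this placeholder scheme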

  8. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to incorrect identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.

  9. Design space construction of multiple dose-strength tablets utilizing bayesian estimation based on one set of design-of-experiments.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-01-01

    Design spaces for multiple dose strengths of tablets were constructed using a Bayesian estimation method with one set of design of experiments (DoE) of only the highest dose-strength tablet. The lubricant blending process for theophylline tablets with dose strengths of 100, 50, and 25 mg is used as a model manufacturing process in order to construct design spaces. The DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) for theophylline 100-mg tablet. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) of the 100-mg tablet were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. Three experiments under an optimal condition and two experiments under other conditions were performed using 50- and 25-mg tablets, respectively. The response surfaces of the highest-strength tablet were corrected to those of the lower-strength tablets by Bayesian estimation using the manufacturing data of the lower-strength tablets. Experiments under three additional sets of conditions of lower-strength tablets showed that the corrected design space made it possible to predict the quality of lower-strength tablets more precisely than the design space of the highest-strength tablet. This approach is useful for constructing design spaces of tablets with multiple strengths.

  10. Estimation of rail wear limits based on rail strength investigations

    DOT National Transportation Integrated Search

    1998-12-01

    This report describes analyses performed to estimate limits on rail wear based on strength investigations. Two different failure modes are considered in this report: (1) permanent plastic bending, and (2) rail fracture. Rail bending stresses are calc...

  11. Production of NOx by Lightning and its Effects on Atmospheric Chemistry

    NASA Technical Reports Server (NTRS)

    Pickering, Kenneth E.

    2009-01-01

    Production of NO(x) by lightning remains the NO(x) source with the greatest uncertainty. Current estimates of the global source strength range over a factor of four (from 2 to 8 TgN/year). Ongoing efforts to reduce this uncertainty through field programs, cloud-resolved modeling, global modeling, and satellite data analysis will be described in this seminar. Representation of the lightning source in global or regional chemical transport models requires three types of information: the distribution of lightning flashes as a function of time and space, the production of NO(x) per flash, and the effective vertical distribution of the lightning-injected NO(x). Methods of specifying these items in a model will be discussed. For example, the current method of specifying flash rates in NASA's Global Modeling Initiative (GMI) chemical transport model will be discussed, as well as work underway in developing algorithms for use in the regional models CMAQ and WRF-Chem. A number of methods have been employed to estimate either production per lightning flash or the production per unit flash length. Such estimates derived from cloud-resolved chemistry simulations and from satellite NO2 retrievals will be presented as well as the methodologies employed. Cloud-resolved model output has also been used in developing vertical profiles of lightning NO(x) for use in global models. Effects of lightning NO(x) on O3 and HO(x) distributions will be illustrated regionally and globally.

  12. Combination of Complex-Based and Magnitude-Based Multiecho Water-Fat Separation for Accurate Quantification of Fat-Fraction

    PubMed Central

    Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.

    2011-01-01

    Multipoint water–fat separation techniques rely on different water–fat phase shifts generated at multiple echo times to decompose water and fat. Therefore, these methods require complex source images and allow unambiguous separation of water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantify fat is through “magnitude-based” methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors, but cannot estimate fat-fraction greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to remove the effect of phase errors. The results from the two reconstructions are then combined. We demonstrate that using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724
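
    As a much-simplified illustration of why the magnitude-only estimate is limited to 0-50% while the complex estimate covers 0-100%, the sketch below uses a two-point in-/opposed-phase (Dixon-style) calculation on a single voxel; the record's actual multiecho reconstruction, field-map estimation, and hybrid combination are considerably more involved.

      import numpy as np

      def fat_fraction_complex(s_in, s_out):
          """Two-point Dixon estimate from complex in-/opposed-phase signals (0-100%)."""
          water = (s_in + s_out) / 2.0
          fat = (s_in - s_out) / 2.0
          return np.abs(fat) / (np.abs(water) + np.abs(fat))

      def fat_fraction_magnitude(m_in, m_out):
          """Magnitude-only estimate: the numerator is min(water, fat), so fat
          fractions above 50% alias back into the 0-50% range."""
          return (m_in - m_out) / (2.0 * m_in)

      # A fat-dominant voxel (true fat fraction 0.8); a constant phase offset is included.
      water, fat = 0.2, 0.8
      phase = np.exp(1j * 0.3)
      s_in = (water + fat) * phase
      s_out = (water - fat) * phase

      print("complex-based :", round(fat_fraction_complex(s_in, s_out), 2))              # ~0.80
      print("magnitude-only:", round(fat_fraction_magnitude(abs(s_in), abs(s_out)), 2))  # ~0.20 (aliased)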

  13. Application of an Optimal Search Strategy for the DNAPL Source Identification to a Field Site in Nanjing, China

    NASA Astrophysics Data System (ADS)

    Longting, M.; Ye, S.; Wu, J.

    2014-12-01

    Identifying and removing DNAPL sources in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search, and determines optimal sampling locations that are selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. This comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, specific modifications and additional work have been carried out as follows. Random hydraulic conductivity (K) fields are generated after fitting the measured K data to a variogram model. Potential source locations, each given an initial weight, are targeted based on the field survey, with multiple candidates around the workshops and the wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, and the existing sampling data, a preliminary source strength is then estimated, which will later be optimized by the simplex method or a genetic algorithm (GA). The whole algorithm then guides optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference: [1] Dokou, Z., and Pinder, G. F. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556. Acknowledgement: Funding support from the National Natural Science Foundation of China (No. 41030746, 40872155) and the DuPont Company is appreciated.
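
    The plume-update step described above is a standard Kalman filter measurement update: a new concentration sample at one location corrects the estimated concentration field (and its covariance) wherever it is correlated. The sketch below, assuming a static linear-Gaussian model on a tiny flattened grid with placeholder numbers, shows only that update; the Monte Carlo transport ensemble and the fuzzy-set source weighting are not reproduced.

      import numpy as np

      # Prior plume estimate on a 5 x 5 grid, flattened (placeholder mean and covariance).
      n = 25
      x_prior = np.full(n, 10.0)                       # prior mean concentration (ug/L)
      xx, yy = np.meshgrid(np.arange(5), np.arange(5))
      coords = np.column_stack([xx.ravel(), yy.ravel()])
      dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
      P_prior = 25.0 * np.exp(-dist / 2.0)             # spatially correlated prior covariance

      # One new sample taken at grid cell 12.
      H = np.zeros((1, n)); H[0, 12] = 1.0             # observation operator
      R = np.array([[1.0]])                            # measurement error variance
      z = np.array([42.0])                             # observed concentration

      # Standard Kalman measurement update.
      S = H @ P_prior @ H.T + R
      K = P_prior @ H.T @ np.linalg.inv(S)
      x_post = x_prior + (K @ (z - H @ x_prior)).ravel()
      P_post = (np.eye(n) - K @ H) @ P_prior

      print("update at sampled cell  :", round(float(x_post[12] - 10.0), 2))
      print("update at adjacent cell :", round(float(x_post[13] - 10.0), 2))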

  14. Detecting black bear source-sink dynamics using individual-based genetic graphs.

    PubMed

    Draheim, Hope M; Moore, Jennifer A; Etter, Dwayne; Winterstein, Scott R; Scribner, Kim T

    2016-07-27

    Source-sink dynamics affects population connectivity, spatial genetic structure and population viability for many species. We introduce a novel approach that uses individual-based genetic graphs to identify source-sink areas within a continuously distributed population of black bears (Ursus americanus) in the northern lower peninsula (NLP) of Michigan, USA. Black bear harvest samples (n = 569, from 2002, 2006 and 2010) were genotyped at 12 microsatellite loci and locations were compared across years to identify areas of consistent occupancy over time. We compared graph metrics estimated for a genetic model with metrics from 10 ecological models to identify ecological factors that were associated with sources and sinks. We identified 62 source nodes, 16 of which represent important source areas (net flux > 0.7) and 79 sink nodes. Source strength was significantly correlated with bear local harvest density (a proxy for bear density) and habitat suitability. Additionally, resampling simulations showed our approach is robust to potential sampling bias from uneven sample dispersion. Findings demonstrate black bears in the NLP exhibit asymmetric gene flow, and individual-based genetic graphs can characterize source-sink dynamics in continuously distributed species in the absence of discrete habitat patches. Our findings warrant consideration of undetected source-sink dynamics and their implications on harvest management of game species. © 2016 The Author(s).

  15. Sparse targets in hydroacoustic surveys: Balancing quantity and quality of in situ target strength data

    USGS Publications Warehouse

    DuFour, Mark R.; Mayer, Christine M.; Kocovsky, Patrick; Qian, Song; Warner, David M.; Kraus, Richard T.; Vandergoot, Christopher

    2017-01-01

    Hydroacoustic sampling of low-density fish in shallow water can lead to low sample sizes of naturally variable target strength (TS) estimates, resulting in both sparse and variable data. Increasing maximum beam compensation (BC) beyond conventional values (i.e., 3 dB beam width) can recover more targets during data analysis; however, data quality decreases near the acoustic beam edges. We identified the optimal balance between data quantity and quality with increasing BC using a standard sphere calibration, and we quantified the effect of BC on fish track variability, size structure, and density estimates of Lake Erie walleye (Sander vitreus). Standard sphere mean TS estimates were consistent with theoretical values (−39.6 dB) up to 18-dB BC, while estimates decreased at greater BC values. Natural sources (i.e., residual and mean TS) dominated total fish track variation, while contributions from measurement related error (i.e., number of single echo detections (SEDs) and BC) were proportionally low. Increasing BC led to more fish encounters and SEDs per fish, while stability in size structure and density were observed at intermediate values (e.g., 18 dB). Detection of medium to large fish (i.e., age-2+ walleye) benefited most from increasing BC, as proportional changes in size structure and density were greatest in these size categories. Therefore, when TS data are sparse and variable, increasing BC to an optimal value (here 18 dB) will maximize the TS data quantity while limiting lower-quality data near the beam edges.

  16. The characteristic of the earthquake damage in Kyoto during the historical period

    NASA Astrophysics Data System (ADS)

    Nishiyama, Akihito

    2017-04-01

    The Kyoto city is located in the northern part of the Kyoto basin, central Japan, and has a history of more than 1200 years. Kyoto has long been a populated area with many buildings, and the center of politics, economics and culture in Japan. Historical large earthquakes severely damaged the city, causing building collapses and human casualties. In the historical period, Kyoto has experienced six damaging large earthquakes, in 976, 1185, 1449, 1596, 1662 and 1830. Three of these occurred between the end of the 16th century and the middle of the 19th century, when the urban area was being expanded. All of these earthquakes are considered to be not earthquakes within the Kyoto basin but inland earthquakes that occurred in the surrounding area. The earthquake damage in Kyoto during the historical period is strongly controlled by ground conditions and the earthquake resistance of buildings rather than by distance from the estimated source fault. To better estimate seismic intensity based on building damage, it is necessary to consider the state of the buildings (e.g., elapsed years since construction, histories of repairs and/or reinforcements, building structures) as well as the strength of ground shaking. By considering the strength of buildings at the time of an earthquake occurrence, the seismic intensity distribution due to historical large earthquakes can be estimated with higher reliability than before. The estimated seismic intensity distribution maps for such historical earthquakes can be utilized for improving strong ground motion prediction in the Kyoto basin.

  17. A dosimetric uncertainty analysis for photon-emitting brachytherapy sources: Report of AAPM Task Group No. 138 and GEC-ESTRO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeWerd, Larry A.; Ibbott, Geoffrey S.; Meigooni, Ali S.

    2011-02-15

    This report addresses uncertainties pertaining to brachytherapy single-source dosimetry preceding clinical use. The International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) and the National Institute of Standards and Technology (NIST) Technical Note 1297 are taken as reference standards for uncertainty formalism. Uncertainties in using detectors to measure or utilizing Monte Carlo methods to estimate brachytherapy dose distributions are provided with discussion of the components intrinsic to the overall dosimetric assessment. Uncertainties provided are based on published observations and cited when available. The uncertainty propagation from the primary calibration standard through transfer to the clinic for air-kerma strength is covered first. Uncertainties in each of the brachytherapy dosimetry parameters of the TG-43 formalism are then explored, ending with transfer to the clinic and recommended approaches. Dosimetric uncertainties during treatment delivery are considered briefly but are not included in the detailed analysis. For low- and high-energy brachytherapy sources of low dose rate and high dose rate, a combined dosimetric uncertainty <5% (k=1) is estimated, which is consistent with prior literature estimates. Recommendations are provided for clinical medical physicists, dosimetry investigators, and source and treatment planning system manufacturers. These recommendations include the use of the GUM and NIST reports, a requirement of constancy of manufacturer source design, dosimetry investigator guidelines, provision of the lowest uncertainty for patient treatment dosimetry, and the establishment of an action level based on dosimetric uncertainty. These recommendations reflect the guidance of the American Association of Physicists in Medicine (AAPM) and the Groupe Europeen de Curietherapie-European Society for Therapeutic Radiology and Oncology (GEC-ESTRO) for their members and may also be used as guidance to manufacturers and regulatory agencies in developing good manufacturing practices for sources used in routine clinical treatments.
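
    The combined dosimetric uncertainty quoted above follows the GUM approach of combining independent relative standard uncertainties in quadrature and expanding with a coverage factor. The sketch below shows only that arithmetic; the component names and magnitudes are illustrative placeholders, not the actual TG-138 uncertainty budget.

      import math

      # Illustrative relative standard uncertainties (k=1), expressed as fractions.
      components = {
          "air-kerma strength calibration": 0.013,
          "transfer of calibration to the clinic": 0.010,
          "dose-rate constant": 0.025,
          "radial dose / anisotropy functions": 0.030,
      }

      u_combined = math.sqrt(sum(u**2 for u in components.values()))
      print(f"combined standard uncertainty (k=1): {100 * u_combined:.1f}%")
      print(f"expanded uncertainty (k=2): {200 * u_combined:.1f}%")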

  18. A dosimetric uncertainty analysis for photon-emitting brachytherapy sources: Report of AAPM Task Group No. 138 and GEC-ESTRO

    PubMed Central

    DeWerd, Larry A.; Ibbott, Geoffrey S.; Meigooni, Ali S.; Mitch, Michael G.; Rivard, Mark J.; Stump, Kurt E.; Thomadsen, Bruce R.; Venselaar, Jack L. M.

    2011-01-01

    This report addresses uncertainties pertaining to brachytherapy single-source dosimetry preceding clinical use. The International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) and the National Institute of Standards and Technology (NIST) Technical Note 1297 are taken as reference standards for uncertainty formalism. Uncertainties in using detectors to measure or utilizing Monte Carlo methods to estimate brachytherapy dose distributions are provided with discussion of the components intrinsic to the overall dosimetric assessment. Uncertainties provided are based on published observations and cited when available. The uncertainty propagation from the primary calibration standard through transfer to the clinic for air-kerma strength is covered first. Uncertainties in each of the brachytherapy dosimetry parameters of the TG-43 formalism are then explored, ending with transfer to the clinic and recommended approaches. Dosimetric uncertainties during treatment delivery are considered briefly but are not included in the detailed analysis. For low- and high-energy brachytherapy sources of low dose rate and high dose rate, a combined dosimetric uncertainty <5% (k=1) is estimated, which is consistent with prior literature estimates. Recommendations are provided for clinical medical physicists, dosimetry investigators, and source and treatment planning system manufacturers. These recommendations include the use of the GUM and NIST reports, a requirement of constancy of manufacturer source design, dosimetry investigator guidelines, provision of the lowest uncertainty for patient treatment dosimetry, and the establishment of an action level based on dosimetric uncertainty. These recommendations reflect the guidance of the American Association of Physicists in Medicine (AAPM) and the Groupe Européen de Curiethérapie–European Society for Therapeutic Radiology and Oncology (GEC-ESTRO) for their members and may also be used as guidance to manufacturers and regulatory agencies in developing good manufacturing practices for sources used in routine clinical treatments. PMID:21452716

  19. A dosimetric uncertainty analysis for photon-emitting brachytherapy sources: report of AAPM Task Group No. 138 and GEC-ESTRO.

    PubMed

    DeWerd, Larry A; Ibbott, Geoffrey S; Meigooni, Ali S; Mitch, Michael G; Rivard, Mark J; Stump, Kurt E; Thomadsen, Bruce R; Venselaar, Jack L M

    2011-02-01

    This report addresses uncertainties pertaining to brachytherapy single-source dosimetry preceding clinical use. The International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) and the National Institute of Standards and Technology (NIST) Technical Note 1297 are taken as reference standards for uncertainty formalism. Uncertainties in using detectors to measure or utilizing Monte Carlo methods to estimate brachytherapy dose distributions are provided with discussion of the components intrinsic to the overall dosimetric assessment. Uncertainties provided are based on published observations and cited when available. The uncertainty propagation from the primary calibration standard through transfer to the clinic for air-kerma strength is covered first. Uncertainties in each of the brachytherapy dosimetry parameters of the TG-43 formalism are then explored, ending with transfer to the clinic and recommended approaches. Dosimetric uncertainties during treatment delivery are considered briefly but are not included in the detailed analysis. For low- and high-energy brachytherapy sources of low dose rate and high dose rate, a combined dosimetric uncertainty <5% (k=1) is estimated, which is consistent with prior literature estimates. Recommendations are provided for clinical medical physicists, dosimetry investigators, and source and treatment planning system manufacturers. These recommendations include the use of the GUM and NIST reports, a requirement of constancy of manufacturer source design, dosimetry investigator guidelines, provision of the lowest uncertainty for patient treatment dosimetry, and the establishment of an action level based on dosimetric uncertainty. These recommendations reflect the guidance of the American Association of Physicists in Medicine (AAPM) and the Groupe Européen de Curiethérapie-European Society for Therapeutic Radiology and Oncology (GEC-ESTRO) for their members and may also be used as guidance to manufacturers and regulatory agencies in developing good manufacturing practices for sources used in routine clinical treatments.

  20. Estimation of K sub Ic from slow bend precracked Charpy specimen strength ratios

    NASA Technical Reports Server (NTRS)

    Succop, G.; Brown, W. F., Jr.

    1976-01-01

    Strength ratios are reported which were derived from slow bend tests on 0.25 inch thick precracked Charpy specimens of steels, aluminum alloys, and a titanium alloy for which valid K sub Ic values were established. The strength ratios were used to develop calibration curves typical of those that could be useful in estimating K sub Ic for the purposes of alloy development or quality control.

  1. Speech segregation based-on binaural cue: interaural time difference (itd) and interaural level difference (ild)

    NASA Astrophysics Data System (ADS)

    Nur Farid, Mifta; Arifianto, Dhany

    2016-11-01

    A person suffering from hearing loss can be helped by hearing aids, and binaural hearing aids offer the best performance because they most closely resemble the human auditory system. In a conversation at a cocktail party, a person can focus on a single conversation even though the background sound and other people's conversations are quite loud; this phenomenon is known as the cocktail party effect. Earlier studies have shown that binaural hearing makes an important contribution to the cocktail party effect. In this study, we therefore separate two sound sources from binaural input captured with two microphone sensors, based on both binaural cues, the interaural time difference (ITD) and the interaural level difference (ILD), using a binary mask. The ITD is estimated with a cross-correlation method, in which the ITD is represented as the time delay of the correlation peak in each time-frequency unit. The binary mask is estimated from the patterns of ITD and ILD relative to the target strength, computed statistically using probability density estimation. The resulting sound source separation performs well, with a speech intelligibility (percent correct words) of 86% and an SNR of 3 dB.
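
    The ITD/ILD extraction described above can be sketched directly with a cross-correlation: the lag of the correlation peak between the two channels gives the interaural time difference, and the broadband level ratio gives the interaural level difference. The synthetic binaural pair, sampling rate, and delay below are placeholders, and the per-time-frequency-unit masking stage is not shown.

      import numpy as np

      fs = 16000                                   # sampling rate (Hz), placeholder
      rng = np.random.default_rng(3)

      # Synthetic binaural pair: right channel is the left delayed by 8 samples
      # (0.5 ms) and attenuated by 3 dB, mimicking a lateral source.
      left = rng.normal(size=2048)
      right = 10 ** (-3 / 20) * np.roll(left, 8)

      # ITD: lag of the cross-correlation peak.
      xcorr = np.correlate(right, left, mode="full")
      lags = np.arange(-len(left) + 1, len(left))
      itd_samples = lags[np.argmax(xcorr)]
      itd_ms = 1000 * itd_samples / fs

      # ILD: broadband level difference in dB.
      ild_db = 20 * np.log10(np.sqrt(np.mean(right**2)) / np.sqrt(np.mean(left**2)))

      print(f"ITD ~ {itd_ms:.3f} ms ({itd_samples} samples), ILD ~ {ild_db:.1f} dB")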

  2. Assessing and reporting uncertainties in dietary exposure analysis: Mapping of uncertainties in a tiered approach.

    PubMed

    Kettler, Susanne; Kennedy, Marc; McNamara, Cronan; Oberdörfer, Regina; O'Mahony, Cian; Schnabel, Jürgen; Smith, Benjamin; Sprong, Corinne; Faludi, Roland; Tennant, David

    2015-08-01

    Uncertainty analysis is an important component of dietary exposure assessments in order to correctly understand the strength and limits of their results. Often, standard screening procedures are applied in a first step, which results in conservative estimates. If those screening procedures indicate a potential exceedance of health-based guidance values, more refined models are applied within the tiered approach. However, the sources and types of uncertainties in deterministic and probabilistic models can vary or differ. A key objective of this work has been the mapping of different sources and types of uncertainties to better understand how to best use uncertainty analysis to generate a more realistic comprehension of dietary exposure. In dietary exposure assessments, uncertainties can be introduced by knowledge gaps about the exposure scenario, the parameters and the model itself. With this mapping, general and model-independent uncertainties have been identified and described, as well as those which can be introduced and influenced by the specific model during the tiered approach. This analysis identifies that there are general uncertainties common to point estimates (screening or deterministic methods) and probabilistic exposure assessment methods. To provide further clarity, general sources of uncertainty affecting many dietary exposure assessments should be separated from model-specific uncertainties. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Husbandry Emissions at the Sub-Facility Scale by Fused Mobile Surface In Situ and Airborne Remote Sensing

    NASA Astrophysics Data System (ADS)

    Leifer, I.; Melton, C.; Tratt, D. M.; Hall, J. L.; Buckland, K. N.; Frash, J.; Leen, J. B.; Lundquist, T.; Vigil, S. A.

    2017-12-01

    Husbandry methane (CH4) and ammonia (NH3) are strong climate and air pollution drivers. Husbandry emission factors have significant uncertainty and can differ from lab estimates as real-world practices affect emissions including where and how husbandry activities occur, their spatial and temporal relationship to micro-climate (winds, temperature, insolation, rain, and lagoon levels, which vary diurnally and seasonally), and animal care. Research dairies provide a unique opportunity to combine insights on sub-facility scale emissions to identify best practices. Two approaches with significant promise for quantifying husbandry emissions are airborne remote sensing and mobile in situ trace gas with meteorological measurements. Both capture snapshot data to allow deconvolution of temporal and spatial variability, which challenges stationary measurements, while also capturing micro-scale processes, allowing connection of real-world practices to emissions. Mobile in situ concentration data on trace gases and meteorology were collected by AMOG (AutoMObile trace Gas) Surveyor on 10 days spanning 31 months at the California Polytechnic State University Research Dairy, San Luis Obispo, CA. AMOG Surveyor is a commuter vehicle modified for atmospheric science. CH4, NH3, H2O, COS, CO, CO2, H2S, O3, NO, NO2, SO2, NOX, solar spectra, temperature, and winds were measured. The airborne hyperspectral thermal infrared sensor, Mako, collected data on 28 Sept. 2015. Research dairies allow combining insights on sub-facility scale emissions to identify best practices holistically - i.e., considering multiple trace gases. In situ data were collected while transecting plumes, approximately orthogonal to winds. Emission strength and source location were estimated by Gaussian plume inversion, validated by airborne data. Good agreement was found on source strength and location at meter length-scales. Data revealed different activities produced unique emissions with distinct trace gas fingerprints - for example, a mostly empty holding lagoon (LE, Fig. 1) was a stronger H2S source than a full holding lagoon (LW, Fig. 1), and an area in a corral (S1, Fig. 1) where cows congregated was a strong, focused NH3 source. Mako data mapped out micro-scale variability in transport that agreed with AMOG winds and plume inversions.
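
    A minimal sketch of the Gaussian plume inversion mentioned above, under idealized assumptions (steady wind, flat terrain, a ground-level point source, and a fixed neutral-stability dispersion parameterization): given a crosswind concentration transect at a known downwind distance, the emission rate Q is recovered by least squares. All numbers and the dispersion coefficients are placeholders, not the AMOG/Mako retrieval itself.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian_plume(y, Q, x=200.0, u=3.0, z=2.0):
          """Concentration (g/m^3) from a ground-level point source of strength Q (g/s)
          at downwind distance x (m), crosswind offset y (m), and sensor height z (m)."""
          sigma_y = 0.08 * x / np.sqrt(1 + 1e-4 * x)     # assumed neutral-stability fit
          sigma_z = 0.06 * x / np.sqrt(1 + 1.5e-3 * x)
          return (Q / (np.pi * u * sigma_y * sigma_z)
                  * np.exp(-y**2 / (2 * sigma_y**2))
                  * np.exp(-z**2 / (2 * sigma_z**2)))

      # Synthetic crosswind transect of enhancements (placeholder "measurements").
      rng = np.random.default_rng(4)
      y_obs = np.linspace(-80, 80, 41)
      c_obs = gaussian_plume(y_obs, Q=0.5) + rng.normal(scale=2e-5, size=y_obs.size)

      # Invert for the emission rate Q by least squares (only Q is fitted; p0 sets that).
      (q_est,), _ = curve_fit(gaussian_plume, y_obs, c_obs, p0=[0.1])
      print(f"estimated emission rate Q ~ {q_est:.3f} g/s")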

  4. Estimation of 1RM for knee extension based on the maximal isometric muscle strength and body composition.

    PubMed

    Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo

    2017-11-01

    [Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furthermore, multiple regression analysis was performed, with data regarding the body composition incorporated as another independent variable, in addition to the maximal isometric muscle strength. [Results] Through single regression analysis with the maximal isometric muscle strength as an independent variable, the following regression formula was created: 1RM (kg)=0.714 + 0.783 × maximal isometric muscle strength (kgf). On multiple regression analysis, only the total muscle mass was extracted. [Conclusion] A highly accurate regression formula to estimate 1RM was created based on both the maximal isometric muscle strength and body composition. Using a hand-held dynamometer and body composition analyzer, it was possible to measure these items in a short time, and obtain clinically useful results.
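
    The single-regression formula reported above can be applied directly, and the same kind of coefficients can be obtained from paired measurements with an ordinary least-squares fit. The paired data in the second half of the sketch are placeholders for illustration only; only the 0.714 + 0.783 × strength formula comes from the record.

      import numpy as np

      def estimate_1rm(max_isometric_kgf):
          """Single-regression estimate from the record: 1RM (kg) for knee extension
          from maximal isometric strength (kgf) measured with a hand-held dynamometer."""
          return 0.714 + 0.783 * max_isometric_kgf

      print(f"estimated 1RM for 40 kgf isometric strength: {estimate_1rm(40.0):.1f} kg")

      # How such coefficients are obtained from paired measurements (illustrative data).
      isometric = np.array([25.0, 32.0, 38.0, 45.0, 51.0])   # kgf (placeholder)
      one_rm = np.array([20.5, 26.0, 30.5, 36.0, 41.0])      # kg  (placeholder)
      slope, intercept = np.polyfit(isometric, one_rm, 1)
      print(f"fitted formula: 1RM = {intercept:.3f} + {slope:.3f} x isometric strength")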

  5. Beam current enhancement of microwave plasma ion source utilizing double-port rectangular cavity resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Yuna; Park, Yeong-Shin; Jo, Jong-Gab

    2012-02-15

    Microwave plasma ion source with rectangular cavity resonator has been examined to improve ion beam current by changing wave launcher type from single-port to double-port. The cavity resonators with double-port and single-port wave launchers are designed to get resonance effect at TE-103 mode and TE-102 mode, respectively. In order to confirm that the cavities are acting as resonator, the microwave power for breakdown is measured and compared with the E-field strength estimated from the HFSS (High Frequency Structure Simulator) simulation. Langmuir probe measurements show that double-port cavity enhances central density of plasma ion source by modifying non-uniform plasma density profile of the single-port cavity. Correspondingly, beam current from the plasma ion source utilizing the double-port resonator is measured to be higher than that utilizing single-port resonator. Moreover, the enhancement in plasma density and ion beam current utilizing the double-port resonator is more pronounced as higher microwave power applied to the plasma ion source. Therefore, the rectangular cavity resonator utilizing the double-port is expected to enhance the performance of plasma ion source in terms of ion beam extraction.

  6. Beam current enhancement of microwave plasma ion source utilizing double-port rectangular cavity resonator.

    PubMed

    Lee, Yuna; Park, Yeong-Shin; Jo, Jong-Gab; Yang, J J; Hwang, Y S

    2012-02-01

    Microwave plasma ion source with rectangular cavity resonator has been examined to improve ion beam current by changing wave launcher type from single-port to double-port. The cavity resonators with double-port and single-port wave launchers are designed to get resonance effect at TE-103 mode and TE-102 mode, respectively. In order to confirm that the cavities are acting as resonator, the microwave power for breakdown is measured and compared with the E-field strength estimated from the HFSS (High Frequency Structure Simulator) simulation. Langmuir probe measurements show that double-port cavity enhances central density of plasma ion source by modifying non-uniform plasma density profile of the single-port cavity. Correspondingly, beam current from the plasma ion source utilizing the double-port resonator is measured to be higher than that utilizing single-port resonator. Moreover, the enhancement in plasma density and ion beam current utilizing the double-port resonator is more pronounced as higher microwave power applied to the plasma ion source. Therefore, the rectangular cavity resonator utilizing the double-port is expected to enhance the performance of plasma ion source in terms of ion beam extraction.

  7. Motor unit number estimates correlate with strength in polio survivors.

    PubMed

    Sorenson, Eric J; Daube, Jasper R; Windebank, Anthony J

    2006-11-01

    Motor unit number estimation (MUNE) has been proposed as an outcome measure in clinical trials for the motor neuron diseases. One major criticism of MUNE is that it may not represent a clinically meaningful endpoint. We prospectively studied a cohort of polio survivors over a period of 15 years with respect to MUNE and strength. We identified a significant association between thenar MUNE and arm strength, extensor digitorum brevis MUNE and leg strength, and the summated MUNE and global strength of the polio survivors. These findings confirm the clinical relevance of MUNE as an outcome measure in the motor neuron diseases and provide further validation for its use in clinical trial research.

  8. A comparison of analysis methods to estimate contingency strength.

    PubMed

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.

  9. Shear Behavior Models of Steel Fiber Reinforced Concrete Beams Modifying Softened Truss Model Approaches.

    PubMed

    Hwang, Jin-Ha; Lee, Deuck Hang; Ju, Hyunjin; Kim, Kang Su; Seo, Soo-Yeon; Kang, Joo-Won

    2013-10-23

    Recognizing that steel fibers can supplement the brittle tensile characteristics of concrete, many studies have been conducted on the shear performance of steel fiber reinforced concrete (SFRC) members. However, previous studies were mostly focused on the shear strength and proposed empirical shear strength equations based on their experimental results. Thus, this study attempts to estimate the strains and stresses in steel fibers by considering the detailed characteristics of steel fibers in SFRC members, from which more accurate estimation of the shear behavior and strength of SFRC members is possible, and the failure mode of steel fibers can also be identified. Four shear behavior models for SFRC members have been proposed, which have been modified from the softened truss models for reinforced concrete members, and they can estimate the contribution of steel fibers to the total shear strength of the SFRC member. The performances of all the models proposed in this study were also evaluated against a large number of test results. The contribution of steel fibers to the shear strength varied from 5% to 50% according to their amount, and the most optimized volume fraction of steel fibers was estimated as 1%-1.5%, in terms of shear performance.

  10. Application of the Zero-Order Reaction Rate Model and Transition State Theory to predict porous Ti6Al4V bending strength.

    PubMed

    Reig, L; Amigó, V; Busquets, D; Calero, J A; Ortiz, J L

    2012-08-01

    Porous Ti6Al4V samples were produced by microsphere sintering. The Zero-Order Reaction Rate Model and Transition State Theory were used to model the sintering process and to estimate the bending strength of the porous samples developed. The evolution of the surface area during the sintering process was used to obtain sintering parameters (sintering constant, activation energy, frequency factor, constant of activation and Gibbs energy of activation). These were then correlated with the bending strength in order to obtain a simple model with which to estimate the evolution of the bending strength of the samples when the sintering temperature and time are modified: σY=P+B·[lnT·t-ΔGa/R·T]. Although the sintering parameters were obtained only for the microsphere sizes analysed here, the strength of intermediate sizes could easily be estimated following this model. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Terrace width variations in complex Mercurian craters and the transient strength of cratered Mercurian and lunar crust

    NASA Technical Reports Server (NTRS)

    Leith, Andrew C.; Mckinnon, William B.

    1991-01-01

    The effective cohesion of the cratered region during crater collapse is determined via the widths of slump terraces of complex craters. Terrace widths are measured for complex craters on Mercury; these generally increase outward toward the rim for a given crater, and the width of the outermost major terrace is generally an increasing function of crater diameter. The terrace widths on Mercury and a gravity-driven slump model are used to estimate the strength of the cratered region immediately after impact (about 1-2 MPa). A comparison with the previous study of lunar complex craters by Pearce and Melosh (1986) indicates that the transient strength of cratered Mercurian crust is no greater than that of the moon. The strength estimates vary only slightly with the geometric model used to restore the outermost major terrace to its precollapse configuration and are consistent with independent strength estimates from the simple-to-complex crater depth/diameter transition.

  12. Influences of cement source and sample of cement source on compressive strength variability of gravel aggregate concrete.

    DOT National Transportation Integrated Search

    2013-06-01

    The strength of concrete is influenced by each constituent material used in the concrete : mixture and the proportions of each ingredient. Water-cementitious ratio, cementitious materials, air : content, chemical admixtures, and type of coarse aggreg...

  13. Is the northern high latitude land-based CO2 sink weakening?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mcguire, David; Kicklighter, David W.; Gurney, Kevin R

    2011-01-01

    Studies indicate that, historically, terrestrial ecosystems of the northern high latitude region may have been responsible for up to 60% of the global net land-based sink for atmospheric CO2. However, these regions have recently experienced remarkable modification of the major driving forces of the carbon cycle, including surface air temperature warming that is significantly greater than the global average and associated increases in the frequency and severity of disturbances. Whether arctic tundra and boreal forest ecosystems will continue to sequester atmospheric CO2 in the face of these dramatic changes is unknown. Here we show the results of model simulations that estimate a 41 Tg C yr-1 sink in the boreal land regions from 1997 to 2006, which represents a 73% reduction in the strength of the sink estimated for previous decades in the late 20th Century. Our results suggest that CO2 uptake by the region in previous decades may not be as strong as previously estimated. The recent decline in sink strength is the combined result of 1) weakening sinks due to warming-induced increases in soil organic matter decomposition and 2) strengthening sources from pyrogenic CO2 emissions as a result of the substantial area of boreal forest burned in wildfires across the region in recent years. Such changes create positive feedbacks to the climate system that accelerate global warming, putting further pressure on emission reductions to achieve atmospheric stabilization targets.

  14. Is the northern high-latitude land-based CO2 sink weakening?

    USGS Publications Warehouse

    Hayes, D.J.; McGuire, A.D.; Kicklighter, D.W.; Gurney, K.R.; Burnside, T.J.; Melillo, J.M.

    2011-01-01

    Studies indicate that, historically, terrestrial ecosystems of the northern high-latitude region may have been responsible for up to 60% of the global net land-based sink for atmospheric CO2. However, these regions have recently experienced remarkable modification of the major driving forces of the carbon cycle, including surface air temperature warming that is significantly greater than the global average and associated increases in the frequency and severity of disturbances. Whether Arctic tundra and boreal forest ecosystems will continue to sequester atmospheric CO2 in the face of these dramatic changes is unknown. Here we show the results of model simulations that estimate a 41 Tg C yr-1 sink in the boreal land regions from 1997 to 2006, which represents a 73% reduction in the strength of the sink estimated for previous decades in the late 20th century. Our results suggest that CO 2 uptake by the region in previous decades may not be as strong as previously estimated. The recent decline in sink strength is the combined result of (1) weakening sinks due to warming-induced increases in soil organic matter decomposition and (2) strengthening sources from pyrogenic CO2 emissions as a result of the substantial area of boreal forest burned in wildfires across the region in recent years. Such changes create positive feedbacks to the climate system that accelerate global warming, putting further pressure on emission reductions to achieve atmospheric stabilization targets. Copyright 2011 by the American Geophysical Union.

  15. An approach for estimating the magnetization direction of magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Li, Jinpeng; Zhang, Yingtang; Yin, Gang; Fan, Hongbo; Li, Zhining

    2017-02-01

    An approach for estimating the magnetization direction of magnetic anomalies in the presence of remanent magnetization through correlation between normalized source strength (NSS) and reduced-to-the-pole (RTP) is proposed. The observation region was divided into several calculation areas and the RTP field was transformed using different assumed values of the magnetization directions. Following this, the cross-correlation between NSS and RTP field was calculated, and it was found that the correct magnetization direction was that corresponding to the maximum cross-correlation value. The approach was tested on both simulated and real magnetic data. The results showed that the approach was effective in a variety of situations and considerably reduced the effect of remanent magnetization. Thus, the method using NSS and RTP is more effective compared to other methods such as using the total magnitude anomaly and RTP.
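
    The search loop described above can be sketched as a scan over candidate inclination/declination pairs, keeping the pair whose RTP field correlates best with the NSS. The wavenumber-domain RTP and NSS operators are non-trivial, so the sketch below uses a toy stand-in for the RTP transform whose output degrades away from a hidden "true" direction; it demonstrates only the correlation-maximization logic, not the potential-field processing itself.

      import numpy as np

      rng = np.random.default_rng(5)
      nss = rng.normal(size=(64, 64))                 # stand-in NSS map
      true_inc, true_dec = 35.0, -20.0                # hidden "true" magnetization (deg)

      def rtp_stub(inclination, declination):
          """Toy stand-in for the RTP transform: its output resembles the NSS most
          closely when the assumed direction matches the true magnetization."""
          err = np.hypot(inclination - true_inc, declination - true_dec)
          return nss + 0.05 * err * rng.normal(size=nss.shape)

      best = (-np.inf, None, None)
      for inc in range(-90, 91, 5):
          for dec in range(-180, 181, 5):
              corr = np.corrcoef(nss.ravel(), rtp_stub(inc, dec).ravel())[0, 1]
              if corr > best[0]:
                  best = (corr, inc, dec)

      print(f"max correlation {best[0]:.3f} at inclination {best[1]} deg, declination {best[2]} deg")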

  16. Reference Values of Grip Strength, Prevalence of Low Grip Strength, and Factors Affecting Grip Strength Values in Chinese Adults.

    PubMed

    Yu, Ruby; Ong, Sherlin; Cheung, Osbert; Leung, Jason; Woo, Jean

    2017-06-01

    The objectives of this study were to update the reference values of grip strength, to estimate the prevalence of low grip strength, and to examine the impact of different aspects of measurement protocol on grip strength values in Chinese adults. A cross-sectional survey of Chinese men (n = 714) and women (n = 4014) aged 18-102 years was undertaken in different community settings in Hong Kong. Grip strength was measured with a digital dynamometer (TKK 5401 Grip-D; Takei, Niigata, Japan). Low grip strength was defined as grip strength 2 standard deviations or more below the mean for young adults. The effects of measurement protocol on grip strength values were examined in a subsample of 45 men and women with repeated measures of grip strength taken with a hydraulic dynamometer (Baseline; Fabrication Enterprises Inc, Irvington, NY), using paired t-tests, the intraclass correlation coefficient, and Bland and Altman plots. Grip strength was greater among men than among women (P < .001) and the rate of decline differed between sexes (P < .001). The prevalence of low grip strength also increased with age, reaching a rate of 16.5% in men and 20.6% in women aged 65+. Although the TKK digital dynamometer gave higher grip strength values than the Baseline hydraulic dynamometer (P < .001), the degree of agreement between the 2 dynamometers was satisfactory. Higher grip strength values were also observed when the measurement was performed with the elbow extended in a standing position, compared with that with the elbow flexed at 90° in a sitting position, using the same dynamometer (P < .05). This study updated the reference values of grip strength and estimated the prevalence of low grip strength among Chinese adults spanning a wide age range. These findings might be useful for risk estimation and evaluation of interventions. However, grip strength measurements should be interpreted with caution, as grip strength values can be affected by the type of dynamometer used, assessment posture, and elbow position. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
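
    For illustration only, a small Python sketch of the cutoff used above (two standard deviations below the young-adult mean) and the resulting prevalence estimate; the numbers are invented placeholders, not the study's data.

        # Low grip strength: values below (young-adult mean - 2 SD).
        import numpy as np

        rng = np.random.default_rng(0)
        young = rng.normal(32.0, 6.0, size=500)   # hypothetical young-adult grip (kg)
        older = rng.normal(24.0, 6.5, size=800)   # hypothetical older-adult grip (kg)

        cutoff = young.mean() - 2.0 * young.std(ddof=1)
        prevalence = np.mean(older < cutoff)
        print(f"cutoff = {cutoff:.1f} kg, prevalence of low grip strength = {prevalence:.1%}")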

  17. A theory of photometric stereo for a class of diffuse non-Lambertian surfaces

    NASA Technical Reports Server (NTRS)

    Tagare, Hemant D.; Defigueiredo, Rui J. P.

    1991-01-01

    A theory of photometric stereo is proposed for a large class of non-Lambertian reflectance maps. The authors review the different reflectance maps proposed in the literature for modeling reflection from real-world surfaces. From this, they obtain a mathematical class of reflectance maps to which the maps belong. They show that three lights can be sufficient for a unique inversion of the photometric stereo equation for the entire class of reflectance maps. They also obtain a constraint on the positions of light sources for obtaining this solution. They investigate the sufficiency of three light sources to estimate the surface normal and the illuminant strength. The issue of completeness of reconstruction is addressed. They show that if k lights are sufficient for a unique inversion, 2k lights are necessary for a complete inversion.
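
    For orientation, here is the classical Lambertian three-light inversion that the paper generalizes to non-Lambertian reflectance maps; this sketch is not the authors' method, only the baseline case it builds on.

        # Lambertian photometric stereo: solve L @ (albedo * n) = I with three
        # non-coplanar light directions, then split albedo and unit normal.
        import numpy as np

        def lambertian_photometric_stereo(L, I):
            """L: (3,3) rows are unit light directions; I: (3,) intensities."""
            g = np.linalg.solve(L, I)       # scaled normal, requires non-coplanar lights
            albedo = np.linalg.norm(g)
            return g / albedo, albedo

        L = np.array([[0.0, 0.0, 1.0],
                      [0.7, 0.0, 0.714],
                      [0.0, 0.7, 0.714]])
        true_n = np.array([0.1, 0.2, 0.97])
        true_n /= np.linalg.norm(true_n)
        I = 0.8 * L @ true_n                # synthetic Lambertian measurements
        print(lambertian_photometric_stereo(L, I))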

  18. Combining Radiography and Passive Measurements for Radiological Threat Detection in Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.

    Radiography is widely understood to provide information complementary to passive detection: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions which may mask a passive radiological signal. We present a method for combining radiographic and passive data which uses the radiograph to provide an estimate of scatter and attenuation for possible sources. This approach allows quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present first results for this method for a simple modeled test case of a cargo container driving through a PVT portal. With this inversion approach, we address criteria for an integrated passive and radiographic screening system and how detection of SNM threats might be improved in such a system.

  19. Early somatosensory processing in individuals at risk for developing psychoses.

    PubMed

    Hagenmuller, Florence; Heekeren, Karsten; Theodoridou, Anastasia; Walitza, Susanne; Haker, Helene; Rössler, Wulf; Kawohl, Wolfram

    2014-01-01

    Human cortical somatosensory evoked potentials (SEPs) allow an accurate investigation of thalamocortical and early cortical processing. SEPs reveal a burst of superimposed early (N20) high-frequency oscillations around 600 Hz. Previous studies reported alterations of SEPs in patients with schizophrenia. This study addresses the question of whether those alterations are also observable in populations at risk for developing schizophrenia or bipolar disorders. To our knowledge, this is the first study investigating SEPs in a population at risk for developing psychoses. Median nerve SEPs were investigated using multichannel EEG in individuals at risk for developing bipolar disorders (n = 25), individuals with high-risk status (n = 59) and ultra-high-risk status for schizophrenia (n = 73) and a gender- and age-matched control group (n = 45). Strengths and latencies of low- and high-frequency components as estimated by dipole source analysis were compared between groups. Low- and high-frequency source activity was reduced in both groups at risk for schizophrenia, in comparison to the group at risk for bipolar disorders. HFO amplitudes were also significantly reduced in subjects with high-risk status for schizophrenia compared to healthy controls. These differences were accentuated among cannabis non-users. Reduced N20 source strengths were related to higher positive symptom load. These results suggest that the risk for schizophrenia, in contrast to bipolar disorders, may involve an impairment of early cerebral somatosensory processing. Neurophysiologic alterations in schizophrenia precede the onset of the initial psychotic episode and may serve as an indicator of vulnerability for developing schizophrenia.

  20. Early somatosensory processing in individuals at risk for developing psychoses

    PubMed Central

    Hagenmuller, Florence; Heekeren, Karsten; Theodoridou, Anastasia; Walitza, Susanne; Haker, Helene; Rössler, Wulf; Kawohl, Wolfram

    2014-01-01

    Human cortical somatosensory evoked potentials (SEPs) allow an accurate investigation of thalamocortical and early cortical processing. SEPs reveal a burst of superimposed early (N20) high-frequency oscillations around 600 Hz. Previous studies reported alterations of SEPs in patients with schizophrenia. This study addresses the question of whether those alterations are also observable in populations at risk for developing schizophrenia or bipolar disorders. To our knowledge, this is the first study investigating SEPs in a population at risk for developing psychoses. Median nerve SEPs were investigated using multichannel EEG in individuals at risk for developing bipolar disorders (n = 25), individuals with high-risk status (n = 59) and ultra-high-risk status for schizophrenia (n = 73) and a gender- and age-matched control group (n = 45). Strengths and latencies of low- and high-frequency components as estimated by dipole source analysis were compared between groups. Low- and high-frequency source activity was reduced in both groups at risk for schizophrenia, in comparison to the group at risk for bipolar disorders. HFO amplitudes were also significantly reduced in subjects with high-risk status for schizophrenia compared to healthy controls. These differences were accentuated among cannabis non-users. Reduced N20 source strengths were related to higher positive symptom load. These results suggest that the risk for schizophrenia, in contrast to bipolar disorders, may involve an impairment of early cerebral somatosensory processing. Neurophysiologic alterations in schizophrenia precede the onset of the initial psychotic episode and may serve as an indicator of vulnerability for developing schizophrenia. PMID:25309363

  1. Identification and modification of dominant noise sources in diesel engines

    NASA Astrophysics Data System (ADS)

    Hayward, Michael D.

    Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far-fields with data from fewer tests than is currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations results in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources, that, when scaled and added, result in the input cross spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far-field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near- and far-fields from one fired test, which significantly reduces the need for extensive fired and motored testing. Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
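
    A compact sketch of the virtual-source step described above: build the cross-spectral density matrix of the near-field channels with Welch cross-spectra, then take its singular value decomposition at each frequency so that the singular values act as power spectra of independent virtual sources. The mixing matrix and signals below are synthetic stand-ins, not engine data.

        # CSD matrix per frequency, then SVD: singular values = virtual-source PSDs.
        import numpy as np
        from scipy.signal import csd

        fs, n = 8192, 4 * 8192
        rng = np.random.default_rng(1)
        s1, s2 = rng.standard_normal(n), rng.standard_normal(n)      # two independent sources
        mix = np.array([[1.0, 0.2], [0.6, 0.9], [0.1, 1.0]])          # 3 sensors x 2 sources
        x = mix @ np.vstack([s1, s2])                                 # near-field measurements

        nchan = x.shape[0]
        f, _ = csd(x[0], x[0], fs=fs, nperseg=1024)
        C = np.zeros((len(f), nchan, nchan), dtype=complex)
        for i in range(nchan):
            for j in range(nchan):
                _, C[:, i, j] = csd(x[i], x[j], fs=fs, nperseg=1024)

        sv = np.linalg.svd(C, compute_uv=False)   # (n_freq, nchan) singular values
        print("mean virtual-source PSDs:", sv.mean(axis=0))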

  2. Who, What, When, Where? Determining the Health Implications of Wildfire Smoke Exposure

    NASA Astrophysics Data System (ADS)

    Ford, B.; Lassman, W.; Gan, R.; Burke, M.; Pfister, G.; Magzamen, S.; Fischer, E. V.; Volckens, J.; Pierce, J. R.

    2016-12-01

    Exposure to poor air quality is associated with negative impacts on human health. A large natural source of PM in the western U.S. is wildland fires. Accurately attributing health endpoints to wildland-fire smoke requires a determination of the exposed population. This is a difficult endeavor because most current methods for monitoring air quality do not provide high temporal and spatial resolution. Therefore, there is a growing effort to include multiple datasets and create blended products of smoke exposure that can exploit the strengths of each dataset. In this work, we combine model (WRF-Chem) simulations, NASA satellite (MODIS) observations, and in-situ surface monitors to improve exposure estimates. We will also introduce a social-media dataset of self-reported smoke/haze/pollution to improve population-level exposure estimates for the summer of 2015. Finally, we use these detailed exposure estimates in different epidemiologic study designs to provide an in-depth understanding of the role wildfire exposure plays in health outcomes.

  3. Estimates of electricity requirements for the recovery of mineral commodities, with examples applied to sub-Saharan Africa

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2011-01-01

    To produce materials from mine to market it is necessary to overcome obstacles that include the force of gravity, the strength of molecular bonds, and technological inefficiencies. These challenges are met by the application of energy to accomplish the work that includes the direct use of electricity, fossil fuel, and manual labor. The tables and analyses presented in this study contain estimates of electricity consumption for the mining and processing of ores, concentrates, intermediate products, and industrial and refined metallic commodities on a kilowatt-hour per unit basis, primarily the metric ton or troy ounce. Data contained in tables pertaining to specific currently operating facilities are static, as the amount of electricity consumed to process or produce a unit of material changes over time for a great number of reasons. Estimates were developed from diverse sources that included feasibility studies, company-produced annual and sustainability reports, conference proceedings, discussions with government and industry experts, journal articles, reference texts, and studies by nongovernmental organizations.

  4. Locating sources within a dense sensor array using graph clustering

    NASA Astrophysics Data System (ADS)

    Gerstoft, P.; Riahi, N.

    2017-12-01

    We develop a model-free technique to identify weak sources within dense sensor arrays using graph clustering. No knowledge about the propagation medium is needed except that signal strengths decay to insignificant levels within a scale that is shorter than the aperture. We then reinterpret the spatial coherence matrix of a wave field as a matrix whose support is a connectivity matrix of a graph with sensors as vertices. In a dense network, well-separated sources induce clusters in this graph. The geographic spread of these clusters can serve to localize the sources. The support of the covariance matrix is estimated from limited-time data using a hypothesis test with a robust phase-only coherence test statistic combined with a physical distance criterion. The latter criterion ensures graph sparsity and thus prevents clusters from forming by chance. We verify the approach and quantify its reliability on a simulated dataset. The method is then applied to data from a dense 5200-element geophone array that blanketed the city of Long Beach, CA. The analysis exposes a helicopter traversing the array and oil production facilities.
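
    A minimal sketch of the clustering step, assuming the coherence matrix and sensor positions are already available: edges are kept only where coherence exceeds a threshold and the sensors are physically close, and connected components of the resulting graph are localized by the centroid of their sensor positions. Thresholds and sizes here are illustrative.

        # Threshold coherence + distance criterion -> sparse graph -> clusters.
        import numpy as np
        import networkx as nx

        def cluster_sources(coherence, positions, coh_thresh=0.6, max_dist=1.0):
            """coherence: (n,n) matrix; positions: (n,2) array of sensor coordinates."""
            n = coherence.shape[0]
            G = nx.Graph()
            G.add_nodes_from(range(n))
            for i in range(n):
                for j in range(i + 1, n):
                    if (coherence[i, j] > coh_thresh and
                            np.linalg.norm(positions[i] - positions[j]) < max_dist):
                        G.add_edge(i, j)
            clusters = [c for c in nx.connected_components(G) if len(c) > 2]
            # Localize each cluster by the centroid of its member sensors
            return [positions[list(c)].mean(axis=0) for c in clusters]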

  5. Seasonal variations in elemental carbon aerosol, carbon monoxide and sulfur dioxide: Implications for sources

    NASA Astrophysics Data System (ADS)

    Antony Chen, L.-W.; Doddridge, Bruce G.; Dickerson, Russell R.; Chow, Judith C.; Mueller, Peter K.; Quinn, John; Butler, William A.

    As part of Maryland Aerosol Research and CHaracterization (MARCH-Atlantic) study, measurements of 24-hr average elemental carbon (EC) aerosol concentration were made at Fort Meade, Maryland, USA, a suburban site within the Baltimore-Washington corridor during July 1999, October 1999, January 2000, April 2000 and July 2000. Carbon monoxide (CO) and sulfur dioxide (SO2) were also measured nearly continuously over the period. Tight correlation between EC and CO in every month suggests common or proximate sources, likely traffic emissions. The EC versus CO slope varies in different seasons and generally increases with ambient temperature. The temperature dependence of EC/CO ratios suggests that EC source strength peaks in summer. By using the well established emission inventory for CO, and EC/CO ratio found in this study, EC emission over North America is estimated at 0.31±0.12 Tg yr-1, on the low end but in reasonable agreement with prior inventories based on emission factors and fuel consumption.

  6. Seasonal variations in elemental carbon aerosol, carbon monoxide and sulfur dioxide: Implications for sources

    NASA Astrophysics Data System (ADS)

    Chen, L.-W. Antony; Doddridge, Bruce G.; Dickerson, Russell R.; Chow, Judith C.; Mueller, Peter K.; Quinn, John; Butler, William A.

    2001-05-01

    As part of Maryland Aerosol Research and CHaracterization (MARCH-Atlantic) study, measurements of 24-hr average elemental carbon (EC) aerosol concentration were made at Fort Meade, Maryland, USA, a suburban site within the Baltimore-Washington corridor during July 1999, October 1999, January 2000, April 2000 and July 2000. Carbon monoxide (CO) and sulfur dioxide (SO2) were also measured nearly continuously over the period. Tight correlation between EC and CO in every month suggests common or proximate sources, likely traffic emissions. The EC versus CO slope varies in different seasons and generally increases with ambient temperature. The temperature dependence of EC/CO ratios suggests that EC source strength peaks in summer. By using the well established emission inventory for CO, and EC/CO ratio found in this study, EC emission over North America is estimated at 0.31 ± 0.12 Tg yr-1, on the low end but in reasonable agreement with prior inventories based on emission factors and fuel consumption.

  7. Central Compact Objects: some of them could be spinning up?

    NASA Astrophysics Data System (ADS)

    Benli, O.; Ertan, Ü.

    2018-05-01

    Among confirmed central compact objects (CCOs), only three sources have measured period and period derivatives. We have investigated possible evolutionary paths of these three CCOs in the fallback disc model. The model can account for the individual X-ray luminosities and rotational properties of the sources consistently with their estimated supernova ages. For these sources, reasonable model curves can be obtained with dipole field strengths ˜ a few × 10⁹ G on the surface of the star. The model curves indicate that these CCOs were in the spin-up state in the early phase of evolution. The spin-down starts, while accretion is going on, at a time t ˜ 10³-10⁴ yr depending on the current accretion rate, period and the magnetic dipole moment of the star. This implies that some of the CCOs with relatively long periods, weak dipole fields and high X-ray luminosities could be strong candidates to show spin-up behavior if they indeed evolve with fallback discs.

  8. Analysis and correction of linear optics errors, and operational improvements in the Indus-2 storage ring

    NASA Astrophysics Data System (ADS)

    Husain, Riyasat; Ghodke, A. D.

    2017-08-01

    Estimation and correction of the optics errors in an operational storage ring are vital to achieving the design performance. For this task, the most suitable and widely used technique, called linear optics from closed orbit (LOCO), is used in almost all storage ring based synchrotron radiation sources. In this technique, based on the response matrix fit, errors in the quadrupole strengths, beam position monitor (BPM) gains, orbit corrector calibration factors, etc., can be obtained. For correction of the optics, suitable changes in the quadrupole strengths can be applied through the driving currents of the quadrupole power supplies to achieve the desired optics. The LOCO code has been used at the Indus-2 storage ring for the first time. The estimation of linear beam optics errors and their correction to minimize the distortion of linear beam dynamical parameters by using the installed number of quadrupole power supplies is discussed. After the optics correction, the performance of the storage ring is improved in terms of better beam injection/accumulation, reduced beam loss during energy ramping, and improvement in beam lifetime. It is also useful in controlling the leakage in the orbit bump required for machine studies or for commissioning of new beamlines.
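
    A toy illustration of the LOCO principle, not the LOCO code itself: given a model function response_matrix(k) (a hypothetical stand-in for the optics code) that maps quadrupole strengths to the orbit response matrix, the strength errors are fitted by repeatedly linearizing the model and solving a least-squares problem against the measured response matrix.

        # Gauss-Newton fit of quadrupole strengths to a measured response matrix.
        import numpy as np

        def loco_fit(measured_R, response_matrix, k0, delta=1e-4, n_iter=5):
            k = k0.copy()
            for _ in range(n_iter):
                R0 = response_matrix(k)
                residual = (measured_R - R0).ravel()
                # Numerical Jacobian: d(vec R)/dk_j by finite differences
                J = np.column_stack([
                    (response_matrix(k + delta * np.eye(len(k))[j]) - R0).ravel() / delta
                    for j in range(len(k))
                ])
                dk, *_ = np.linalg.lstsq(J, residual, rcond=None)
                k += dk
            return k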

  9. Aeroacoustic model of a modulation fan with pitching blades as a sound generator.

    PubMed

    Du, Lin; Jing, Xiaodong; Sun, Xiaofeng; Song, Weihua

    2014-10-01

    This paper develops an aeroacoustic model for a type of modulation fan, termed a rotary subwoofer, that is capable of radiating low-frequency sound at high sound pressure levels. The rotary subwoofer is modeled as a baffled monopole whose source strength is specified by the fluctuating mass flow rate produced by the pitching blades that rotate at constant speed. An immersed boundary method is established to simulate the detailed unsteady flow around the blades and also to estimate the source strength for the prediction of the far-field sound pressure level (SPL). The numerical simulation shows that the rotary subwoofer can output oscillating air flow that is in phase with the pitching motion of the blades. It is found that flow separation is more likely to occur on the pitching blades at higher modulation frequency, resulting in the reduction of the radiated SPL. Increasing the maximum blade excursion is one of the most effective means to enhance the sound radiation, but this effect can also be compromised by the flow separation. As the modulation frequency increases, correspondingly increasing the rotational speed or using larger blade solidity is beneficial to suppressing the flow separation and thus improving the acoustic performance of the rotary subwoofer.

  10. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    NASA Astrophysics Data System (ADS)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method originated from machine learning and have not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
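
    A short sketch of a per-pixel bias-variance decomposition of the kind described above, assuming an ensemble of model runs (e.g. regression trees trained on bootstrap samples) and a reference map are available; the arrays below are synthetic.

        # For each pixel: bias^2 = (mean prediction - reference)^2, variance = ensemble spread.
        import numpy as np

        def per_pixel_bvd(predictions, reference):
            """predictions: (n_models, H, W); reference: (H, W) validation map."""
            mean_pred = predictions.mean(axis=0)
            bias_sq = (mean_pred - reference) ** 2
            variance = predictions.var(axis=0)
            return bias_sq, variance          # maps with the same shape as the reference

        preds = np.random.default_rng(2).normal(40, 5, size=(25, 64, 64))   # synthetic %
        ref = np.full((64, 64), 42.0)
        b2, var = per_pixel_bvd(preds, ref)
        print(b2.mean(), var.mean())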

  11. Laboratory study of PCB transport from primary sources to settled dust.

    PubMed

    Liu, Xiaoyu; Guo, Zhishi; Krebs, Kenneth A; Greenwell, Dale J; Roache, Nancy F; Stinson, Rayford A; Nardin, Joshua A; Pope, Robert H

    2016-04-01

    Dust is an important sink for indoor air pollutants, such as polychlorinated biphenyls (PCBs) that were used in building materials and products. In this study, two types of dust, house dust and Arizona Test Dust, were tested in a 30-m(3) stainless steel chamber with two types of panels. The PCB-containing panels were aluminum sheets coated with a PCB-spiked primer or caulk. The PCB-free panels were coated with the same materials but without PCBs. The dust evenly spread on each panel was collected at different times to determine its PCB content. The data from the PCB panels were used to evaluate the PCB migration from the source to the dust through direct contact, and the data from the PCB-free panels were used to evaluate the sorption of PCBs through the dust/air partition. Settled dust can adsorb PCBs from air. The sorption concentration was dependent on the congener concentration in the air and favored less volatile congeners. When the house dust was in direct contact with the PCB-containing panel, PCBs migrated into the dust at a much faster rate than the PCB transfer rate due to the dust/air partition. The dust/source partition was not significantly affected by the congener's volatility. For a given congener, the ratio between its concentration in the dust and in the source was used to estimate the dust/source partition coefficient. The estimated values ranged from 0.04 to 0.16. These values are indicative of the sink strength of the tested house dust being in the middle or lower-middle range. Published by Elsevier Ltd.

  12. Emission ratio and isotopic signatures of molecular hydrogen emissions from tropical biomass burning

    NASA Astrophysics Data System (ADS)

    Haumann, F. A.; Batenburg, A. M.; Pieterse, G.; Gerbig, C.; Krol, M. C.; Röckmann, T.

    2013-09-01

    In this study, we identify a biomass-burning signal in molecular hydrogen (H2) over the Amazonian tropical rainforest. To quantify this signal, we measure the mixing ratios of H2 and several other species as well as the H2 isotopic composition in air samples that were collected in the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) aircraft campaign during the dry season. We derive a relative H2 emission ratio with respect to carbon monoxide (CO) of 0.31 ± 0.04 ppb ppb-1 and an isotopic source signature of -280 ± 41‰ in the air masses influenced by tropical biomass burning. In order to retrieve a clear source signal that is not influenced by the soil uptake of H2, we exclude samples from the atmospheric boundary layer. This procedure is supported by data from a global chemistry transport model. The ΔH2 / ΔCO emission ratio is significantly lower than some earlier estimates for the tropical rainforest. In addition, our results confirm the lower values of the previously conflicting estimates of the H2 isotopic source signature from biomass burning. These values for the emission ratio and isotopic source signatures of H2 from tropical biomass burning can be used in future bottom-up and top-down approaches aiming to constrain the strength of the biomass-burning source for H2. Hitherto, these two quantities relied only on combustion experiments or on statistical relations, since no direct signal had been obtained from in-situ observations.

  13. Emission ratio and isotopic signatures of molecular hydrogen emissions from tropical biomass burning

    NASA Astrophysics Data System (ADS)

    Haumann, F. A.; Batenburg, A. M.; Pieterse, G.; Gerbig, C.; Krol, M. C.; Röckmann, T.

    2013-04-01

    In this study, we identify a biomass-burning signal in molecular hydrogen (H2) over the Amazonian tropical rainforest. To quantify this signal, we measure the mixing ratios of H2 and several other species as well as the H2 isotopic composition in air samples that were collected in the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) aircraft campaign during the dry season. We derive a relative H2 emission ratio with respect to carbon monoxide (CO) of 0.31 ± 0.04 ppb/ppb and an isotopic source signature of -280 ± 41‰ in the air masses influenced by tropical biomass burning. In order to retrieve a clear source signal that is not influenced by the soil uptake of H2, we exclude samples from the atmospheric boundary layer. This procedure is supported by data from a global chemistry transport model. The ΔH2/ΔCO emission ratio is significantly lower than some earlier estimates for the tropical rainforest. In addition, our results confirm the lower values of the previously conflicting estimates of the H2 isotopic source signature from biomass burning. These values for the emission ratio and isotopic source signatures of H2 from tropical biomass burning can be used in future bottom-up and top-down approaches aiming to constrain the strength of the biomass-burning source for H2. Hitherto, these two quantities relied only on combustion experiments or on statistical relations, since no direct signal had been obtained from in-situ observations.

  14. Models for estimating and projecting global, regional and national prevalence and disease burden of asthma: protocol for a systematic review.

    PubMed

    Bhuia, Mohammad Romel; Nwaru, Bright I; Weir, Christopher J; Sheikh, Aziz

    2017-05-17

    Models that have so far been used to estimate and project the prevalence and disease burden of asthma are in most cases inadequately described and irreproducible. We aim systematically to describe and critique the existing models in relation to their strengths, limitations and reproducibility, and to determine the appropriate models for estimating and projecting the prevalence and disease burden of asthma. We will search the following electronic databases to identify relevant literature published from 1980 to 2017: Medline, Embase, WHO Library and Information Services and Web of Science Core Collection. We will identify additional studies by searching the reference list of all the retrieved papers and contacting experts. We will include observational studies that used models for estimating and/or projecting prevalence and disease burden of asthma regarding human populations of any age and sex. Two independent reviewers will assess the studies for inclusion and extract data from included papers. Data items will include authors' names, publication year, study aims, data source and time period, study population, asthma outcomes, study methodology, model type, model settings, study variables, methods of model derivation, methods of parameter estimation and/or projection, model fit information, key findings and identified research gaps. A detailed critical narrative synthesis of the models will be undertaken in relation to their strengths, limitations and reproducibility. A quality assessment checklist and scoring framework will be used to determine the appropriate models for estimating and projecting the prevalence and disease burden of asthma. We will not collect any primary data for this review, and hence there is no need for formal National Health Services Research Ethics Committee approval. We will present our findings at scientific conferences and publish the findings in a peer-reviewed scientific journal. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  15. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding of the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, and proportioning new mixtures and for quality assurance. Regression techniques are most widely used for prediction tasks where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of the regression techniques for prediction can be improved if clustering can be used along with regression. Clustering along with regression ensures more accurate curve fitting between the dependent and independent variables. In this work, a cluster-regression technique is applied for estimating the compressive strength of concrete, and a novel state-of-the-art approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression results in lower prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and then, in the second stage, regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting the compressive strength of concrete; also, the fuzzy C-means clustering algorithm performs better than the K-means algorithm.
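
    A compact sketch of the two-stage idea, shown here with hard K-means clustering and linear regression from scikit-learn (the paper also evaluates fuzzy C-means, which is not part of scikit-learn); the features and strengths below are synthetic placeholders.

        # Stage 1: cluster the mixtures; Stage 2: fit one regression per cluster
        # and predict with the model of the assigned cluster.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        X = rng.uniform(size=(300, 5))                                   # mix proportions, age, ...
        y = 20 + 30 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 300)     # strength (MPa)

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
        models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
                  for c in range(km.n_clusters)}

        X_new = rng.uniform(size=(5, 5))
        labels = km.predict(X_new)
        y_hat = np.array([models[c].predict(x[None, :])[0] for c, x in zip(labels, X_new)])
        print(y_hat)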

  16. Estimating the Concrete Compressive Strength Using Hard Clustering and Fuzzy Clustering Based Regression Techniques

    PubMed Central

    Nagwani, Naresh Kumar; Deo, Shirish V.

    2014-01-01

    Understanding of the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, and proportioning new mixtures and for quality assurance. Regression techniques are most widely used for prediction tasks where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of the regression techniques for prediction can be improved if clustering can be used along with regression. Clustering along with regression ensures more accurate curve fitting between the dependent and independent variables. In this work, a cluster-regression technique is applied for estimating the compressive strength of concrete, and a novel state-of-the-art approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression results in lower prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and then, in the second stage, regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting the compressive strength of concrete; also, the fuzzy C-means clustering algorithm performs better than the K-means algorithm. PMID:25374939

  17. Associations between personal exposures and ambient concentrations of nitrogen dioxide: A quantitative research synthesis

    NASA Astrophysics Data System (ADS)

    Meng, Q. Y.; Svendsgaard, D.; Kotchmar, D. J.; Pinto, J. P.

    2012-09-01

    Although positive associations between ambient NO2 concentrations and personal exposures have generally been found by exposure studies, the strength of the associations varied among studies. Differences in results could be related to differences in study design and in exposure factors. However, the effects of study design, exposure factors, and sampling and measurement errors on the strength of the personal-ambient associations have not been evaluated quantitatively in a systematic manner. A quantitative research synthesis was conducted to examine these issues based on peer-reviewed publications in the past 30 years. Factors affecting the strength of the personal-ambient associations across the studies were also examined with meta-regression. Ambient NO2 was found to be significantly associated with personal NO2 exposures, with estimates of 0.42, 0.16, and 0.72 for overall pooled, longitudinal and daily average correlation coefficients based on random-effects meta-analysis. This conclusion was robust after correction for publication bias with correlation coefficients of 0.37, 0.16 and 0.45. We found that season and some population characteristics, such as pre-existing disease, were significant factors affecting the strength of the personal-ambient associations. More meaningful and rigorous comparisons would be possible if greater detail were published on the study design (e.g. local and indoor sources, housing characteristics, etc.) and data quality (e.g., detection limits and percent of data above detection limits).
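
    A rough sketch of how correlation coefficients can be pooled with a random-effects model of the kind used above: Fisher z-transform each correlation, estimate the between-study variance with the DerSimonian-Laird method, combine with inverse-variance weights, and back-transform. The study values are invented for illustration and are not those of the synthesis.

        # Random-effects pooling of correlation coefficients (DerSimonian-Laird).
        import numpy as np

        r = np.array([0.55, 0.30, 0.45, 0.60, 0.25])   # study correlations (synthetic)
        n = np.array([120, 200, 90, 60, 150])          # study sample sizes (synthetic)

        z = np.arctanh(r)                   # Fisher z
        v = 1.0 / (n - 3)                   # within-study variance of z
        w = 1.0 / v
        z_fixed = np.sum(w * z) / np.sum(w)
        Q = np.sum(w * (z - z_fixed) ** 2)
        df = len(r) - 1
        tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_re = 1.0 / (v + tau2)
        z_re = np.sum(w_re * z) / np.sum(w_re)
        print("pooled r (random effects):", np.tanh(z_re))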

  18. SMC X-3: the closest ultraluminous X-ray source powered by a neutron star with non-dipole magnetic field

    NASA Astrophysics Data System (ADS)

    Tsygankov, S. S.; Doroshenko, V.; Lutovinov, A. A.; Mushtukov, A. A.; Poutanen, J.

    2017-09-01

    Aims: The magnetic field of accreting neutron stars determines their overall behavior including the maximum possible luminosity. Some models require an above-average magnetic field strength (≳10¹³ G) in order to explain the super-Eddington mass accretion rate in the recently discovered class of pulsating ultraluminous X-ray sources (ULX). The peak luminosity of SMC X-3 during its major outburst in 2016-2017 reached 2.5 × 10³⁹ erg s-1, comparable to that in ULXs, thus making this source the nearest ULX-pulsar. Determination of the magnetic field of SMC X-3 is the main goal of this paper. Methods: SMC X-3 belongs to the class of transient X-ray pulsars with Be optical companions, and exhibited a giant outburst in July 2016-March 2017. The source has been observed over the entire outburst with the Swift/XRT and Fermi/GBM telescopes, as well as the NuSTAR observatory. Collected data allowed us to estimate the magnetic field strength of the neutron star in SMC X-3 using several independent methods. Results: Spin evolution of the source during and between the outbursts, and the luminosity of the transition to the so-called propeller regime in the range of (0.3-7) × 10³⁵ erg s-1 imply a relatively weak dipole field of (1-5) × 10¹² G. On the other hand, there is also evidence for a much stronger field in the immediate vicinity of the neutron star surface. In particular, the transition from super- to sub-critical accretion regime associated with the cessation of the accretion column and the very high peak luminosity favor a field that is an order of magnitude stronger. This discrepancy makes SMC X-3 a good candidate for possessing significant non-dipolar components of the field, and an intermediate source between classical X-ray pulsars and accreting magnetars, which may constitute an appreciable fraction of the ULX population.

  19. Estimates of Radiation Effects on Cancer Risks in the Mayak Worker, Techa River and Atomic Bomb Survivor Studies.

    PubMed

    Preston, Dale L; Sokolnikov, Mikhail E; Krestinina, Lyudmila Yu; Stram, Daniel O

    2017-04-01

    For almost 50 y, the Life Span Study cohort of atomic bomb survivor studies has been the primary source of the quantitative estimates of cancer and non-cancer risks that form the basis of international radiation protection standards. However, the long-term follow-up and extensive individual dose reconstruction for the Russian Mayak worker cohort (MWC) and Techa River cohort (TRC) are providing quantitative information about radiation effects on cancer risks that complement the atomic bomb survivor-based risk estimates. The MWC, which includes ~26 000 men and women who began working at Mayak between 1948 and 1982, is the primary source for estimates of the effects of plutonium on cancer risks and also provides information on the effects of low-dose-rate external gamma exposures. The TRC consists of ~30 000 men and women of all ages who received low-dose-rate, low-dose exposures as a consequence of Mayak's release of radioactive material into the Techa River. The TRC data are of interest because the exposures are broadly similar to those experienced by populations exposed as a consequence of nuclear accidents such as Chernobyl. In this presentation, we describe the strengths and limitations of these three cohorts, outline and compare recent solid cancer and leukemia risk estimates, and discuss why information from the Mayak and Techa River studies might play a role in the development and refinement of the radiation risk estimates that form the basis for radiation protection standards. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Controlled and in situ target strengths of the jumbo squid Dosidicus gigas and identification of potential acoustic scattering sources.

    PubMed

    Benoit-Bird, Kelly J; Gilly, William F; Au, Whitlow W L; Mate, Bruce

    2008-03-01

    This study presents the first target strength measurements of Dosidicus gigas, a large squid that is a key predator, a significant prey, and the target of an important fishery. Target strength of live, tethered squid was related to mantle length with values standardized to the length squared of -62.0, -67.4, -67.9, and -67.6 dB at 38, 70, 120, and 200 kHz, respectively. There were relatively small differences in target strength between dorsal and anterior aspects and none between live and freshly dead squid. Potential scattering mechanisms in squid have been long debated. Here, the reproductive organs had little effect on squid target strength. These data support the hypothesis that the pen may be an important source of squid acoustic scattering. The beak, eyes, and arms, probably via the sucker rings, also play a role in acoustic scattering though their effects were small and frequency specific. An unexpected source of scattering was the cranium of the squid which provided a target strength nearly as high as that of the entire squid though the mechanism remains unclear. Our in situ measurements of the target strength of free-swimming squid support the use of the values presented here in D. gigas assessment studies.

  1. Sources of Cadmium Exposure Among Healthy Premenopausal Women

    PubMed Central

    Adams, Scott V.; Newcomb, Polly A.; Shafer, Martin M.; Atkinson, Charlotte; Aiello Bowles, Erin J.; Newton, Katherine M.; Lampe, Johanna W.

    2011-01-01

    Background Cadmium, a persistent and widespread environmental pollutant, has been associated with kidney function impairment and several diseases. Cigarettes are the dominant source of cadmium exposure among smokers; the primary source of cadmium in non-smokers is food. We investigated sources of cadmium exposure in a sample of healthy women. Methods In a cross-sectional study, 191 premenopausal women completed a health questionnaire and a food frequency questionnaire. The cadmium content of spot urine samples was measured with inductively-coupled plasma mass spectrometry and normalized to urine creatinine content. Multivariable linear regression was used to estimate the strength of association between smoking habits and, among non-smokers, usual foods consumed and urinary cadmium, adjusted for age, race, multivitamin and supplement use, education, estimated total energy intake, and parity. Results Geometric mean urine creatinine-normalized cadmium concentration (uCd) of women with any history of cigarette smoking was 0.43 μg/g (95% confidence interval (CI): 0.38–0.48 μg/g) and 0.30 μg/g (0.27–0.33 μg/g) among never-smokers, and increased with pack-years of smoking. Analysis of dietary data among women with no reported history of smoking suggested that regular consumption of eggs, hot cereals, organ meats, tofu, vegetable soups, leafy greens, green salad, and yams was associated with uCd. Consumption of tofu products showed the most robust association with uCd; each weekly serving of tofu was associated with a 22% (95% CI: 11–33%) increase in uCd. Thus, uCd was estimated to be 0.11 μg/g (95% CI: 0.06 – 0.15 μg/g ) higher among women who consumed any tofu than among those who consumed none. Conclusions Cigarette smoking is likely the most important source of cadmium exposure among smokers. Among non-smokers, consumption of specific foods, notably tofu, is associated with increased urine cadmium concentration. PMID:21333327

  2. Grinding damage assessment on four high-strength ceramics.

    PubMed

    Canneto, Jean-Jacques; Cattani-Lorente, Maria; Durual, Stéphane; Wiskott, Anselm H W; Scherrer, Susanne S

    2016-02-01

    The purpose of this study was to assess surface and subsurface damage on 4 CAD-CAM high-strength ceramics after grinding with diamond disks of 75 μm, 54 μm and 18 μm and to estimate strength losses based on damage crack sizes. The materials tested were: 3Y-TZP (Lava), dense Al2O3 (In-Ceram AL), alumina glass-infiltrated (In-Ceram ALUMINA) and alumina-zirconia glass-infiltrated (In-Ceram ZIRCONIA). Rectangular specimens with 2 mirror-polished orthogonal sides were bonded pairwise together prior to degrading the top polished surface with diamond disks of either 75 μm, 54 μm or 18 μm. The induced chip damage was evaluated on the bonded interface using SEM for chip depth measurements. Fracture mechanics were used to estimate fracture stresses based on average and maximum chip depths, considering these as critical flaws subjected to tension, and to calculate possible losses in strength compared to the manufacturer's data. 3Y-TZP was hardly affected by grinding chip damage viewed on the bonded interface. Average chip depths were 12.7±5.2 μm when grinding with the 75 μm diamond, inducing an estimated loss of 12% in strength compared to the manufacturer's reported flexural strength value of 1100 MPa. Dense alumina showed elongated chip cracks and suffered damage with an average chip depth of 48.2±16.3 μm after 75 μm grinding, representing an estimated loss in strength of 49%. Grinding with 54 μm created chips of 32.2±9.1 μm on average, representing a loss in strength of 23%. The alumina glass-infiltrated ceramic was exposed to chipping after 75 μm (mean chip size=62.4±19.3 μm) and 54 μm grinding (mean chip size=42.8±16.6 μm), with 38% and 25% estimated loss in strength, respectively. The alumina-zirconia glass-infiltrated ceramic was mainly affected by 75 μm grinding damage, with an average chip size of 56.8±15.1 μm, representing an estimated loss in strength of 34%. None of the four ceramics was exposed to critical chipping with 18 μm diamond grinding. Reshaping a ceramic framework post sintering should be avoided with final diamond grits of 75 μm as a general rule. For alumina and the glass-infiltrated alumina, using a 54 μm diamond still induces chip damage which may affect strength. Removal of such damage from a reshaped framework is mandatory, by sequentially using finer diamonds prior to the application of veneering ceramics, especially in critical areas such as margins, connectors and inner surfaces. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
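
    A generic fracture-mechanics estimate of the kind described above, treating the measured chip depth as a critical surface flaw with sigma_f = K_Ic / (Y * sqrt(pi * a)); the fracture toughness and geometry factor below are assumed illustrative values, not the paper's inputs.

        # Strength loss from an increase in critical flaw (chip) depth.
        import math

        def fracture_stress(K_Ic_MPa_sqrt_m, flaw_depth_um, Y=1.3):
            a = flaw_depth_um * 1e-6                                   # flaw depth in metres
            return K_Ic_MPa_sqrt_m / (Y * math.sqrt(math.pi * a))      # MPa

        sigma_as_ground = fracture_stress(4.0, 48.2)   # e.g. alumina-like toughness, 75 um grinding chip
        sigma_baseline = fracture_stress(4.0, 12.0)    # assumed intrinsic flaw size
        print(f"estimated strength loss: {1 - sigma_as_ground / sigma_baseline:.0%}")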

  3. Ultra-High Resolution Observations Of Selected Blazars

    NASA Astrophysics Data System (ADS)

    Hodgson, Jeffrey A.

    2015-01-01

    Active Galactic Nuclei are the luminous centres of active galaxies that produce powerful relativistic jets from central super massive black holes (SMBH). When these jets are oriented towards the observer's line-of-sight, they become very bright, very variable and very energetic. These sources are known as blazars and Very Long Baseline Interferometry (VLBI) provides a direct means of observing into the heart of these objects. VLBI performed at 3 mm with the Global mm-VLBI Array (GMVA) and 7 mm VLBI performed with the Very Long Baseline Array (VLBA), allows some of the highest angular resolution images of blazars to be produced. In this thesis, we present the first results of an ongoing monitoring program of blazars known to emit at γ-ray energies. The physical processes that produce these jets and the γ-ray emission are still not well known. The jets are thought to be produced by converting gravitational energy around the black hole into relativistic particles that are accelerated away at near the speed of light. However, the exact mechanisms for this and the role that magnetic fields play is not fully clear. Similarly, γ-rays have been long known to have been emitted from blazars and that their production is often related to the up-scattering of synchrotron radiation from the jet. However, the origin of seed photons for the up-scattering (either from within the jet itself or from an external photon field) and the location of the γ-ray emission regions has remained inconclusive. In this thesis, we aim to describe the likely location of γ-ray emission in jets, the physical structure of blazar jets, the location of the VLBI features relative to the origin of the jet and the nature of the magnetic field, both of the VLBI scale jet and in the region where the jet is produced. We present five sources that have been monitored at 3 mm using the GMVA from 2008 until 2012. These sources have been analysed with near-in-time 7 mm maps from the Very Long Baseline Array (VLBA), γ-ray light curves from the Fermi/LAT space telescope and cm to mm-wave total-intensity light curves. In one source, OJ 287, the source has additionally been analysed with monthly imaging at 7 mm with the VLBA and near-in-time 2 cm VLBI maps. We use these resources to analyse high angular resolution structural and spectral changes and see if they correlate with flaring (both radio and γ-ray) activity and with VLBI component ejections. By spectrally decomposing sources, we can determine the spatially resolved magnetic field structure in the jets at the highest yet performed resolutions and at frequencies that are near or above the turnover frequency for synchrotron self-absorption (SSA). We compute the magnetic field estimates from SSA theory and by assuming equipartition between magnetic fields and relativistic particle energies. All sources analysed exhibit downstream quasi-stationary features which sometimes exhibit higher brightness temperatures and flux density variability than the VLBI "core", which we interpret as being recollimation or oblique shocks. We find that γ-ray flaring, mm-wave radio flaring and changes in opacity from optically thick to optically thin, is in many cases consistent with component ejections past both the VLBI "core" and these quasi-stationary downstream features. We find decreasing apparent brightness temperatures and Doppler factors as a function of increased "core" separation, which is interpreted as consistent with a slowly accelerating jet over the de-projected inner ˜10-20 pc. 
Assuming equipartition between magnetic energy and relativistic particle energy, the magnetic field strengths within the jets at these scales are, on average, between B ˜ 0.3 - 0.9 G, with the highest strengths found within the VLBI "core". From the observed gradient in magnetic field strengths, we can place the mm-wave "core" ˜1-3 pc downstream of the base of the jet. Additionally, we estimate the magnetic field to be Bapex ˜ 3000 - 18000 G at the base of the jet. We computed theoretical estimates based on jet production under magnetically arrested disks (MAD) and found our estimates to be consistent. In the BL Lac source OJ 287, we included monthly 7 mm and near-in-time 2 cm VLBA maps to provide full kinematics and increased spectral coverage. Following a previously reported radical change in inner-jet PA of ˜100°, we find unusually discrepant PAs compared with the previous jet direction that follow very different trajectories. The source exhibits a downstream quasi-stationary feature that at times has higher brightness temperatures than the "core". The source also exhibited a large change in apparent component speeds as compared with previous epochs, which we propose could be due to changes in jet pressure causing changes in the location of downstream recollimation or oblique shocks and hence their line-of-sight viewing angle. The addition of 2 cm VLBA data allows for a comparison of magnetic fields derived from SSA and equipartition. The magnetic field estimates are consistent within 20%, with BSSA ≥ 1.6 G and Bequi ≥ 1.2 G in the "core" and BSSA ≤ 0.4 G and Bequi ≤ 0.3 G in the stationary feature. Gamma-ray emission appears to originate in the "core" and the stationary feature. The decrease in magnetic field strengths places the mm-wave "core" downstream of the jet base by ≤6 pc and likely outside of the broad line region (BLR). This, combined with the results in other sources, is consistent with γ-rays being produced in the vicinity of the VLBI "core" or in further downstream stationary features, which are likely over a parsec downstream of the central black hole, favouring the scenario of photons being up-scattered within the relativistic jet.

  4. Dempster-Shafer theory applied to regulatory decision process for selecting safer alternatives to toxic chemicals in consumer products.

    PubMed

    Park, Sung Jin; Ogunseitan, Oladele A; Lejano, Raul P

    2014-01-01

    Regulatory agencies often face a dilemma when regulating chemicals in consumer products, namely, that of making decisions in the face of multiple, and sometimes conflicting, lines of evidence. We present an integrative approach for dealing with uncertainty and multiple pieces of evidence in toxics regulation. The integrative risk analytic framework is grounded in the Dempster-Shafer (D-S) theory that allows the analyst to combine multiple pieces of evidence and judgments from independent sources of information. We apply the integrative approach to the comparative risk assessment of bisphenol-A (BPA)-based polycarbonate and the functionally equivalent alternative, Eastman Tritan copolyester (ETC). Our results show that according to cumulative empirical evidence, the estimated probability of toxicity of BPA is 0.034, whereas the toxicity probability for ETC is 0.097. However, when we combine extant evidence with strength of confidence in the source (or expert judgment), we are guided by a richer interval measure, (Bel(t), Pl(t)). With the D-S derived measure, we arrive at various intervals for BPA, with the low-range estimate at (0.034, 0.250), and (0.097, 0.688) for ETC. These new measures allow a reasonable basis for comparison and a justifiable procedure for decision making that takes advantage of multiple sources of evidence. Through the application of D-S theory to toxicity risk assessment, we show how a multiplicity of scientific evidence can be converted into a unified risk estimate, and how this information can be effectively used for comparative assessments to select potentially less toxic alternative chemicals. © 2013 SETAC.
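
    A minimal sketch of Dempster's rule of combination over a two-element frame {toxic, not toxic}: two independent mass functions are combined, and belief and plausibility then bound the toxicity probability, giving an interval of the (Bel, Pl) type quoted above. The mass values are illustrative, not the study's.

        # Dempster's rule over the frame {'T' (toxic), 'N' (not toxic)}.
        def combine(m1, m2):
            """m1, m2: dicts over frozensets of {'T','N'}, each summing to 1."""
            combined, conflict = {}, 0.0
            for a, pa in m1.items():
                for b, pb in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + pa * pb
                    else:
                        conflict += pa * pb
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        T, N, TN = frozenset('T'), frozenset('N'), frozenset('TN')
        m_study = {T: 0.10, N: 0.60, TN: 0.30}    # empirical evidence (illustrative)
        m_expert = {T: 0.20, N: 0.50, TN: 0.30}   # expert judgment (illustrative)

        m = combine(m_study, m_expert)
        belief_toxic = m.get(T, 0.0)
        plausibility_toxic = sum(v for k, v in m.items() if T & k)
        print(belief_toxic, plausibility_toxic)   # (Bel, Pl) interval for "toxic"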

  5. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
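
    A small sketch of a Cimmino-type feasibility iteration for the source-strength problem described above: given a fluence kernel K (detector points by sources) and prescribed lower and upper fluence bounds, the update averages projections onto the violated half-space constraints while keeping the strengths non-negative. The kernel and bounds are random placeholders, not clinical data.

        # Cimmino iteration: average projections onto violated half-spaces.
        import numpy as np

        def cimmino(K, lower, upper, n_iter=500, relax=1.0):
            m, n = K.shape
            w = np.zeros(n)
            row_norm_sq = (K ** 2).sum(axis=1)
            for _ in range(n_iter):
                f = K @ w
                resid = np.where(f < lower, lower - f, np.where(f > upper, upper - f, 0.0))
                w += relax * (K.T @ (resid / row_norm_sq)) / m
                w = np.maximum(w, 0.0)        # source strengths must stay non-negative
            return w

        rng = np.random.default_rng(4)
        K = rng.uniform(0.1, 1.0, size=(200, 12))
        w = cimmino(K, lower=95.0, upper=110.0)
        print((K @ w).min(), (K @ w).max())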

  6. Potential applicability of stress wave velocity method on pavement base materials as a non-destructive testing technique

    NASA Astrophysics Data System (ADS)

    Mahedi, Masrur

    Aggregates derived from natural sources have traditionally been used as pavement base materials. In recent times, however, the extraction of these natural aggregates has become more labor intensive and costly due to resource depletion and environmental concerns. Thus, the use of recycled aggregates to supplement natural aggregates is increasing considerably in pavement construction. Use of recycled aggregates such as recycled crushed concrete (RCA) and recycled asphalt pavement (RAP) reduces the rate of natural resource depletion, construction debris and cost. Although recycled aggregates could be used as a viable alternative to conventional base materials, strength characteristics and product variability limit their utility to a great extent. Hence, their applicability needs to be evaluated extensively based on strength, stiffness and cost factors. For such extensive evaluation, however, traditionally practiced test methods have proven unreasonable in terms of time, cost, reliability and applicability. Rapid non-destructive methods, on the other hand, have the potential to be less time consuming and inexpensive while yielding low variability in test results, thereby improving the reliability of the estimated pavement performance. In this research work, the experimental program was designed to assess the potential application of the stress wave velocity method as a non-destructive test for evaluating recycled base materials. Different combinations of cement-treated recycled asphalt pavement (RAP) and recycled crushed concrete (RCA) were used to evaluate the applicability of the stress wave velocity method. It was found that the stress wave velocity method is excellent for characterizing the strength and stiffness properties of cement-treated base materials. Statistical models based on P-wave velocity were derived for predicting the modulus of elasticity and compressive strength of different combinations of cement-treated RAP, Grade-1 and Grade-2 materials. Two-, three- and four-parameter models were also developed to characterize the resilient modulus response. It is anticipated that the derived correlations can be useful in estimating the strength and stiffness response of cement-treated base materials with a satisfactory level of confidence, provided the P-wave velocity remains within the range of 500 ft/sec to 1500 ft/sec.
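
    As an illustration of the kind of correlation the abstract describes, the sketch below fits a simple power-law relation between P-wave velocity and compressive strength by least squares in log space. The data points and the power-law form are assumptions for demonstration only; they are not the study's measurements or its fitted models.

    ```python
    import numpy as np

    # Hypothetical (velocity [ft/s], compressive strength [psi]) pairs -- not the
    # study's data -- used only to illustrate fitting a power-law correlation
    # strength = a * Vp**b over the 500-1500 ft/s range mentioned in the abstract.
    vp = np.array([600., 800., 1000., 1200., 1400.])
    qu = np.array([150., 260., 400., 560., 740.])

    # Linear least squares in log space: log(qu) = log(a) + b * log(vp)
    b, log_a = np.polyfit(np.log(vp), np.log(qu), 1)
    a = np.exp(log_a)
    print(f"strength ~ {a:.3g} * Vp^{b:.2f}")

    # Predict strength at a new velocity (within the calibrated range only).
    print(a * 1100.0 ** b)
    ```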

  7. Comments on extracting the resonance strength parameter from yield data

    DOE PAGES

    Croft, Stephen; Favalli, Andrea

    2015-06-23

    The 19F(α,n) reaction is the focus of ongoing research, in part because it is an important source of neutrons in the nuclear fuel cycle which can be exploited to assay nuclear materials, especially uranium in the form of UF6. At the present time there remains considerable uncertainty (of the order of ±20%) in the thick-target, integrated-over-angle (α,n) yield from 19F (100% natural abundance) and its compounds. An important thin-target cross-section measurement is that of Wrean and Kavanagh, who explored the region from below threshold (2.36 MeV) to approximately 3.1 MeV with fine energy resolution. Integration of their cross-section data over the slowing-down history of a stopping α-particle allows the thick-target yield to be calculated for incident energies up to 3.1 MeV. This trend can then be combined with data from other sources to obtain a thick-target yield curve over the wider range of interest to the fuel cycle (roughly threshold to 10 MeV, to include all relevant α-emitters). To estimate the thickness of the CaF2 target they used, Wrean and Kavanagh separately measured the integrated yield of the 6.129 MeV γ-rays from the resonance at 340.5 keV (laboratory proton kinetic energy) in the 19F(p,αγ) reaction. To interpret the data they adopted a resonance strength parameter of (22.3 ± 0.8) eV based on a determination by Becker et al. This value and its uncertainty directly affect the thickness estimate and the extracted (α,n) cross-section values. In their citation of Becker et al.'s work, Wrean and Kavanagh comment that they did not make use of an alternative value of (23.7 ± 1.0) eV reported by Croft because they were unable to reproduce the value from the data given in that paper. The value they calculated for the resonance strength from the thick-target yield given by Croft was 21.4 eV. The purpose of this communication is to revisit the paper by Croft published in this journal and specifically to explain the origin of the reported resonance strength. Fortunately, the original notes spanning the period 12 January 1988 to 16 January 1990 were available to consult. In hindsight there is certainly a case of excessive brevity to rectify. In essence, the step requiring explanation is how to compute the resonance strength, ωγ, from the reported thick-target resonance yield Y.
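
    For orientation, the textbook narrow-resonance relation between a thick-target resonance yield and the resonance strength takes the form below; the exact expression and stopping-power convention used by Croft may differ, so this is a reference point rather than a reproduction of that derivation.

    ```latex
    % Textbook thick-target yield of a narrow resonance (lab frame); inverting it
    % gives the resonance strength from the measured plateau yield:
    \[
      Y_{\max} \;=\; \frac{\lambda_r^{2}}{2}\,\frac{m + M}{M}\,
                     \frac{\omega\gamma}{\varepsilon_r}
      \quad\Longrightarrow\quad
      \omega\gamma \;=\; \frac{2\,M\,\varepsilon_r}{(m + M)\,\lambda_r^{2}}\,Y_{\max}
    \]
    ```

    Here λ_r is the projectile de Broglie wavelength at the laboratory resonance energy, m and M are the projectile and target masses, ε_r is the effective stopping power (per active target atom) at the resonance energy, and Y_max is the plateau yield per incident particle.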

  8. Prediction and Estimation of Scaffold Strength with different pore size

    NASA Astrophysics Data System (ADS)

    Muthu, P.; Mishra, Shubhanvit; Sri Sai Shilpa, R.; Veerendranath, B.; Latha, S.

    2018-04-01

    This paper emphasizes the significance of predicting and estimating the mechanical strength of 3D functional scaffolds before the manufacturing process. Prior evaluation of the mechanical strength and structural properties of the scaffold reduces fabrication cost and eases the design process. Detailed analysis and investigation of various mechanical properties, including shear stress equivalence, have helped to estimate the effect of porosity and pore size on the functionality of the scaffold. The influence of variation in porosity was examined computationally via finite element analysis (FEA) using the ANSYS application software. The results indicate that the evolutionary method holds adequate promise for regulating and optimizing the intricate engineering design process.

  9. Effect of Increased Intensity of Physiotherapy on Patient Outcomes After Stroke: An Economic Literature Review and Cost-Effectiveness Analysis

    PubMed Central

    Chan, B

    2015-01-01

    Background Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-term Care. Data Sources The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime timeframe with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). Conclusions Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
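
    A minimal sketch of a two-state (alive after stroke / dead) Markov cohort model with a lifetime horizon and 5% annual discounting is shown below; the transition probability, costs, and utilities are hypothetical placeholders, not the Ontario inputs used in the analysis.

    ```python
    import numpy as np

    def markov_ce(p_death, cost_year, utility, upfront_cost, horizon=40, disc=0.05):
        """Discounted lifetime cost and QALYs for a 2-state (alive/dead) cohort model.
        All inputs are illustrative placeholders, not the study's Ontario values."""
        alive = 1.0
        cost, qaly = upfront_cost, 0.0
        for year in range(1, horizon + 1):
            alive *= (1.0 - p_death)             # fraction of cohort surviving this cycle
            df = 1.0 / (1.0 + disc) ** year      # 5% annual discount factor
            cost += alive * cost_year * df
            qaly += alive * utility * df
        return cost, qaly

    # Standard vs. higher-intensity physiotherapy (hypothetical inputs).
    c0, q0 = markov_ce(p_death=0.06, cost_year=12000., utility=0.60, upfront_cost=20000.)
    c1, q1 = markov_ce(p_death=0.06, cost_year=11000., utility=0.65, upfront_cost=23000.)

    icer = (c1 - c0) / (q1 - q0)   # incremental cost per QALY gained
    print(round(c1 - c0), round(q1 - q0, 2), round(icer))
    ```

    With these placeholder inputs the higher-intensity strategy is both cheaper and more effective over a lifetime (a negative incremental cost), which mirrors the qualitative finding of the abstract but carries the same caveat about input uncertainty.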

  10. Galactic X-ray emission from pulsars

    NASA Technical Reports Server (NTRS)

    Harding, A. K.

    1981-01-01

    The contribution of pulsars to the gamma-ray flux from the galactic plane is examined using data from the most recent pulsar surveys. It is assumed that pulsar gamma-rays are produced by curvature radiation from relativistic particles above the polar cap and attenuated by pair production in the strong magnetic and electric fields. Assuming that all pulsars produce gamma-rays in this way, their luminosities can be predicted as a function of period and magnetic field strength. Using the distribution of pulsars in the galaxy as determined from data on 328 pulsars detected in three surveys, the local gamma-ray production spectrum, the longitude profile, and the latitude profile of pulsar gamma-ray flux are calculated. The largest sources of uncertainty in the size of the pulsar contribution are the value of the mean interstellar electron density, the turnover in the pulsar radio luminosity function, and the average pulsar magnetic field strength. A present estimate is that pulsars contribute from 15 to 20 % of the total flux of gamma-rays from the galactic plane.

  11. Magnetic studies on Shergotty and other SNC meteorites

    NASA Technical Reports Server (NTRS)

    Cisowski, S. M.

    1986-01-01

    The results of a study of basic magnetic properties of meteorites within the SNC group, including the four known shergottites and two nakhlites, are presented. An estimate is made of the strength of the magnetic field which produced the remanent magnetization of the Shergotty meteorite, for the purpose of constraining the choices for the parent body of these SNC meteorites. Remanence measurements in several subsamples of Shergotty and Zagami meteorites reveal a large variation in intensity that does not seem to be related to the abundance of remanence carriers. The other meteorites carry only weak remanence, suggesting weak magnetizing fields as the source of their magnetic signal. A paleointensity experiment on a weakly magnetized subsample of Shergotty revealed a low temperature component of magnetization acquired in a field of 2000 gammas, and a high temperature component reflecting a paleofield strength of between 250 and 1000 gammas. The weak field environment that these meteorites seem to reflect is consistent with either a Martian or asteroidal origin, but inconsistent with a terrestrial origin.

  12. Reconciling Models of Luminous Blazars with Magnetic Fluxes Determined by Radio Core-shift Measurements

    NASA Astrophysics Data System (ADS)

    Nalewajko, Krzysztof; Sikora, Marek; Begelman, Mitchell C.

    2014-11-01

    Estimates of magnetic field strength in relativistic jets of active galactic nuclei, obtained by measuring the frequency-dependent radio core location, imply that the total magnetic fluxes in those jets are consistent with the predictions of the magnetically arrested disk (MAD) scenario of jet formation. On the other hand, the magnetic field strength determines the luminosity of the synchrotron radiation, which forms the low-energy bump of the observed blazar spectral energy distribution (SED). The SEDs of the most powerful blazars are strongly dominated by the high-energy bump, which is most likely due to the external radiation Compton mechanism. This high Compton dominance may be difficult to reconcile with the MAD scenario, unless (1) the geometry of external radiation sources (broad-line region, hot-dust torus) is quasi-spherical rather than flat, or (2) most gamma-ray radiation is produced in jet regions of low magnetization, e.g., in magnetic reconnection layers or in fast jet spines.

  13. RECONCILING MODELS OF LUMINOUS BLAZARS WITH MAGNETIC FLUXES DETERMINED BY RADIO CORE-SHIFT MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nalewajko, Krzysztof; Begelman, Mitchell C.; Sikora, Marek, E-mail: knalew@stanford.edu

    2014-11-20

    Estimates of magnetic field strength in relativistic jets of active galactic nuclei, obtained by measuring the frequency-dependent radio core location, imply that the total magnetic fluxes in those jets are consistent with the predictions of the magnetically arrested disk (MAD) scenario of jet formation. On the other hand, the magnetic field strength determines the luminosity of the synchrotron radiation, which forms the low-energy bump of the observed blazar spectral energy distribution (SED). The SEDs of the most powerful blazars are strongly dominated by the high-energy bump, which is most likely due to the external radiation Compton mechanism. This high Compton dominance may be difficult to reconcile with the MAD scenario, unless (1) the geometry of external radiation sources (broad-line region, hot-dust torus) is quasi-spherical rather than flat, or (2) most gamma-ray radiation is produced in jet regions of low magnetization, e.g., in magnetic reconnection layers or in fast jet spines.

  14. A practical tool for maximal information coefficient analysis.

    PubMed

    Albanese, Davide; Riccadonna, Samantha; Donati, Claudio; Franceschi, Pietro

    2018-04-01

    The ability to find complex associations in large omics datasets, assess their significance, and prioritize them according to their strength can be of great help in the data exploration phase. Mutual information-based measures of association are particularly promising, especially after the recent introduction of the TICe and MICe estimators, which combine computational efficiency with superior bias/variance properties. An open-source software implementation of these two measures providing a complete procedure to test their significance would be extremely useful. Here, we present MICtools, a comprehensive and effective pipeline that combines TICe and MICe into a multistep procedure that allows the identification of relationships of various degrees of complexity. MICtools calculates their strength and assesses statistical significance using a permutation-based strategy. The performance of the proposed approach is assessed by an extensive investigation in synthetic datasets, and an example of a potential application to a metagenomic dataset is also illustrated. We show that MICtools, combining TICe and MICe, is able to highlight associations that would not be captured by conventional strategies.
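
    The permutation-based significance strategy can be illustrated with the short sketch below, where absolute Pearson correlation stands in for the TICe/MICe statistic (which in MICtools would come from a maximal-information-coefficient implementation); the data and the choice of statistic are illustrative assumptions, not the MICtools code.

    ```python
    import numpy as np

    def perm_pvalue(x, y, stat=lambda a, b: abs(np.corrcoef(a, b)[0, 1]),
                    n_perm=999, rng=np.random.default_rng(0)):
        """Permutation p-value for an association statistic between x and y.
        `stat` is a placeholder (absolute Pearson r); in MICtools it would be a
        TICe/MICe estimate from a MIC implementation."""
        observed = stat(x, y)
        null = np.array([stat(x, rng.permutation(y)) for _ in range(n_perm)])
        # Add-one correction so the p-value is never exactly zero.
        return observed, (1 + np.sum(null >= observed)) / (n_perm + 1)

    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    y = 0.5 * x + 0.8 * rng.normal(size=200)   # weak linear association for the demo
    print(perm_pvalue(x, y))
    ```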

  15. Shear Behavior Models of Steel Fiber Reinforced Concrete Beams Modifying Softened Truss Model Approaches

    PubMed Central

    Hwang, Jin-Ha; Lee, Deuck Hang; Ju, Hyunjin; Kim, Kang Su; Seo, Soo-Yeon; Kang, Joo-Won

    2013-01-01

    Recognizing that steel fibers can supplement the brittle tensile characteristics of concrete, many studies have been conducted on the shear performance of steel fiber reinforced concrete (SFRC) members. However, previous studies mostly focused on shear strength and proposed empirical shear strength equations based on their experimental results. Thus, this study attempts to estimate the strains and stresses in steel fibers by considering the detailed characteristics of the steel fibers in SFRC members, from which a more accurate estimation of the shear behavior and strength of SFRC members is possible and the failure mode of the steel fibers can also be identified. Four shear behavior models for SFRC members have been proposed, modified from the softened truss models for reinforced concrete members, and they can estimate the contribution of steel fibers to the total shear strength of the SFRC member. The performance of all the models proposed in this study was also evaluated against a large number of test results. The contribution of steel fibers to the shear strength varied from 5% to 50% according to their amount, and the optimum volume fraction of steel fibers was estimated to be 1%–1.5% in terms of shear performance. PMID:28788364

  16. Global model for the lithospheric strength and effective elastic thickness

    NASA Astrophysics Data System (ADS)

    Tesauro, Magdala; Kaban, Mikhail K.; Cloetingh, Sierd A. P. L.

    2013-08-01

    Global distributions of the strength and effective elastic thickness (Te) of the lithosphere are estimated using physical parameters from recent crustal and lithospheric models. For the Te estimation we apply a new approach, which provides a possibility to take into account variations of Young's modulus (E) within the lithosphere. In view of the large uncertainties affecting strength estimates, we evaluate global strength and Te distributions for possible end-member 'hard' (HRM) and 'soft' (SRM) rheology models of the continental crust. Temperature within the lithosphere has been estimated using a recent tomography model of Ritsema et al. (2011), which has much higher horizontal resolution than previous global models. Most of the strength is localized in the crust for the HRM and in the mantle for the SRM. These results contribute to the long-standing debate on the applicability of the "crème brûlée" or "jelly-sandwich" model for the lithosphere structure. Changing from the SRM to the HRM turns most of the continental areas from the totally decoupled mode to the fully coupled mode of the lithospheric layers. However, in the areas characterized by a high thermal regime and thick crust, the layers remain decoupled even for the HRM. At the same time, for the inner part of the cratons the lithospheric layers are coupled in both models. Therefore, rheological variations lead to large changes in the integrated strength and Te distribution in the regions characterized by intermediate thermal conditions. In these areas temperature uncertainties have a greater effect, since this parameter principally determines rheological behavior. Comparison of the Te estimates for both models with those determined from flexural loading and spectral analysis shows that the 'hard' rheology is likely applicable for cratonic areas, whereas the 'soft' rheology is more representative for young orogens.

  17. Gravitational wave searches using the DSN (Deep Space Network)

    NASA Technical Reports Server (NTRS)

    Nelson, S. J.; Armstrong, J. W.

    1988-01-01

    The Deep Space Network Doppler spacecraft link is currently the only method available for broadband gravitational wave searches in the 0.01 to 0.001 Hz frequency range. The DSN's role in the worldwide search for gravitational waves is described by first summarizing from the literature current theoretical estimates of gravitational wave strengths and time scales from various astrophysical sources. Current and future detection schemes for ground based and space based detectors are then discussed. Past, present, and future planned or proposed gravitational wave experiments using DSN Doppler tracking are described. Lastly, some major technical challenges to improve gravitational wave sensitivities using the DSN are discussed.

  18. Women with previous fragility fractures can be classified based on bone microarchitecture and finite element analysis measured with HR-pQCT.

    PubMed

    Nishiyama, K K; Macdonald, H M; Hanley, D A; Boyd, S K

    2013-05-01

    High-resolution peripheral quantitative computed tomography (HR-pQCT) measurements of distal radius and tibia bone microarchitecture and finite element (FE) estimates of bone strength performed well at classifying postmenopausal women with and without previous fracture. The HR-pQCT measurements outperformed dual energy x-ray absorptiometry (DXA) at classifying forearm fractures and fractures at other skeletal sites. Areal bone mineral density (aBMD) is the primary measurement used to assess osteoporosis and fracture risk; however, it does not take into account bone microarchitecture, which also contributes to bone strength. Thus, our objective was to determine if bone microarchitecture measured with HR-pQCT and FE estimates of bone strength could classify women with and without low-trauma fractures. We used HR-pQCT to assess bone microarchitecture at the distal radius and tibia in 44 postmenopausal women with a history of low-trauma fracture and 88 age-matched controls from the Calgary cohort of the Canadian Multicentre Osteoporosis Study (CaMos) study. We estimated bone strength using FE analysis and simulated distal radius aBMD from the HR-pQCT scans. Femoral neck (FN) and lumbar spine (LS) aBMD were measured with DXA. We used support vector machines (SVM) and a tenfold cross-validation to classify the fracture cases and controls and to determine accuracy. The combination of HR-pQCT measures of microarchitecture and FE estimates of bone strength had the highest area under the receiver operating characteristic (ROC) curve of 0.82 when classifying forearm fractures compared to an area under the curve (AUC) of 0.71 from DXA-derived aBMD of the forearm and 0.63 from FN and spine DXA. For all fracture types, FE estimates of bone strength at the forearm alone resulted in an AUC of 0.69. Models based on HR-pQCT measurements of bone microarchitecture and estimates of bone strength performed better than DXA-derived aBMD at classifying women with and without prior fracture. In future, these models may improve prediction of individuals at risk of low-trauma fracture.
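
    A minimal scikit-learn sketch of the classification workflow described (support vector machine, tenfold cross-validation, ROC AUC) is shown below; the synthetic features merely stand in for the HR-pQCT microarchitecture and FE strength measurements, and the pipeline details are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import StratifiedKFold, cross_val_predict
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for HR-pQCT microarchitecture + FE strength features
    # (44 fracture cases vs 88 controls, as in the study; the values are simulated).
    X, y = make_classification(n_samples=132, n_features=10, n_informative=4,
                               weights=[2 / 3, 1 / 3], random_state=0)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

    # Out-of-fold predicted probabilities -> cross-validated ROC AUC.
    proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
    print("10-fold CV AUC:", round(roc_auc_score(y, proba), 2))
    ```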

  19. Earthquake source parameters of repeating microearthquakes at Parkfield, CA, determined using the SAFOD Pilot Hole seismic array

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Ellsworth, W. L.

    2005-12-01

    We determined source parameters of repeating microearthquakes occurring at Parkfield, CA, using the SAFOD Pilot Hole seismic array. To estimate reliable source parameters, we used the empirical Green's function (EGF) deconvolution method, which removes attenuation effects and site responses by taking the spectral amplitude ratio between the spectra of two colocated events. For earthquakes during the period from December 2002 to October 2003 whose S-P time differences are less than 1 s, we detected 34 events that were classified into 14 groups. Moment magnitudes range from -0.3 to 2.1. These data were recorded at a sampling rate of 2 kHz. The dataset includes two SAFOD target repeating earthquakes which occurred in October 2003. In general, the deconvolution procedure is an unstable process, especially at higher frequencies, because small location differences have profound effects on the spectral ratio. This leads to large uncertainties in the estimation of corner frequencies. According to Chavarria et al. [2003], the wavetrain recorded in the Pilot Hole is dominated by reflections and conversions rather than random coda waves. We therefore expect that the spectral ratios of the waves between the P and S arrivals will also reflect the source, as will the waves following the S arrival. We compared spectral ratios calculated from the direct waves with those from other parts of the wavetrain, and confirmed that they showed similar shapes. It is therefore possible to obtain a more robust measure of the spectral ratio by stacking ratios calculated from shorter moving windows taken along the record following the direct waves. We further stacked all ratios obtained from each level of the array. The stacked spectral ratios were inverted for corner frequencies assuming the omega-square model. We determined static stress drops from those corner frequencies assuming a circular crack model. We also calculated apparent stresses for each event by considering frequency-dependent attenuation, where the average difference between the observed and the calculated omega-square model was assumed to represent the path and site effects. The estimated static stress drops are high, mostly in excess of 10 MPa and with some above 50 MPa. It should be noted that the highest value is near the strength of the rock. Apparent stresses range from 0.4 to 20 MPa, at the high end of the range of those reported by other studies. According to an asperity model [e.g., McGarr, 1981; Johnson & Nadeau, 2002], a small strong asperity patch is surrounded by a much weaker fault that creeps under the influence of tectonic stress. When the asperity patch ruptures, the surrounding area slips as it is dynamically loaded by the stress release of the asperity patch. If so, our estimated source dimensions seem to correspond to the size of the area surrounding the asperity patch, and the stress drops of the asperity might be much higher than our estimates. Although this is consistent with the hypothesis of Nadeau and Johnson [1998], it is unlikely that the stress drops exceed the strength of the rock. We should re-examine the asperity model based on the results obtained in this study.
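
    Static stress drops of the kind reported here are commonly obtained from corner frequencies via a circular-crack model; a generic form of that conversion is given below. The rupture-model constant k depends on the assumed source model, so this is not necessarily the exact parameterization used in the study.

    ```latex
    % Circular-crack relations for converting a corner frequency f_c into a source
    % radius r and a static stress drop (the constant k is model dependent, e.g.
    % k ~ 0.37 for a Brune-type S-wave model; not necessarily the authors' choice):
    \[
      r \;=\; \frac{k\,\beta}{f_c},
      \qquad
      \Delta\sigma \;=\; \frac{7}{16}\,\frac{M_0}{r^{3}}
    \]
    ```

    Here β is the shear-wave speed near the source and M0 is the seismic moment.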

  20. Orthodontic brackets removal under shear and tensile bond strength resistance tests - a comparative test between light sources

    NASA Astrophysics Data System (ADS)

    Silva, P. C. G.; Porto-Neto, S. T.; Lizarelli, R. F. Z.; Bagnato, V. S.

    2008-03-01

    We investigated whether a new LED system delivers sufficient energy to promote adequate shear and tensile bond strength resistance under standardized tests. LEDs emitting at 470 ± 10 nm can be used to photocure composite during bracket fixation. Advantages in resistance to tensile and shear bond strength when these systems are used are necessary to justify their clinical use. Forty-eight extracted human premolars and two light sources were selected: a halogen lamp and an LED system. Premolar brackets were bonded with composite resin. Samples were submitted to standardized tests. A comparison between the light sources under the shear bond strength test gave similar results; however, the tensile bond test showed distinct results: a statistical difference at the 1% level between exposure times (40 and 60 seconds) and an interaction between light source and exposure time. The best result was obtained with the halogen lamp at 60 seconds, even during re-bonding; however, the LED system can be used for bonding and re-bonding brackets if its power density can be increased.


  1. What is preexisting strength? Predicting free association probabilities, similarity ratings, and cued recall probabilities.

    PubMed

    Nelson, Douglas L; Dyrdal, Gunvor M; Goodmon, Leilani B

    2005-08-01

    Measuring lexical knowledge poses a challenge to the study of the influence of preexisting knowledge on the retrieval of new memories. Many tasks focus on word pairs, but words are embedded in associative networks, so how should preexisting pair strength be measured? It has been measured by free association, similarity ratings, and co-occurrence statistics. Researchers interpret free association response probabilities as unbiased estimates of forward cue-to-target strength. In Study 1, analyses of large free association and extralist cued recall databases indicate that this interpretation is incorrect. Competitor and backward strengths bias free association probabilities, and as with other recall tasks, preexisting strength is described by a ratio rule. In Study 2, associative similarity ratings are predicted by forward and backward, but not by competitor, strength. Preexisting strength is not a unitary construct, because its measurement varies with method. Furthermore, free association probabilities predict extralist cued recall better than do ratings and co-occurrence statistics. The measure that most closely matches the criterion task may provide the best estimate of the identity of preexisting strength.

  2. Respiratory Muscle Strength Predicts Decline in Mobility in Older Persons

    PubMed Central

    Buchman, A.S.; Boyle, P.A.; Wilson, R.S.; Leurgans, S.; Shah, R.C.; Bennett, D.A.

    2008-01-01

    Objectives To test the hypothesis that respiratory muscle strength is associated with the rate of change in mobility even after controlling for leg strength and physical activity. Methods Prospective study of 890 ambulatory older persons without dementia who underwent annual clinical evaluations to examine the rate of change in mobility over time. Results In a linear mixed-effect model adjusted for age, sex, and education, mobility declined about 0.12 unit/year, and higher levels of respiratory muscle strength were associated with a slower rate of mobility decline (estimate 0.043, SE 0.012, p < 0.001). Respiratory muscle strength remained associated with the rate of change in mobility even after controlling for lower extremity strength (estimate 0.036, SE 0.012, p = 0.004). In a model that included terms for respiratory muscle strength, lower extremity strength and physical activity together, all three were independent predictors of mobility decline in older persons. These associations remained significant even after controlling for body composition, global cognition, the development of dementia, parkinsonian signs, possible pulmonary disease, smoking, joint pain and chronic diseases. Conclusion Respiratory muscle strength is associated with mobility decline in older persons independent of lower extremity strength and physical activity. Clinical interventions to improve respiratory muscle strength may decrease the burden of mobility impairment in the elderly. PMID:18784416
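
    A linear mixed-effects model of the type described can be sketched with statsmodels as below; the simulated data, variable names, and random-effects structure are placeholders chosen for illustration, not the cohort data or the authors' exact specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated long-format data standing in for the cohort: one row per
    # person-year, with baseline respiratory and leg strength (placeholders).
    rng = np.random.default_rng(0)
    n, years = 200, 6
    df = pd.DataFrame({
        "id": np.repeat(np.arange(n), years),
        "time": np.tile(np.arange(years), n),
        "resp_strength": np.repeat(rng.normal(size=n), years),
        "leg_strength": np.repeat(rng.normal(size=n), years),
    })
    slope = -0.12 + 0.04 * df["resp_strength"]          # strength slows the decline
    df["mobility"] = 3 + slope * df["time"] + rng.normal(scale=0.3, size=len(df))

    # Random intercept and slope per person; the strength-by-time interaction is
    # the term corresponding to "slower rate of mobility decline".
    model = smf.mixedlm("mobility ~ time * resp_strength + time * leg_strength",
                        df, groups=df["id"], re_formula="~time")
    print(model.fit().summary())
    ```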

  3. Impact of slowdown of Atlantic overturning circulation on heat and freshwater transports

    NASA Astrophysics Data System (ADS)

    Kelly, Kathryn A.; Drushka, Kyla; Thompson, LuAnne; Le Bars, Dewi; McDonagh, Elaine L.

    2016-07-01

    Recent measurements of the strength of the Atlantic overturning circulation at 26°N show a 1 year drop and partial recovery amid a gradual weakening. To examine the extent and impact of the slowdown on basin wide heat and freshwater transports for 2004-2012, a box model that assimilates hydrographic and satellite observations is used to estimate heat transport and freshwater convergence as residuals of the heat and freshwater budgets. Using an independent transport estimate, convergences are converted to transports, which show a high level of spatial coherence. The similarity between Atlantic heat transport and the Agulhas Leakage suggests that it is the source of the surface heat transport anomalies. The freshwater budget in the North Atlantic is dominated by a decrease in freshwater flux. The increasing salinity during the slowdown supports modeling studies that show that heat, not freshwater, drives trends in the overturning circulation in a warming climate.

  4. Estimation of metal strength at very high rates using free-surface Richtmyer–Meshkov Instabilities

    DOE PAGES

    Prime, Michael Bruce; Buttler, William Tillman; Buechler, Miles Allen; ...

    2017-03-08

    Recently, Richtmyer–Meshkov Instabilities (RMI) have been proposed for studying the average strength at strain rates up to at least 10^7/s. RMI experiments involve shocking a metal interface that has initial sinusoidal perturbations. The perturbations invert and grow subsequent to shock and may arrest because of strength effects. In this work we present new RMI experiments and data on a copper target that had five regions with different perturbation amplitudes on the free surface opposite the shock. We estimate the high-rate, low-pressure copper strength by comparing experimental data with Lagrangian numerical simulations. From a detailed computational study we find that mesh convergence must be carefully addressed to accurately compare with experiments, and numerical viscosity has a strong influence on convergence. We also find that modeling the as-built perturbation geometry rather than the nominal makes a significant difference. Because of the confounding effect of tensile damage on total spike growth, which has previously been used as the metric for estimating strength, we instead use a new strength metric: the peak velocity during spike growth. Furthermore, this new metric also allows us to analyze a broader set of experimental results that are sensitive to strength because some larger initial perturbations grow unstably to failure and so do not have a finite total spike growth.

  5. Estimation of metal strength at very high rates using free-surface Richtmyer–Meshkov Instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prime, Michael Bruce; Buttler, William Tillman; Buechler, Miles Allen

    Recently, Richtmyer–Meshkov Instabilities (RMI) have been proposed for studying the average strength at strain rates up to at least 10^7/s. RMI experiments involve shocking a metal interface that has initial sinusoidal perturbations. The perturbations invert and grow subsequent to shock and may arrest because of strength effects. In this work we present new RMI experiments and data on a copper target that had five regions with different perturbation amplitudes on the free surface opposite the shock. We estimate the high-rate, low-pressure copper strength by comparing experimental data with Lagrangian numerical simulations. From a detailed computational study we find that mesh convergence must be carefully addressed to accurately compare with experiments, and numerical viscosity has a strong influence on convergence. We also find that modeling the as-built perturbation geometry rather than the nominal makes a significant difference. Because of the confounding effect of tensile damage on total spike growth, which has previously been used as the metric for estimating strength, we instead use a new strength metric: the peak velocity during spike growth. Furthermore, this new metric also allows us to analyze a broader set of experimental results that are sensitive to strength because some larger initial perturbations grow unstably to failure and so do not have a finite total spike growth.

  6. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    NASA Astrophysics Data System (ADS)

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.; Church, Sarah E.; Wechsler, Risa H.

    2017-09-01

    Line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN-halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy-halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  7. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.

    Line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  8. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.

    Here, line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  9. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    DOE PAGES

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.; ...

    2017-08-31

    Here, line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  10. Utility of correlation techniques in gravity and magnetic interpretation

    NASA Technical Reports Server (NTRS)

    Chandler, V. W.; Koski, J. S.; Braile, L. W.; Hinze, W. J.

    1977-01-01

    Internal correspondence uses Poisson's Theorem in a moving-window linear regression analysis between the anomalous first vertical derivative of gravity and the total magnetic field reduced to the pole. The regression parameters provide critical information on source characteristics: the correlation coefficient indicates the strength of the relation between magnetics and gravity, the slope value gives Δj/Δσ estimates for the anomalous source, and the intercept furnishes information on anomaly interference. Cluster analysis consists of classifying subsets of data into groups of similar anomalies based on correlation of selected anomaly characteristics. Model studies are used to illustrate implementation and interpretation procedures for these methods, particularly internal correspondence. Analysis of the results of applying these methods to data from the midcontinent and a transcontinental profile shows they can be useful in identifying crustal provinces, providing information on horizontal and vertical variations of physical properties over province-size zones, validating long-wavelength anomalies, and isolating geomagnetic field removal problems.
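
    The internal-correspondence procedure can be sketched as a moving-window least-squares regression of the reduced-to-pole magnetic field on the first vertical derivative of gravity, recording the slope, intercept, and correlation coefficient at each window centre; the profiles below are synthetic placeholders, not the midcontinent data.

    ```python
    import numpy as np

    def internal_correspondence(dgz, t_pole, window=25):
        """Moving-window regression of reduced-to-pole magnetics (t_pole) on the
        first vertical derivative of gravity (dgz), in the spirit of Poisson's
        relation. Returns (slope, intercept, correlation) per window centre."""
        half = window // 2
        out = []
        for i in range(half, len(dgz) - half):
            x = dgz[i - half:i + half + 1]
            y = t_pole[i - half:i + half + 1]
            slope, intercept = np.polyfit(x, y, 1)     # slope ~ Δj/Δσ estimate
            r = np.corrcoef(x, y)[0, 1]                # strength of the relation
            out.append((slope, intercept, r))
        return np.array(out)

    # Synthetic coincident profiles: a shared source makes the two fields correlate.
    xprof = np.linspace(-50.0, 50.0, 401)
    shared = np.exp(-(xprof / 8.0) ** 2)
    rng = np.random.default_rng(0)
    dgz = 1.5 * shared + 0.05 * rng.normal(size=xprof.size)      # gravity gradient
    t_pole = 40.0 * shared + 2.0 * rng.normal(size=xprof.size)   # magnetics, nT

    params = internal_correspondence(dgz, t_pole)
    print(params[:3])   # (slope, intercept, correlation coefficient) per window
    ```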

  11. Vertebral Volumetric Bone Density and Strength Are Impaired in Women With Low-Weight and Atypical Anorexia Nervosa

    PubMed Central

    Bachmann, Katherine N.; Schorr, Melanie; Bruno, Alexander G.; Bredella, Miriam A.; Lawson, Elizabeth A.; Gill, Corey M.; Singhal, Vibha; Meenaghan, Erinne; Gerweck, Anu V.; Slattery, Meghan; Eddy, Kamryn T.; Ebrahimi, Seda; Koman, Stuart L.; Greenblatt, James M.; Keane, Robert J.; Weigel, Thomas; Misra, Madhusmita; Bouxsein, Mary L.; Klibanski, Anne

    2017-01-01

    Context: Areal bone mineral density (BMD) is lower, particularly at the spine, in low-weight women with anorexia nervosa (AN). However, little is known about vertebral integral volumetric BMD (Int.vBMD) or vertebral strength across the AN weight spectrum, including “atypical” AN [body mass index (BMI) ≥18.5 kg/m2]. Objective: To investigate Int.vBMD and vertebral strength, and their determinants, across the AN weight spectrum Design: Cross-sectional observational study Setting: Clinical research center Participants: 153 women (age 18 to 45): 64 with low-weight AN (BMI <18.5 kg/m2; 58% amenorrheic), 44 with atypical AN (18.5≤BMI<23 kg/m2; 30% amenorrheic), 45 eumenorrheic controls (19.2≤BMI<25 kg/m2). Measures: Int.vBMD and cross-sectional area (CSA) by quantitative computed tomography of L4; estimated vertebral strength (derived from Int.vBMD and CSA) Results: Int.vBMD and estimated vertebral strength were lowest in low-weight AN, intermediate in atypical AN, and highest in controls. CSA did not differ between groups; thus, vertebral strength (calculated using Int.vBMD and CSA) was driven by Int.vBMD. In AN, Int.vBMD and vertebral strength were associated positively with current BMI and nadir lifetime BMI (independent of current BMI). Int.vBMD and vertebral strength were lower in AN with current amenorrhea and longer lifetime amenorrhea duration. Among amenorrheic AN, Int.vBMD and vertebral strength were associated positively with testosterone. Conclusions: Int.vBMD and estimated vertebral strength (driven by Int.vBMD) are impaired across the AN weight spectrum and are associated with low BMI and endocrine dysfunction, both current and previous. Women with atypical AN experience diminished vertebral strength, partially due to prior low-weight and/or amenorrhea. Lack of current low-weight or amenorrhea in atypical AN does not preclude compromise of vertebral strength. PMID:27732336

  12. Vertebral Volumetric Bone Density and Strength Are Impaired in Women With Low-Weight and Atypical Anorexia Nervosa.

    PubMed

    Bachmann, Katherine N; Schorr, Melanie; Bruno, Alexander G; Bredella, Miriam A; Lawson, Elizabeth A; Gill, Corey M; Singhal, Vibha; Meenaghan, Erinne; Gerweck, Anu V; Slattery, Meghan; Eddy, Kamryn T; Ebrahimi, Seda; Koman, Stuart L; Greenblatt, James M; Keane, Robert J; Weigel, Thomas; Misra, Madhusmita; Bouxsein, Mary L; Klibanski, Anne; Miller, Karen K

    2017-01-01

    Areal bone mineral density (BMD) is lower, particularly at the spine, in low-weight women with anorexia nervosa (AN). However, little is known about vertebral integral volumetric BMD (Int.vBMD) or vertebral strength across the AN weight spectrum, including "atypical" AN [body mass index (BMI) ≥18.5 kg/m2]. To investigate Int.vBMD and vertebral strength, and their determinants, across the AN weight spectrum. Cross-sectional observational study. Clinical research center. 153 women (age 18 to 45): 64 with low-weight AN (BMI <18.5 kg/m2; 58% amenorrheic), 44 with atypical AN (18.5≤BMI<23 kg/m2; 30% amenorrheic), 45 eumenorrheic controls (19.2≤BMI<25 kg/m2). Int.vBMD and cross-sectional area (CSA) by quantitative computed tomography of L4; estimated vertebral strength (derived from Int.vBMD and CSA). Int.vBMD and estimated vertebral strength were lowest in low-weight AN, intermediate in atypical AN, and highest in controls. CSA did not differ between groups; thus, vertebral strength (calculated using Int.vBMD and CSA) was driven by Int.vBMD. In AN, Int.vBMD and vertebral strength were associated positively with current BMI and nadir lifetime BMI (independent of current BMI). Int.vBMD and vertebral strength were lower in AN with current amenorrhea and longer lifetime amenorrhea duration. Among amenorrheic AN, Int.vBMD and vertebral strength were associated positively with testosterone. Int.vBMD and estimated vertebral strength (driven by Int.vBMD) are impaired across the AN weight spectrum and are associated with low BMI and endocrine dysfunction, both current and previous. Women with atypical AN experience diminished vertebral strength, partially due to prior low-weight and/or amenorrhea. Lack of current low-weight or amenorrhea in atypical AN does not preclude compromise of vertebral strength. Copyright © 2017 by the Endocrine Society

  13. Keeping an eye on the ring: COMS plaque loading optimization for improved dose conformity and homogeneity.

    PubMed

    Gagne, Nolan L; Cutright, Daniel R; Rivard, Mark J

    2012-09-01

    To improve tumor dose conformity and homogeneity for COMS plaque brachytherapy by investigating the dosimetric effects of varying component source ring radionuclides and source strengths. The MCNP5 Monte Carlo (MC) radiation transport code was used to simulate plaque heterogeneity-corrected dose distributions for individually-activated source rings of 14, 16 and 18 mm diameter COMS plaques, populated with (103)Pd, (125)I and (131)Cs sources. Ellipsoidal tumors were contoured for each plaque size and MATLAB programming was developed to generate tumor dose distributions for all possible ring weighting and radionuclide permutations for a given plaque size and source strength resolution, assuming a 75 Gy apical prescription dose. These dose distributions were analyzed for conformity and homogeneity and compared to reference dose distributions from uniformly-loaded (125)I plaques. The most conformal and homogeneous dose distributions were reproduced within a reference eye environment to assess organ-at-risk (OAR) doses in the Pinnacle(3) treatment planning system (TPS). The gamma-index analysis method was used to quantitatively compare MC and TPS-generated dose distributions. Concentrating > 97% of the total source strength in a single or pair of central (103)Pd seeds produced the most conformal dose distributions, with tumor basal doses a factor of 2-3 higher and OAR doses a factor of 2-3 lower than those of corresponding uniformly-loaded (125)I plaques. Concentrating 82-86% of the total source strength in peripherally-loaded (131)Cs seeds produced the most homogeneous dose distributions, with tumor basal doses 17-25% lower and OAR doses typically 20% higher than those of corresponding uniformly-loaded (125)I plaques. Gamma-index analysis found > 99% agreement between MC and TPS dose distributions. A method was developed to select intra-plaque ring radionuclide compositions and source strengths to deliver more conformal and homogeneous tumor dose distributions than uniformly-loaded (125)I plaques. This method may support coordinated investigations of an appropriate clinical target for eye plaque brachytherapy.

  14. A Framework for the Analysis of the Reserve Officer Augmentation Process in the United States Marine Corps

    DTIC Science & Technology

    1987-12-01

    occupation group, category (i.e., strength, loss, etc.), years of commissioned service (YCS), grade, occupation, source of commission, education, sex ... [example of MCORP output: Occupation Group: All; Category: Strength; YCS: 01-09; Grade: All Unrestricted Officers; Occupation: All; Source: All; Education: All; Sex: All] ... source of commission, sex, MOS, GCT, and other pertinent variables such as the performance index. A Probit or Logit model could be utilized. The variables

  15. Characterization of undrained shear strength profiles for soft clays at six sites in Texas.

    DOT National Transportation Integrated Search

    2009-01-01

    TxDOT frequently uses Texas Cone Penetrometer (TCP) blow counts to estimate undrained shear strength. However, the current correlations between TCP resistance and undrained shear strength have been developed primarily for significantly stronger s...

  16. A field like today's? The strength of the geomagnetic field 1.1 billion years ago

    NASA Astrophysics Data System (ADS)

    Sprain, Courtney J.; Swanson-Hysell, Nicholas L.; Fairchild, Luke M.; Gaastra, Kevin

    2018-06-01

    Palaeomagnetic data from ancient rocks are one of the few types of observational data that can be brought to bear on the long-term evolution of Earth's core. A recent compilation of palaeointensity estimates from throughout Earth history has been interpreted to indicate that Earth's magnetic field strength increased in the Mesoproterozoic (between 1.5 and 1.0 billion years ago), with this increase taken to mark the onset of inner core nucleation. However, much of the data within the Precambrian palaeointensity database are from Thellier-style experiments with non-ideal behaviour that manifests in results such as double-slope Arai plots. Choices made when interpreting these data may significantly change conclusions about long-term trends in the intensity of Earth's geomagnetic field. In this study, we present new palaeointensity results from volcanics of the ˜1.1-billion-year-old North American Midcontinent Rift. While most of the results exhibit non-ideal double-slope or sagging behaviour in Arai plots, some flows have more ideal single-slope behaviour leading to palaeointensity estimates that may be some of the best constraints on the strength of Earth's field for this time. Taken together, new and previously published palaeointensity data from the Midcontinent Rift yield a median field strength estimate of 56.0 ZAm2—very similar to the median for the past 300 Myr. These field strength estimates are distinctly higher than those for the preceding billion years (Ga) after excluding ca. 1.3 Ga data that may be biased by non-ideal behaviour—consistent with an increase in field strength in the late Mesoproterozoic. However, given that ˜90 per cent of palaeointensity estimates from 1.1 to 0.5 Ga come from the Midcontinent Rift, it is difficult to evaluate whether these high values relative to those estimated for the preceding billion years are the result of a stepwise, sustained increase in dipole moment. Regardless, palaeointensity estimates from the Midcontinent Rift indicate that the surface expression of Earth's geomagnetic field at ˜1.1 Ga may have been similar to that on the present-day Earth.

  17. A field like today's? The strength of the geomagnetic field 1.1 billion years ago

    NASA Astrophysics Data System (ADS)

    Sprain, Courtney J.; Swanson-Hysell, Nicholas L.; Fairchild, Luke M.; Gaastra, Kevin

    2018-02-01

    Paleomagnetic data from ancient rocks are one of the few types of observational data that can be brought to bear on the long-term evolution of Earth's core. A recent compilation of paleointensity estimates from throughout Earth history has been interpreted to indicate that Earth's magnetic field strength increased in the Mesoproterozoic (between 1.5 and 1.0 billion years ago), with this increase taken to mark the onset of inner core nucleation. However, much of the data within the Precambrian paleointensity database are from Thellier-style experiments with non-ideal behavior that manifests in results such as double-slope Arai plots. Choices made when interpreting these data may significantly change conclusions about long-term trends in the intensity of Earth's geomagnetic field. In this study, we present new paleointensity results from volcanics of the ˜1.1 billion-year-old North American Midcontinent Rift. While most of the results exhibit non-ideal double-slope or sagging behavior in Arai plots, some flows have more ideal single-slope behavior leading to paleointensity estimates that may be some of the best constraints on the strength of Earth's field for this time. Taken together, new and previously published paleointensity data from the Midcontinent Rift yield a median field strength estimate of 56.0 ZAm2—very similar to the median for the past 300 million years. These field strength estimates are distinctly higher than those for the preceding billion years after excluding ca. 1.3 Ga data that may be biased by non-ideal behavior—consistent with an increase in field strength in the late Mesoproterozoic. However, given that ˜90 per cent of paleointensity estimates from 1.1 to 0.5 Ga come from the Midcontinent Rift, it is difficult to evaluate whether these high values relative to those estimated for the preceding billion years are the result of a stepwise, sustained increase in dipole moment. Regardless, paleointensity estimates from the Midcontinent Rift indicate that the surface expression of Earth's geomagnetic field at ˜1.1 Ga may have been similar to that on the present-day Earth.

  18. A comparison of DXA and CT based methods for estimating the strength of the femoral neck in post-menopausal women

    PubMed Central

    Danielson, Michelle E.; Beck, Thomas J.; Karlamangla, Arun S.; Greendale, Gail A.; Atkinson, Elizabeth J.; Lian, Yinjuan; Khaled, Alia S.; Keaveny, Tony M.; Kopperdahl, David; Ruppert, Kristine; Greenspan, Susan; Vuga, Marike; Cauley, Jane A.

    2013-01-01

    Purpose Simple 2-dimensional (2D) analyses of bone strength can be done with dual energy x-ray absorptiometry (DXA) data and applied to large data sets. We compared 2D analyses to 3-dimensional (3D) finite element analyses (FEA) based on quantitative computed tomography (QCT) data. Methods 213 women participating in the Study of Women’s Health across the Nation (SWAN) received hip DXA and QCT scans. DXA BMD and femoral neck diameter and axis length were used to estimate geometry for composite bending (BSI) and compressive strength (CSI) indices. These and comparable indices computed by Hip Structure Analysis (HSA) on the same DXA data were compared to indices using QCT geometry. Simple 2D engineering simulations of a fall impacting on the greater trochanter were generated using HSA and QCT femoral neck geometry; these estimates were benchmarked to a 3D FEA of fall impact. Results DXA-derived CSI and BSI computed from BMD and by HSA correlated well with each other (R= 0.92 and 0.70) and with QCT-derived indices (R= 0.83–0.85 and 0.65–0.72). The 2D strength estimate using HSA geometry correlated well with that from QCT (R=0.76) and with the 3D FEA estimate (R=0.56). Conclusions Femoral neck geometry computed by HSA from DXA data corresponds well enough to that from QCT for an analysis of load stress in the larger SWAN data set. Geometry derived from BMD data performed nearly as well. Proximal femur breaking strength estimated from 2D DXA data is not as well correlated with that derived by a 3D FEA using QCT data. PMID:22810918

  19. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources.

    PubMed

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. EEG data were generated by simulating multiple cortical sources (2-4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms.
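
    A simplified, count-based version of the Precision and Recall metrics defined above is sketched below for a thresholded current-density map; the threshold, vertex arrays, and the count-based simplification are assumptions for illustration, not the authors' evaluation code.

    ```python
    import numpy as np

    def precision_recall(reconstructed, true_sources, thresh=0.5):
        """Count-based Precision/Recall on a vertex-wise map (illustrative).
        reconstructed: estimated |current density| per cortical vertex.
        true_sources : boolean mask of simulated source vertices.
        The threshold and arrays are placeholders."""
        active = reconstructed >= thresh * reconstructed.max()
        tp = np.sum(active & true_sources)
        precision = tp / max(active.sum(), 1)     # fraction of reconstruction that is real
        recall = tp / max(true_sources.sum(), 1)  # fraction of simulated sources recovered
        return precision, recall

    # Toy 10-vertex example: two simulated sources, a slightly blurred reconstruction.
    truth = np.zeros(10, dtype=bool)
    truth[[2, 7]] = True
    recon = np.array([0.0, 0.1, 0.9, 0.4, 0.0, 0.0, 0.2, 0.8, 0.6, 0.0])
    print(precision_recall(recon, truth))
    ```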

  20. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources

    PubMed Central

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Background Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. Methods EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. Results While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms. PMID:26809000

  1. Investigation of low compressive strengths of concrete in paving, precast and structural concrete

    DOT National Transportation Integrated Search

    2000-08-01

    This research examines the causes for a high incidence of catastrophically low compressive strengths, primarily on structural concrete, during the 1997 construction season. The source for the low strengths was poor aggregate-paste bond associated wit...

  2. Cues of upper body strength account for most of the variance in men's bodily attractiveness.

    PubMed

    Sell, Aaron; Lukazsweski, Aaron W; Townsley, Michael

    2017-12-20

    Evolution equips sexually reproducing species with mate choice mechanisms that function to evaluate the reproductive consequences of mating with different individuals. Indeed, evolutionary psychologists have shown that women's mate choice mechanisms track many cues of men's genetic quality and ability to invest resources in the woman and her offspring. One variable that predicted both a man's genetic quality and his ability to invest is the man's formidability (i.e. fighting ability or resource holding power/potential). Modern women, therefore, should have mate choice mechanisms that respond to ancestral cues of a man's fighting ability. One crucial component of a man's ability to fight is his upper body strength. Here, we test how important physical strength is to men's bodily attractiveness. Three sets of photographs of men's bodies were shown to raters who estimated either their physical strength or their attractiveness. Estimates of physical strength determined over 70% of men's bodily attractiveness. Additional analyses showed that tallness and leanness were also favoured, and, along with estimates of physical strength, accounted for 80% of men's bodily attractiveness. Contrary to popular theories of men's physical attractiveness, there was no evidence of a nonlinear effect; the strongest men were the most attractive in all samples. © 2017 The Author(s).

  3. Estimating ionospheric currents by inversion from ground-based geomagnetic data and calculating geoelectric fields for studies of geomagnetically induced currents

    NASA Astrophysics Data System (ADS)

    de Villiers, J. S.; Pirjola, R. J.; Cilliers, P. J.

    2016-09-01

    This research focuses on the inversion of geomagnetic variation field measurements to obtain the source currents in the ionosphere and magnetosphere, and to determine the geoelectric fields at the Earth's surface. During geomagnetic storms, the geoelectric fields create geomagnetically induced currents (GIC) in power networks. These GIC may disturb the operation of power systems, cause damage to power transformers, and even result in power blackouts. In this model, line currents running east-west along given latitudes are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground being composed of a zero magnetic east component and a nonzero electric east component. The line current parameters are estimated by inverting Fourier integrals (over wavenumber) of elementary geomagnetic fields using the Levenberg-Marquardt technique. The output parameters of the model are the ionospheric current strength and the geoelectric east component at the Earth's surface. A conductivity profile of the Earth is adapted from a shallow layered-Earth model for one observatory, together with a deep-layer model derived from satellite observations. This profile is used to obtain the ground surface impedance and therefore the reflection coefficient in the integrals. The input for the model is a spectrum of the geomagnetic data for 31 May 2013. The output parameters of the model are spectra of the ionospheric current strength and of the surface geoelectric field. The inverse Fourier transforms of these spectra provide the time variations on the same day. The geoelectric field data can be used as a proxy for GIC in the prediction of GIC for power utilities. The current strength data can assist in the interpretation of upstream solar wind behaviour.
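
    As an illustration of the inversion step described above, the sketch below fits the strength and height of a single overhead line current to ground magnetic data with the Levenberg-Marquardt algorithm. It uses a simplified free-space Biot-Savart forward model rather than the paper's Fourier-integral formulation with a reflection coefficient, and all parameter values are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def line_current_field(params, d_m):
    """Horizontal (north) magnetic disturbance at ground level from an infinite
    east-west line current, in a free-space Biot-Savart approximation (no
    induction in the conducting Earth). params = (current in MA, height in km),
    kept in these units so both fitted parameters are of order one."""
    current_a = params[0] * 1.0e6
    height_m = params[1] * 1.0e3
    return MU0 * current_a / (2.0 * np.pi) * height_m / (d_m**2 + height_m**2)

def residuals(params, d_m, b_obs):
    return line_current_field(params, d_m) - b_obs

# Synthetic "observations": a 1 MA electrojet-like current at 110 km altitude,
# sampled at ground stations spread +/- 600 km around the point beneath it.
d_obs = np.linspace(-600e3, 600e3, 25)
b_obs = line_current_field((1.0, 110.0), d_obs)

fit = least_squares(residuals, x0=(0.5, 90.0), args=(d_obs, b_obs), method="lm")
print(fit.x)  # recovers approximately [1.0, 110.0]
```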

  4. Truncation of the Accretion Disk at One-third of the Eddington Limit in the Neutron Star Low-mass X-Ray Binary Aquila X-1

    NASA Astrophysics Data System (ADS)

    Ludlam, R. M.; Miller, J. M.; Degenaar, N.; Sanna, A.; Cackett, E. M.; Altamirano, D.; King, A. L.

    2017-10-01

    We perform a reflection study on a new observation of the neutron star (NS) low-mass X-ray binary Aquila X-1 taken with NuSTAR during the 2016 August outburst and compare with the 2014 July outburst. The source was captured at ~32% L_Edd, which is over four times more luminous than the previous observation during the 2014 outburst. Both observations exhibit a broadened Fe line profile. Through reflection modeling, we determine that the inner disk is truncated at R_in,2016 = 11 (+2/-1) R_g (where R_g = GM/c^2) and R_in,2014 = 14 ± 2 R_g (errors quoted at the 90% confidence level). Fiducial NS parameters (M_NS = 1.4 M_⊙, R_NS = 10 km) give a stellar radius of R_NS = 4.85 R_g; our measurements rule out a disk extending to that radius at more than the 6σ level of confidence. We are able to place an upper limit on the magnetic field strength of B ≤ 3.0-4.5 × 10^9 G at the magnetic poles, assuming that the disk is truncated at the magnetospheric radius in each case. This is consistent with previous estimates of the magnetic field strength for Aquila X-1. However, if the magnetosphere is not responsible for truncating the disk prior to the NS surface, we estimate a boundary layer with a maximum extent of R_BL,2016 ~ 10 R_g and R_BL,2014 ~ 6 R_g. Additionally, we compare the magnetic field strength inferred from the Fe line profile of Aquila X-1 and other NS low-mass X-ray binaries to known accreting millisecond X-ray pulsars.
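
    For readers unfamiliar with the units, the radii above are quoted in gravitational radii, R_g = GM/c^2. A small sketch converting them to kilometres for the fiducial neutron-star mass (constants rounded; purely illustrative):

```python
# Convert an inner-disk radius quoted in gravitational radii (R_g = GM/c^2)
# to kilometres for the fiducial neutron-star mass used in the abstract.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg

def r_g_km(mass_solar):
    return G * (mass_solar * M_SUN) / c**2 / 1e3

rg = r_g_km(1.4)               # ~2.07 km for M_NS = 1.4 M_sun
print(rg)
print(11 * rg, 14 * rg)        # R_in in 2016 (~23 km) and 2014 (~29 km)
print(10.0 / rg)               # a 10 km stellar radius in units of R_g (~4.8)
```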

  5. Source Parameters and High Frequency Characteristics of Local Events (0.5 ≤ M_L ≤ 2.9) Around Bilaspur Region of the Himachal Himalaya

    NASA Astrophysics Data System (ADS)

    Vandana; Kumar, Ashwani; Gupta, S. C.; Mishra, O. P.; Kumar, Arjun; Sandeep

    2017-04-01

    Source parameters of 41 local events (0.5 ≤ M_L ≤ 2.9) that occurred around the Bilaspur region of the Himachal Lesser Himalaya from May 2013 to March 2014 have been estimated adopting the Brune model. The estimated source parameters include seismic moments (M_0), source radii (r), and stress drops (Δσ), which are found to vary from 4.9 × 10^19 to 7 × 10^21 dyne-cm, about 187-518 m, and less than 1 bar to 51 bars, respectively. The decay of high-frequency acceleration spectra at frequencies above f_max has been modelled using two functions: a high-cut filter and a κ factor. Stress drops of 11 events, with M_0 between 1 × 10^21 and 7 × 10^21 dyne-cm, vary from 11 bars to 51 bars with an average of 22 bars. From the variation of the maximum stress drop with focal depth it appears that the strength of the upper crust decreases below 20 km. A scaling law M_0 = 2 × 10^22 f_c^(-3.03) between M_0 and corner frequency (f_c) has been developed for the region. This law almost agrees with that for the Kameng region of the Arunachal Lesser Himalaya. f_c is found to be source dependent whereas f_max is source independent, which seems to indicate that the size of the cohesive zone is not sensitive to the earthquake size. At four sites f_max is found to vary from 14 to 23, 11 to 19, 9 to 23 and 4 to 11 Hz, respectively. The κ is found to vary from 0.01 to 0.035 s with an average of 0.02 s. This range of variation is large compared with the κ variation between 0.023 and 0.07 s for the Garhwal and Kumaon Himalaya. For various regions of the world, κ varies over a broad range from 0.003 to 0.08 s, and for the Bilaspur region the κ estimates are found to be consistent with other regions of the world.
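
    The Brune-model quantities quoted above follow from two standard relations: the source radius r = 2.34β/(2πf_c) and the stress drop Δσ = (7/16)M_0/r^3. A minimal sketch in CGS units, with an assumed shear-wave velocity of 3.5 km/s (the paper's exact velocity model is not given here):

```python
import math

def brune_source_parameters(m0_dyne_cm, fc_hz, beta_cm_s=3.5e5):
    """Brune (1970) point-source relations:
        source radius  r = 2.34 * beta / (2 * pi * fc)
        stress drop    delta_sigma = (7/16) * M0 / r^3
    CGS units to match the abstract (dyne-cm, cm/s); the radius is returned
    in metres and the stress drop in bars (1 bar = 1e6 dyne/cm^2)."""
    r_cm = 2.34 * beta_cm_s / (2.0 * math.pi * fc_hz)
    stress_drop_bar = (7.0 / 16.0) * m0_dyne_cm / r_cm**3 / 1.0e6
    return r_cm / 100.0, stress_drop_bar

# Example consistent with the reported scaling law M0 ~ 2e22 * fc^-3.03:
fc = 5.0
m0 = 2.0e22 * fc**-3.03
print(brune_source_parameters(m0, fc))  # ~ (261 m, ~4 bars)
```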

  6. Estimates of the effective compressive strength

    NASA Astrophysics Data System (ADS)

    Goldstein, R. V.; Osipenko, N. M.

    2017-07-01

    One problem encountered when determining the effective mechanical properties of large-scale objects, which is needed to calculate their strength in processes of mechanical interaction with other objects, is the possible variability in their local properties, including variability caused by external physical factors. Such problems include determining the effective strength of bodies for which one dimension (the thickness) is significantly smaller than the others and whose properties and/or composition can vary across the thickness. A method for estimating the effective strength of such bodies is proposed and illustrated with the example of ice cover strength under longitudinal compression, taking into account a partial loss of the bearing capacity of the ice during deformation. The role of failure localization processes is shown. It is demonstrated that the proposed approach can be used in other problems of fracture mechanics.

  7. Dark production of carbon monoxide (CO) from dissolved organic matter in the St. Lawrence estuarine system: Implication for the global coastal and blue water CO budgets

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Xie, Huixiang; Fichot, CéDric G.; Chen, Guohua

    2008-12-01

    We investigated the thermal (dark) production of carbon monoxide (CO) from dissolved organic matter (DOM) in the water column of the St. Lawrence estuarine system in spring 2007. The production rate, Q_CO, decreased seaward horizontally and downward vertically. Q_CO exhibited a positive, linear correlation with the abundance of chromophoric dissolved organic matter (CDOM). Terrestrial DOM was more efficient at producing CO than marine DOM. The temperature dependence of Q_CO can be characterized by the Arrhenius equation, with the activation energies of freshwater samples being higher than those of salty samples. Q_CO remained relatively constant between pH 4-6, increased slowly between pH 6-8 and then rapidly with further rising pH. Ionic strength and iron chemistry had little influence on Q_CO. An empirical equation, describing Q_CO as a function of CDOM abundance, temperature, pH, and salinity, was established to evaluate CO dark production in the global coastal waters (depth < 200 m). The total coastal CO dark production from DOM was estimated to be from 0.46 to 1.50 Tg CO-C a^-1 (Tg carbon from CO per year). We speculated the global oceanic (coastal plus open ocean) CO dark production to be in the range from 4.87 to 15.8 Tg CO-C a^-1 by extrapolating the coastal water-based results to blue waters (depth > 200 m). Both the coastal and global dark source strengths are significant compared to the corresponding photochemical CO source strengths (coastal: ~2.9 Tg CO-C a^-1; global: ~50 Tg CO-C a^-1). Steady state deepwater CO concentrations inferred from Q_CO and microbial CO uptake rates are <0.1 nmol L^-1.
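
    The Arrhenius temperature dependence mentioned above is usually fitted by regressing ln(Q_CO) on 1/T. A minimal sketch with synthetic data (the activation energy and rate units below are illustrative, not the paper's values):

```python
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

def arrhenius_fit(temperature_k, q_co):
    """Fit ln(Q_CO) = ln(A) - Ea/(R*T) by linear regression of ln(Q) on 1/T.
    Returns the activation energy Ea (kJ/mol) and pre-exponential factor A."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temperature_k),
                                  np.log(np.asarray(q_co)), 1)
    return -slope * R_GAS / 1e3, np.exp(intercept)

# Synthetic example: Ea = 60 kJ/mol, A = 1e9 (arbitrary rate units).
T = np.array([278.0, 283.0, 288.0, 293.0, 298.0])
Q = 1e9 * np.exp(-60e3 / (R_GAS * T))
print(arrhenius_fit(T, Q))  # ~ (60.0, 1e9)
```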

  8. Analysis of soybean leaf metabolism and seed coat transcriptome reveal sink strength is maintained under abiotic stress conditions

    USDA-ARS?s Scientific Manuscript database

    The seed coat is a vital tissue for directing the flow of photosynthate from source leaves to the embryo and cotyledons during seed development. By forming a sucrose gradient, the seed coat promotes transport of sugars from source leaves to seeds, thereby establishing sink strength. Understanding th...

  9. Ultrasound acoustic wave energy transfer and harvesting

    NASA Astrophysics Data System (ADS)

    Shahab, Shima; Leadenham, Stephen; Guillot, François; Sabra, Karim; Erturk, Alper

    2014-04-01

    This paper investigates low-power electricity generation from ultrasound acoustic wave energy transfer combined with piezoelectric energy harvesting for wireless applications ranging from medical implants to naval sensor systems. The focus is placed on an underwater system that consists of a pulsating source for spherical wave generation and a harvester connected to an external resistive load for quantifying the electrical power output. An analytical electro-acoustic model is developed to relate the source strength to the electrical power output of the harvester located at a specific distance from the source. The model couples the energy harvester dynamics (piezoelectric device and electrical load) with the source strength through the acoustic-structure interaction at the harvester-fluid interface. Case studies are given for a detailed understanding of the coupled system dynamics under various conditions. Specifically the relationship between the electrical power output and system parameters, such as the distance of the harvester from the source, dimensions of the harvester, level of source strength, and electrical load resistance are explored. Sensitivity of the electrical power output to the excitation frequency in the neighborhood of the harvester's underwater resonance frequency is also reported.
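
    For orientation, the free-field pressure radiated by a pulsating (monopole) source scales with the source strength Q as |p(r)| = ρckQ/(4πr). The sketch below evaluates this textbook relation only; the paper's model additionally couples the harvester dynamics to the field through the acoustic-structure interaction, which is not reproduced here:

```python
import numpy as np

def monopole_pressure_amplitude(q_source, freq_hz, r_m, rho=1000.0, c=1500.0):
    """Free-field pressure amplitude of a pulsating (monopole) source in water:
        |p(r)| = rho * c * k * Q / (4 * pi * r),  k = 2*pi*f / c
    where Q is the source volume-velocity amplitude (m^3/s)."""
    k = 2.0 * np.pi * freq_hz / c
    return rho * c * k * q_source / (4.0 * np.pi * r_m)

# Illustrative numbers: Q = 1e-5 m^3/s at 50 kHz, observed 0.1 m from the source.
print(monopole_pressure_amplitude(1e-5, 50e3, 0.1))  # ~2500 Pa
```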

  10. The source of the intermediate wavelength component of the Earth's magnetic field

    NASA Technical Reports Server (NTRS)

    Harrison, C. G. A.

    1985-01-01

    The intermediate wavelength component of the Earth's magnetic field has been well documented by observations made by MAGSAT. It has been shown that some significant fraction of this component is likely to be caused within the core of the Earth. Evidence for this comes from analysis of the intermediate wavelength component revealed by spherical harmonics between degrees 14 and 23, in which it is shown that it is unlikely that all of this signal is crustal. Firstly, there is no difference between average continental source strength and average oceanic source strength, which is unlikely to be the case if the anomalies reside within the crust, taking into account the very different nature and thickness of continental and oceanic crust. Secondly, there is almost no latitudinal variation in the source strength, which is puzzling if the sources are within the crust and have been formed by present or past magnetic fields with a factor of two difference in intensity between the equator and the poles. If however most of the sources for this field reside within the core, then these observations are not very surprising.

  11. Behavioral and Emotional Strengths among Youth in Systems of Care and the Effect of Race/Ethnicity

    ERIC Educational Resources Information Center

    Barksdale, Crystal L.; Azur, Melissa; Daniels, Amy M.

    2010-01-01

    Behavioral and emotional strengths are important to consider when understanding youth mental health and treatment. This study examined the association between youth strengths and functional impairment and whether this association is modified by race/ethnicity. Multinomial logistic regression models were used to estimate the effects of strengths on…

  12. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu

    2015-07-07

    Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
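
    The Hall-Petch-type relation referred to above replaces the grain size with a characteristic dislocation source length λ, so that σ_y = σ_0 + k/√λ. A small sketch with purely illustrative parameter values (not fitted to the paper's data):

```python
import numpy as np

def hall_petch_yield(sigma0_mpa, k_mpa_sqrt_um, source_length_um):
    """Hall-Petch-type scaling of yield strength with the characteristic
    dislocation source length lambda (in place of grain size d):
        sigma_y = sigma_0 + k / sqrt(lambda)
    Parameter values used below are purely illustrative."""
    return sigma0_mpa + k_mpa_sqrt_um / np.sqrt(source_length_um)

# Smaller micro-pillars statistically truncate the available source lengths,
# so the predicted yield strength rises as the sample size shrinks.
for lam in (10.0, 1.0, 0.1):
    print(lam, hall_petch_yield(20.0, 150.0, lam))
```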

  13. Analysis of the load selection on the error of source characteristics identification for an engine exhaust system

    NASA Astrophysics Data System (ADS)

    Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin

    2015-05-01

    Linear time-invariant assumption for the determination of acoustic source characteristics, the source strength and the source impedance in the frequency domain has been proved reasonable in the design of an exhaust system. Different methods have been proposed to its identification and the multi-load method is widely used for its convenience by varying the load number and impedance. Theoretical error analysis has rarely been referred to and previous results have shown an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator to detect the accuracy of the results. It was found that for a certain load length, the load resistance at the frequency points of one-quarter wavelength of odd multiples results in peaks and in the maximum error for source impedance identification. Therefore, the load impedance of frequency range within the one-quarter wavelength of odd multiples should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., the same order of magnitude), the identification error of the source impedance could be effectively reduced.

  14. Nanoindentation cannot accurately predict the tensile strength of graphene or other 2D materials

    NASA Astrophysics Data System (ADS)

    Han, Jihoon; Pugno, Nicola M.; Ryu, Seunghwa

    2015-09-01

    Due to the difficulty of performing uniaxial tensile testing, the strengths of graphene and its grain boundaries have been measured in experiments by nanoindentation testing. From a series of molecular dynamics simulations, we find that the strength measured in uniaxial simulation and the strength estimated from the nanoindentation fracture force can differ significantly. Fracture in tensile loading occurs simultaneously with the onset of crack nucleation near 5-7 defects, while the graphene sheets often sustain the indentation loads after the crack initiation because the sharply concentrated stress near the tip does not give rise to enough driving force for further crack propagation. Due to the concentrated stress, strength estimation is sensitive to the indenter tip position along the grain boundaries. Also, it approaches the strength of pristine graphene if the tip is located slightly away from the grain boundary line. Our findings reveal the limitations of nanoindentation testing in quantifying the strength of graphene, and show that the loading-mode-specific failure mechanism must be taken into account in designing reliable devices from graphene and other technologically important 2D materials.

  15. Cortical Reorganisation during a 30-Week Tinnitus Treatment Program

    PubMed Central

    McMahon, Catherine M.; Ibrahim, Ronny K.; Mathur, Ankit

    2016-01-01

    Subjective tinnitus is characterised by the conscious perception of a phantom sound. Previous studies have shown that individuals with chronic tinnitus have disrupted sound-evoked cortical tonotopic maps, time-shifted evoked auditory responses, and altered oscillatory cortical activity. The main objectives of this study were to: (i) compare sound-evoked brain responses and cortical tonotopic maps in individuals with bilateral tinnitus and those without tinnitus; and (ii) investigate whether changes in these sound-evoked responses occur with amelioration of the tinnitus percept during a 30-week tinnitus treatment program. Magnetoencephalography (MEG) recordings of 12 bilateral tinnitus participants and 10 control normal-hearing subjects reporting no tinnitus were recorded at baseline, using 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz tones presented monaurally at 70 dBSPL through insert tube phones. For the tinnitus participants, MEG recordings were obtained at 5-, 10-, 20- and 30- week time points during tinnitus treatment. Results for the 500 Hz and 1000 Hz sources (where hearing thresholds were within normal limits for all participants) showed that the tinnitus participants had a significantly larger and more anteriorly located source strengths when compared to the non-tinnitus participants. During the 30-week tinnitus treatment, the participants’ 500 Hz and 1000 Hz source strengths remained higher than the non-tinnitus participants; however, the source locations shifted towards the direction recorded from the non-tinnitus control group. Further, in the left hemisphere, there was a time-shifted association between the trajectory of change of the individual’s objective (source strength and anterior-posterior source location) and subjective measures (using tinnitus reaction questionnaire, TRQ). The differences in source strength between the two groups suggest that individuals with tinnitus have enhanced central gain which is not significantly influenced by the tinnitus treatment, and may result from the hearing loss per se. On the other hand, the shifts in the tonotopic map towards the non-tinnitus participants’ source location suggests that the tinnitus treatment might reduce the disruptions in the map, presumably produced by the tinnitus percept directly or indirectly. Further, the similarity in the trajectory of change across the objective and subjective parameters after time-shifting the perceptual changes by 5 weeks suggests that during or following treatment, perceptual changes in the tinnitus percept may precede neurophysiological changes. Subgroup analyses conducted by magnitude of hearing loss suggest that there were no differences in the 500 Hz and 1000 Hz source strength amplitudes for the mild-moderate compared with the mild-severe hearing loss subgroup, although the mean source strength was consistently higher for the mild-severe subgroup. Further, the mild-severe subgroup had 500 Hz and 1000 Hz source locations located more anteriorly (i.e., more disrupted compared to the control group) compared to the mild-moderate group, although this was trending towards significance only for the 500Hz left hemisphere source. While the small numbers of participants within the subgroup analyses reduce the statistical power, this study suggests that those with greater magnitudes of hearing loss show greater cortical disruptions with tinnitus and that tinnitus treatment appears to reduce the tonotopic map disruptions but not the source strength (or central gain). PMID:26901425

  16. The Characteristics of Electromagnetic Fields Induced by Different Type Sources

    NASA Astrophysics Data System (ADS)

    Di, Q.; Fu, C.; Wang, R.; Xu, C.; An, Z.

    2011-12-01

    The controlled source audio-frequency magnetotelluric (CSAMT) method has played an important role in shallow exploration (less than 1.5 km) in the fields of resources, environment and engineering geology. In order to prospect deeper targets, one has to increase the strength of the source and the offset. However, such exploration is nearly impossible with the heavy, large-power transmitting sources required for deeper prospecting and mountainous areas. So an EM method using a fixed large-power source, such as a long bipole current source, two perpendicular "L"-shaped long bipole current sources, or a large-radius circular current source, is beginning to take shape. In order to increase the strength of the source, the length of the transmitting bipole in one direction or in perpendicular directions has to be much larger, such as L = 100 km, or the radius of the circular current source has to be much larger. The electric field strengths are IL^2 and IL^2/4π, respectively, for a long bipole source and a circular current source with the same wire length. Considering only the effectiveness of the source, the strength of the circular current source is larger than that of the long bipole source if it is large enough. However, the strength of the electromagnetic signal does not depend entirely on the transmitting source; the effect of the ionosphere on the electromagnetic (EM) field should be considered when observations are made very far (about several thousand kilometers) from the source for the long bipole source or the large-radius circular current source. We first calculate the electromagnetic fields with the traditional controlled source (CSEM) configuration using the integral equation (IE) code developed by our research group for a three-layer earth-ionosphere model consisting of ionosphere, atmosphere and earth media. The modeling results agree well with the half-space analytical results because the effect of the ionosphere for this small-scale source is negligible, which means the integral equation method is reliable and effective for modeling models that include ionosphere, atmosphere and earth media. In order to discuss the characteristics of EM fields in complicated earth-ionosphere media excited by long bipole, "L"-shaped bipole and circular current sources in the far-field and wave-guide zones, we modeled the frequency responses and decay characteristics of the EM fields for the three-layer earth-ionosphere model. Because of the effect of the ionosphere, the decay curves of the earth-ionosphere electromagnetic fields at a given frequency show that the fields Ex and Hy, excited by a long bipole or "L"-shaped bipole, include an extra wave-guide field with slower attenuation and stronger amplitude than in the half space, but the EM fields of the circular current source do not show the same characteristics; the ionosphere makes the amplitude of the EM field weaker for the circular current source. For this reason, it is better to use a long bipole source when working in the wave-guide field with a fixed large-power source.

  17. The strength of the meridional overturning circulation of the stratosphere

    PubMed Central

    Linz, Marianna; Plumb, R. Alan; Gerber, Edwin P.; Haenel, Florian J.; Stiller, Gabriele; Kinnison, Douglas E.; Ming, Alison; Neu, Jessica L.

    2017-01-01

    The distribution of gases such as ozone and water vapour in the stratosphere — which affect surface climate — is influenced by the meridional overturning of mass in the stratosphere, the Brewer–Dobson circulation. However, observation-based estimates of its global strength are difficult to obtain. Here we present two calculations of the mean strength of the meridional overturning of the stratosphere. We analyze satellite data that document the global diabatic circulation between 2007–2011, and compare these to three re-analysis data sets and to simulations with a state-of-the-art chemistry-climate model. Using measurements of sulfur hexafluoride (SF6) and nitrous oxide, we calculate the global mean diabatic overturning mass flux throughout the stratosphere. In the lower stratosphere, these two estimates agree, and at a potential temperature level of 460 K (about 20 km or 60 hPa in the tropics), the global circulation strength is 6.3–7.6 × 10^9 kg/s. Higher in the atmosphere, only the SF6-based estimate is available, and it diverges from the re-analysis data and simulations. Interpretation of the SF6 data-based estimate is limited because of a mesospheric sink of SF6; however, the reanalyses also differ substantially from each other. We conclude that the uncertainty in the mean meridional overturning circulation strength at upper levels of the stratosphere amounts to at least 100%. PMID:28966661

  18. Measuring Radiofrequency and Microwave Radiation from Varying Signal Strengths

    NASA Technical Reports Server (NTRS)

    Davis, Bette; Gaul, W. C.

    2007-01-01

    This viewgraph presentation discusses the process of measuring radiofrequency and microwave radiation from various signal strengths. The topics include: 1) Limits and Guidelines; 2) Typical Variable Standard (IEEE) Frequency Dependent; 3) FCC Standard 47 CFR 1.1310; 4) Compliance Follows Unity Rule; 5) Multiple Sources Contribute; 6) Types of RF Signals; 7) Interfering Radiations; 8) Different Frequencies Different Powers; 9) Power Summing - Peak Power; 10) Contribution from Various Single Sources; 11) Total Power from Multiple Sources; 12) Are You Out of Compliance?; and 13) In Compliance.

  19. Contribution of indoor-generated particles to residential exposure

    NASA Astrophysics Data System (ADS)

    Isaxon, C.; Gudmundsson, A.; Nordin, E. Z.; Lönnblad, L.; Dahl, A.; Wieslander, G.; Bohgard, M.; Wierzbicka, A.

    2015-04-01

    The majority of airborne particles in residences, when expressed as number concentrations, are generated by the residents themselves, through combustion/thermal related activities. These particles have a considerably smaller diameter than 2.5 μm and, due to the combination of their small size, chemical composition (e.g. soot) and intermittently very high concentrations, should be regarded as having potential to cause adverse health effects. In this study, time resolved airborne particle measurements were conducted for seven consecutive days in 22 randomly selected homes in the urban area of Lund in southern Sweden. The main purpose of the study was to analyze the influence of human activities on the concentration of particles in indoor air. Focus was on number concentrations of particles with diameters <300 nm generated by indoor activities, and how these contribute to the integrated daily residential exposure. Correlations between these particles and soot mass concentration in total dust were also investigated. It was found that candle burning and activities related to cooking (using a frying pan, oven, toaster, and their combinations) were the major particle sources. The frequency of occurrence of a given concentration indoors and outdoors was compared for ultrafine particles. Indoor data was sorted into non-occupancy and occupancy time, and the occupancy time was further divided into non-activity and activity influenced time. It was found that high levels (above 10^4 cm^-3) indoors mainly occur during active periods of occupancy, while the concentration during non-activity influenced time differs very little from non-occupancy time. Total integrated daily residential exposure of ultrafine particles was calculated for 22 homes, the contribution from known activities was 66%, from unknown activities 20%, and from background/non-activity 14%. The collected data also allowed for estimates of particle source strengths for specific activities, and for some activities it was possible to estimate correlations between the number concentration of ultrafine particles and the mass concentration of soot in total dust in 10 homes. Particle source strengths (for 7 specific activities) ranged from 1.6·10^12 to 4.5·10^12 min^-1. The correlation between ultrafine particles and mass concentration of soot in total dust varied between 0.37 and 0.85, with an average of 0.56 (Pearson correlation coefficient). This study clearly shows that due to the importance of indoor sources, residential exposure to ultrafine particles cannot be characterized by ambient measurements alone.
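
    Activity-specific source strengths of the kind reported above are commonly estimated from a single-zone, well-mixed mass balance. The sketch below uses a simplified steady-emission form of that balance; the volume, loss rate and concentrations are illustrative, and the study's own time-resolved method is more detailed:

```python
def particle_source_strength(c_peak, c_background, volume_m3,
                             emission_minutes, loss_rate_per_min):
    """Rough single-zone mass-balance estimate of a particle source strength
    (particles emitted per minute). Assumes a well-mixed room of volume V,
    a combined first-order loss rate (ventilation + deposition) L, and an
    emission period long enough that
        S ~ V * [ (C_peak - C_bg)/t_emit + L * (C_peak - C_bg) ]."""
    delta_c = (c_peak - c_background) * 1e6      # cm^-3 -> m^-3
    return volume_m3 * (delta_c / emission_minutes + loss_rate_per_min * delta_c)

# Example: peak 5e4 cm^-3 over a 5e3 cm^-3 background in a 30 m^3 kitchen,
# 10 min of frying, combined loss rate 0.03 min^-1.
print(f"{particle_source_strength(5e4, 5e3, 30.0, 10.0, 0.03):.2e}")  # ~1.8e11 min^-1
```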

  20. A new estimate of average dipole field strength for the last five million years

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Tauxe, L.; Halldorsson, S. A.

    2013-12-01

    The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD) where the average field intensity is twice as strong at the poles as at the equator. The present day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm^2. Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm^2 and 59.5 ZAm^2, respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm^2 (2), and some other estimates of average VADM based on paleointensities strictly from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experiment methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement level data of IZZI-modified paleointensity experiments from lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier Gui paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows where the expected magnetic field strength was accurately recovered when following certain selection parameters. Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and temporal distributions, and objectively constrains site level estimates by applying uniform selection requirements on measurement level data. (1) Lawrence, K.P., L. Tauxe, H. Staudigel, C.G. Constable, A. Koppers, W. McIntosh, C.L. Johnson, Paleomagnetic field properties at high southern latitude, Geochemistry Geophysics Geosystems, 10, 2009. (2) Selkin, P.A., L. Tauxe, Long-term variations in palaeointensity, Phil. Trans. R. Soc. Lond., 358, 1065-1088, 2000. (3) Shaar, R., L. Tauxe, Thellier GUI: An integrated tool for analyzing paleointensity data from Thellier-type experiments, Geochemistry Geophysics Geosystems, 14, 2013.
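
    A VADM of the kind quoted above is obtained from a site paleointensity and its latitude through the axial-dipole field relation B = (μ0 m / 4πr^3)·√(1 + 3 sin^2 λ). A minimal sketch (the example intensity and latitude are illustrative):

```python
import math

MU0 = 4e-7 * math.pi   # H/m
R_EARTH = 6.371e6      # m

def vadm(b_anc_tesla, latitude_deg):
    """Virtual axial dipole moment (A m^2) from a paleointensity B_anc measured
    at geographic latitude lambda, using the axial-dipole field relation
        B = (mu0 * m / (4*pi*r^3)) * sqrt(1 + 3*sin^2(lambda)).
    Solving for m gives the VADM."""
    lam = math.radians(latitude_deg)
    return 4.0 * math.pi * R_EARTH**3 * b_anc_tesla / (
        MU0 * math.sqrt(1.0 + 3.0 * math.sin(lam)**2))

# Example: a 35 microtesla paleointensity recorded at 20 degrees latitude.
print(vadm(35e-6, 20.0) / 1e21, "ZAm^2")  # roughly 78 ZAm^2
```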

  1. Unequal-strength source zROC slopes reflect criteria placement and not (necessarily) memory processes

    PubMed Central

    Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.

    2014-01-01

    Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose two decision mechanisms that can produce the slope effect, and we test them in three experiments. The evidence mixing account assumes that people change how they weight item versus source evidence based on which source is stronger, and the converging criteria account assumes that participants become more willing to make high confidence source responses for test probes that have higher item strength. Results failed to support the evidence mixing account, in that the slope effect emerged even when item evidence was not informative for the source judgment (that is, in tests that included strong and weak items from both sources). In contrast, results showed strong support for the converging criteria account. This account not only accommodated the unequal-strength slope effect, but also made a prediction for unstudied (new) items that was empirically confirmed: participants made more high confidence source responses for new items when they were more confident that the item was studied. The converging criteria account has an advantage over accounts based on source recollection or evidence variability, as the latter accounts do not predict the relationship between recognition and source confidence for new items. PMID:23565789

  2. The solar wind as a possible source of fast temporal variations of the heliospheric ribbon

    DOE PAGES

    Kucharek, H.; Fuselier, S. A.; Wurz, P.; ...

    2013-10-04

    Here we present a possible source of pickup ions (PUIs) for the ribbon observed by the Interstellar Boundary EXplorer (IBEX). We suggest that a gyrating solar wind and PUIs in the ramp and in the near downstream region of the termination shock (TS) could provide a significant source of energetic neutral atoms (ENAs) in the ribbon. A fraction of the solar wind and PUIs are reflected and energized during the first contact with the TS. Some of the solar wind may be reflected propagating toward the Sun, but most of the solar wind ions form a gyrating beam-like distribution that persists until it is fully thermalized further downstream. Depending on the strength of the shock, these gyrating distributions can exist for many gyration periods until they are scattered/thermalized due to wave-particle interactions at the TS and downstream in the heliosheath. During this time, ENAs can be produced by charge exchange of interstellar neutral atoms with the gyrating ions. In order to determine the flux of energetic ions, we estimate the solar wind flux at the TS using pressure estimates inferred from in situ measurements. Assuming an average path length in the radial direction of the order of a few AU before the distribution of gyrating ions is thermalized, one can explain a significant fraction of the intensity of ENAs in the ribbon observed by IBEX. In conclusion, with a localized source and such a short integration path, this model would also allow fast time variations of the ENA flux.

  3. Strength Estimation for Hydrate-Bearing Sediments From Direct Shear Tests of Hydrate-Bearing Sand and Silt

    NASA Astrophysics Data System (ADS)

    Liu, Zhichao; Dai, Sheng; Ning, Fulong; Peng, Li; Wei, Houzhen; Wei, Changfu

    2018-01-01

    Safe and economic methane gas production, as well as the replacement of methane while sequestering carbon in natural hydrate deposits, requires enhanced geomechanical understanding of the strength and volume responses of hydrate-bearing sediments during shear. This study employs a custom-made apparatus to investigate the mechanical and volumetric behaviors of carbon dioxide hydrate-bearing sediments subjected to direct shear. The results show that both peak and residual strengths increase with increased hydrate saturation and vertical stress. Hydrate contributes mainly the cohesion and dilatancy constraint to the peak strength of hydrate-bearing sediments. The postpeak strength reduction is more evident and brittle in specimens with higher hydrate saturation and under lower stress. Significant strength reduction after shear failure is expected in silty sediments with high hydrate saturation Sh ≥ 0.65. Hydrate contribution to the residual strength is mainly by increasing cohesion at low hydrate saturation and friction at high hydrate saturation. Stress state and hydrate saturation are dominating both the stiffness and the strength of hydrate-bearing sediments; thus, a wave velocity-based peak strength prediction model is proposed and validated, which allows for precise estimation of the shear strength of hydrate-bearing sediments through acoustic logging data. This method is advantageous to geomechanical simulators, particularly when the experimental strength data of natural samples are not available.

  4. The utility of medico-legal databases for public health research: a systematic review of peer-reviewed publications using the National Coronial Information System.

    PubMed

    Bugeja, Lyndal; Ibrahim, Joseph E; Ferrah, Noha; Murphy, Briony; Willoughby, Melissa; Ranson, David

    2016-04-12

    Medico-legal death investigations are a recognised data source for public health endeavours and its accessibility has increased following the development of electronic data systems. Despite time and cost savings, the strengths and limitations of this method and impact on research findings remain untested. This study examines this issue using the National Coronial Information System (NCIS). PubMed, ProQuest and Informit were searched to identify publications where the NCIS was used as a data source for research published during the period 2000-2014. A descriptive analysis was performed to describe the frequency and characteristics of the publications identified. A content analysis was performed to identify the nature and impact of strengths and limitations of the NCIS as reported by researchers. Of the 106 publications included, 30 reported strengths and limitations, 37 reported limitations only, seven reported strengths only and 32 reported neither. The impact of the reported strengths of the NCIS was described in 14 publications, whilst 46 publications discussed the impacts of limitations. The NCIS was reported to be a reliable source of quality, detailed information with comprehensive coverage of deaths of interest, making it a powerful injury surveillance tool. Despite these strengths, researchers reported that open cases and missing information created the potential for selection and reporting biases and may preclude the identification and control of confounders. To ensure research results are valid and inform health policy, it is essential to consider and seek to overcome the limitations of data sources that may have an impact on results.

  5. Borrowing of strength and study weights in multivariate and network meta-analysis.

    PubMed

    Jackson, Dan; White, Ian R; Price, Malcolm; Copas, John; Riley, Richard D

    2017-12-01

    Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of 'borrowing of strength'. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis).

  6. Morphology and the Strength of Intermolecular Contact in Protein Crystals

    NASA Technical Reports Server (NTRS)

    Matsuura, Yoshiki; Chernov, Alexander A.

    2002-01-01

    The strengths of intermolecular contacts (macrobonds) in four lysozyme crystals were estimated based on the strengths of individual intermolecular interatomic interaction pairs. The periodic bond chain of these macrobonds accounts for the morphology of protein crystals as shown previously. Further in this paper, the surface area of contact, polar coordinate representation of contact site, Coulombic contribution on the macrobond strength, and the surface energy of the crystal have been evaluated. Comparing location of intermolecular contacts in different polymorphic crystal modifications, we show that these contacts can form a wide variety of patches on the molecular surface. The patches are located practically everywhere on this surface except for the concave active site. The contacts frequently include water molecules, with specific intermolecular hydrogen-bonds on the background of non-specific attractive interactions. The strengths of macrobonds are also compared to those of other protein complex systems. Making use of the contact strengths and taking into account bond hydration we also estimated crystal-water interfacial energies for different crystal faces.

  7. Correct Effect Size Estimates for Strength of Association Statistics: Comment on Odgaard and Fowler (2010)

    ERIC Educational Resources Information Center

    Lerner, Matthew D.; Mikami, Amori Yee

    2013-01-01

    Odgaard and Fowler (2010) articulated the importance of reporting confidence intervals (CIs) on effect size estimates, and they provided useful formulas for doing so. However, one of their reported formulas, pertaining to the calculation of CIs on strength of association effect sizes (e.g., R[squared] or [eta][squared]), is erroneous. This comment…

  8. Chapter 3:Sorting red maple logs for structural quality

    Treesearch

    Xiping Wang

    2005-01-01

    Nondestructive evaluation (NDE) of wood materials has a long history of application in the wood products industry. Visual grading of lumber is perhaps one of the earliest NDE forms. Visual assessment of a piece of lumber requires the grader to estimate a strength ratio on the basis of observed external defects (USDA 1999). The ratio is used to estimate the strength of...

  9. Statistical technique for analysing functional connectivity of multiple spike trains.

    PubMed

    Masud, Mohammad Shahed; Borisyuk, Roman

    2011-03-15

    A new statistical technique, the Cox method, used for analysing functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and it estimates a vector of influence strengths from multiple spike trains (called reference trains) to the selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (binless method); it is applicable to cases where the sample size is small; it is sufficiently sensitive such that it estimates weak influences; it supports the simultaneous analysis of multiple influences; it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by the neural network model of the leaky integrate and fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing functional connectivity of simultaneously recorded multiple spike trains. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Subduction and volatile recycling in Earth's mantle

    NASA Technical Reports Server (NTRS)

    King, S. D.; Ita, J. J.; Staudigel, H.

    1994-01-01

    The subduction of water and other volatiles into the mantle from oceanic sediments and altered oceanic crust is the major source of volatile recycling in the mantle. Until now, the geotherms that have been used to estimate the amount of volatiles that are recycled at subduction zones have been produced using the hypothesis that the slab is rigid and undergoes no internal deformation. On the other hand, most fluid dynamical mantle flow calculations assume that the slab has no greater strength than the surrounding mantle. Both of these views are inconsistent with laboratory work on the deformation of mantle minerals at high pressures. We consider the effects of the strength of the slab using two-dimensional calculations of a slab-like thermal downwelling with an endothermic phase change. Because the rheology and composition of subducting slabs are uncertain, we consider a range of Clapeyron slopes which bound current laboratory estimates of the spinel to perovskite plus magnesiowustite phase transition and simple temperature-dependent rheologies based on an Arrhenius law diffusion mechanism. In uniform viscosity convection models, subducted material piles up above the phase change until the pile becomes gravitationally unstable and sinks into the lower mantle (the avalanche). Strong slabs moderate the 'catastrophic' effects of the instabilities seen in many constant-viscosity convection calculations; however, even in the strongest slabs we consider, there is some retardation of the slab descent due to the presence of the phase change.

  11. Increased sink strength offsets the inhibitory effect of sucrose on sugarcane photosynthesis.

    PubMed

    Ribeiro, Rafael V; Machado, Eduardo C; Magalhães Filho, José R; Lobo, Ana Karla M; Martins, Márcio O; Silveira, Joaquim A G; Yin, Xinyou; Struik, Paul C

    2017-01-01

    Spraying sucrose inhibits photosynthesis by impairing Rubisco activity and stomatal conductance (g_s), whereas increasing sink demand by partially darkening the plant stimulates sugarcane photosynthesis. We hypothesized that the stimulatory effect of darkness can offset the inhibitory effect of exogenous sucrose on photosynthesis. The source-sink relationship was perturbed in two sugarcane cultivars by imposing partial darkness, spraying a sucrose solution (50 mM) and their combination. Five days after the onset of the treatments, the maximum Rubisco carboxylation rate (V_cmax) and the initial slope of the A-C_i curve (k) were estimated by measuring leaf gas exchange and chlorophyll fluorescence. Photosynthesis was inhibited by sucrose spraying in both genotypes, through decreases in V_cmax, k, g_s and ATP production driven by electron transport (J_ATP). Photosynthesis of plants subjected to the combination of partial darkness and sucrose spraying was similar to photosynthesis of reference plants for both genotypes. Significant increases in V_cmax, g_s and J_ATP and marginal increases in k were noticed when combining partial darkness and sucrose spraying compared with sucrose spraying alone. Our data also revealed that increases in sink strength due to partial darkness offset the inhibition of sugarcane photosynthesis caused by sucrose spraying, enhancing the knowledge on endogenous regulation of sugarcane photosynthesis through the source-sink relationship. Copyright © 2016 Elsevier GmbH. All rights reserved.

  12. Performance of improved bacterial cellulose application in the production of functional paper.

    PubMed

    Basta, A H; El-Saied, H

    2009-12-01

    The purpose of this work was to study the feasibility of producing economic flame retardant bacterial cellulose (BC) and evaluating its behaviour in paper production. This type of BC was prepared with Gluconacetobacter subsp. xylinus by substituting the glucose in the cultivation medium with glucose phosphate as a carbon source, as well as using corn steep liquor as a nitrogen source. The investigated processing technique did not release any toxic chemicals that pollute the surroundings or produce unacceptable effluents, making the process environmentally safe. The fire retardant behaviour of the investigated BC has been studied by non-isothermal thermogravimetric analysis (TGA & DTGA). The activation energy of each degradation stage and the order of degradation were estimated using the Coats-Redfern equation and the least-squares method. Strength, optical properties, and thermogravimetric analysis of BC-phosphate added paper sheets were also tested. The study confirmed that the use of glucose phosphate along with glucose was significant in the high yield production of phosphate containing bacterial cellulose (PCBC1); more so than the use of glucose phosphate alone (PCBC2). Incorporating 5% of the PCBC with wood pulp during paper sheet formation was found to significantly improve kaolin retention, strength, and fire resistance properties as compared to paper sheets produced by incorporating bacterial cellulose (BC). This modified BC is a valuable product for the preparation of specialized paper, in addition to its function as a filler aid.
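
    The Coats-Redfern estimate mentioned above linearises the first-order (n = 1) integral rate law so that the activation energy follows from a straight-line fit of ln[-ln(1-α)/T^2] against 1/T. A minimal sketch with synthetic data generated directly from the linearised form (values are illustrative only; the heating rate only enters the intercept, so it is not needed for Ea):

```python
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

def coats_redfern_ea(temperature_k, alpha):
    """Coats-Redfern activation energy for a first-order (n = 1) step:
        ln[-ln(1 - alpha) / T^2] = ln(A*R/(beta*Ea)) - Ea/(R*T)
    A linear fit of the left-hand side against 1/T gives Ea from the slope."""
    T = np.asarray(temperature_k, dtype=float)
    a = np.asarray(alpha, dtype=float)
    y = np.log(-np.log(1.0 - a) / T**2)
    slope, _ = np.polyfit(1.0 / T, y, 1)
    return -slope * R_GAS / 1e3   # kJ/mol

# Synthetic data built from the linearised form with Ea = 120 kJ/mol.
T = np.linspace(550.0, 650.0, 11)
alpha = 1.0 - np.exp(-T**2 * np.exp(5.0 - 120e3 / (R_GAS * T)))
print(coats_redfern_ea(T, alpha))  # ~120.0
```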

  13. 10Be constrains the sediment sources and sediment yields to the Great Barrier Reef from the tropical Barron River catchment, Queensland, Australia

    NASA Astrophysics Data System (ADS)

    Nichols, K. K.; Bierman, P. R.; Rood, D. H.

    2014-12-01

    Estimates of long-term, background sediment generation rates place current and future sediment fluxes to the Great Barrier Reef in context. Without reliable estimates of sediment generation rates and without identification of the sources of sediment delivered to the reef prior to European settlement (c. 1850), determining the necessity and effectiveness of contemporary landscape management efforts is difficult. Using the ~2100-km^2 Barron River catchment in Queensland, Australia, as a test case, we use in situ-produced 10Be to derive sediment generation rate estimates and use in situ and meteoric 10Be to identify the source of that sediment, which enters the Coral Sea near Cairns. Previous model-based calculations suggested that background sediment yields were up to an order of magnitude lower than contemporary sediment yields. In contrast, in situ 10Be data indicate that background (43 t km^-2 y^-1) and contemporary sediment yields (~45 t km^-2 y^-1) for the Barron River are similar. These data suggest that the reef became established in a sediment flux similar to what it receives today. Since western agricultural practices increased erosion rates, large amounts of sediment mobilized from hillslopes during the last century are probably stored in Queensland catchments and will eventually be transported to the coast, most likely in flows triggered by rare but powerful tropical cyclones that were more common before European settlement and may increase in strength as climate change warms the south Pacific Ocean. In situ and meteoric 10Be concentrations of Coral Sea beach sand near Cairns are similar to those in rivers on the Atherton Tablelands, suggesting that most sediment is derived from the extensive, low-gradient uplands rather than the steep, more rapidly eroding but beach proximal escarpment.
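
    Catchment-averaged sediment generation rates of the kind quoted above follow from the steady-state in situ 10Be relation ε = (P/N − λ)Λ/ρ. A minimal sketch; the production rate and concentration below are illustrative values chosen to reproduce the order of magnitude of the quoted yield, not the paper's measurements:

```python
def be10_erosion_and_yield(n_atoms_g, p_atoms_g_yr,
                           attenuation_g_cm2=160.0, density_g_cm3=2.7,
                           decay_per_yr=5.0e-7):
    """Steady-state catchment-averaged erosion rate from in situ 10Be:
        epsilon = (P/N - lambda) * Lambda / rho    [cm/yr]
    and the corresponding sediment yield in t km^-2 yr^-1 (epsilon * rho * 1e4).
    P is the surface production rate (atoms g^-1 yr^-1), N the measured
    concentration (atoms g^-1), Lambda the attenuation mass depth, rho the
    rock density, and lambda the 10Be decay constant."""
    eps_cm_yr = (p_atoms_g_yr / n_atoms_g - decay_per_yr) * \
        attenuation_g_cm2 / density_g_cm3
    return eps_cm_yr, eps_cm_yr * density_g_cm3 * 1.0e4

# Example: P = 5 atoms/g/yr and N = 1.8e5 atoms/g give ~1.6e-3 cm/yr,
# i.e. roughly 43 t km^-2 yr^-1, the order of the quoted background yield.
print(be10_erosion_and_yield(1.8e5, 5.0))
```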

  14. Optical sensor of magnetic fields

    DOEpatents

    Butler, M.A.; Martin, S.J.

    1986-03-25

    An optical magnetic field strength sensor for measuring the field strength of a magnetic field comprising a dilute magnetic semi-conductor probe having first and second ends, longitudinally positioned in the magnetic field for providing Faraday polarization rotation of light passing therethrough relative to the strength of the magnetic field. Light provided by a remote light source is propagated through an optical fiber coupler and a single optical fiber strand between the probe and the light source for providing a light path therebetween. A polarizer and an apparatus for rotating the polarization of the light is provided in the light path and a reflector is carried by the second end of the probe for reflecting the light back through the probe and thence through the polarizer to the optical coupler. A photo detector apparatus is operably connected to the optical coupler for detecting and measuring the intensity of the reflected light and comparing same to the light source intensity whereby the magnetic field strength may be calculated.
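    The sensor above infers field strength from the Faraday rotation accumulated over a double pass through the probe. The sketch below illustrates that conversion; the Verdet constant and probe length are assumed illustration values, not taken from the patent.

```python
# Hedged sketch: recovering field strength from the measured Faraday rotation in a
# reflective (double-pass) probe, theta = 2 * V * B * L.
import math

VERDET = 100.0   # Verdet constant, rad T^-1 m^-1 (assumed)
LENGTH = 0.005   # probe length, m (assumed)

def field_from_rotation(theta_rad, verdet=VERDET, length=LENGTH):
    """Invert the double-pass Faraday rotation to get the magnetic flux density in tesla."""
    return theta_rad / (2.0 * verdet * length)

theta = math.radians(2.0)   # example measured polarization rotation
print(f"B ~ {field_from_rotation(theta):.3f} T")
```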

  15. Source analysis of beta-synchronisation and cortico-muscular coherence after movement termination based on high resolution electroencephalography.

    PubMed

    Muthuraman, Muthuraman; Tamás, Gertrúd; Hellriegel, Helge; Deuschl, Günther; Raethjen, Jan

    2012-01-01

    We hypothesized that post-movement beta synchronization (PMBS) and cortico-muscular coherence (CMC) during movement termination relate to each other and have a similar role in sensorimotor integration. We calculated the parameters and estimated the sources of these phenomena. We measured 64-channel EEG simultaneously with surface EMG of the right first dorsal interosseus muscle in 11 healthy volunteers. In Task1, subjects kept a medium-strength contraction continuously; in Task2, superimposed on this movement, they performed repetitive self-paced short contractions. In Task3, short contractions were executed alone. Time-frequency analysis of the EEG and CMC was performed with respect to the offset of brisk movements and averaged in each subject. Sources of PMBS and CMC were also calculated. High beta power in Task1, PMBS in Task2-3, and CMC in Task1-2 could be observed in the same individual frequency bands. While beta synchronization in Task1 and PMBS in Task2-3 appeared bilateral with contralateral predominance, CMC in Task1-2 was a strictly unilateral phenomenon; their main sources did not differ, lying contralateral to the movement in the primary sensorimotor cortex in 7 of 11 subjects in Task1 and in 6 of 9 subjects in Task2. In Task2, CMC and PMBS had the same latency, but their amplitudes did not correlate with each other. In Task2, a weaker PMBS source was found bilaterally within the secondary sensory cortex, while the second source of CMC was detected in the premotor cortex, contralateral to the movement. In Task3, weaker sources of PMBS could be estimated in the bilateral supplementary motor cortex and in the thalamus. PMBS and CMC appear simultaneously at the end of a phasic movement, possibly suggesting similar antikinetic effects, but they may be separate processes with different active functions. Whereas PMBS seems to reset the supraspinal sensorimotor network, cortico-muscular coherence may represent the recalibration of cortico-motoneuronal and spinal systems.
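    As a minimal sketch of the coherence computation underlying a CMC analysis, the example below estimates magnitude-squared coherence between a synthetic EEG channel and a synthetic rectified EMG trace sharing a beta-band drive. All signals and parameters are illustrative, not the study's data or pipeline.

```python
# Hedged sketch: beta-band coherence between synthetic EEG and rectified EMG signals.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                   # sampling rate, Hz (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / fs)

drive = np.sin(2 * np.pi * 22 * t)            # shared 22 Hz beta-band drive
eeg = drive + rng.normal(0.0, 2.0, t.size)    # noisy EEG channel
emg = np.abs(0.5 * drive + rng.normal(0.0, 2.0, t.size))  # rectified EMG

f, Cxy = coherence(eeg, emg, fs=fs, nperseg=1024)
beta = (f >= 13) & (f <= 30)
print(f"peak beta-band coherence {Cxy[beta].max():.2f} "
      f"at {f[beta][np.argmax(Cxy[beta])]:.1f} Hz")
```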

  16. Weibull models of fracture strengths and fatigue behavior of dental resins in flexure and shear.

    PubMed

    Baran, G R; McCool, J I; Paul, D; Boberick, K; Wunder, S

    1998-01-01

    In estimating lifetimes of dental restorative materials, it is useful to have available data on the fatigue behavior of these materials. Current efforts at estimation include several untested assumptions related to the equivalence of flaw distributions sampled by shear, tensile, and compressive stresses. Environmental influences on material properties are not accounted for, and it is unclear if fatigue limits exist. In this study, the shear and flexural strengths of three resins used as matrices in dental restorative composite materials were characterized by Weibull parameters. It was found that shear strengths were lower than flexural strengths, liquid sorption had a profound effect on characteristic strengths, and the Weibull shape parameter obtained from shear data differed for some materials from that obtained in flexure. In shear and flexural fatigue, a power law relationship applied for up to 250,000 cycles; no fatigue limits were found, and the data thus imply only one flaw population is responsible for failure. Again, liquid sorption adversely affected strength levels in most materials (decreasing shear strengths and flexural strengths by factors of 2-3) and to a greater extent than did the degree of cure or material chemistry.
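    The Weibull characterization named above boils down to estimating a shape parameter (the Weibull modulus) and a scale parameter (the characteristic strength) from a set of strength measurements. The sketch below shows one common way to obtain them by maximum likelihood; the strength values are placeholders, not the study's data, and this is not necessarily the estimator the authors used.

```python
# Hedged sketch: two-parameter Weibull fit (location fixed at zero) to strength data.
import numpy as np
from scipy.stats import weibull_min

strengths_mpa = np.array([78.0, 85.0, 91.0, 96.0, 102.0, 108.0, 115.0, 121.0])  # assumed

shape, loc, scale = weibull_min.fit(strengths_mpa, floc=0)
print(f"Weibull modulus (shape) ~ {shape:.2f}, characteristic strength ~ {scale:.1f} MPa")
```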

  17. Effect of wear on the burst strength of l-80 steel casing

    NASA Astrophysics Data System (ADS)

    Irawan, S.; Bharadwaj, A. M.; Temesgen, B.; Karuppanan, S.; Abdullah, M. Z. B.

    2015-12-01

    Casing wear has recently become one of the areas of research interest in the oil and gas industry, especially in extended-reach well drilling. The burst strength of a worn casing is one of the most significantly affected mechanical properties, yet it remains an area where little research has been done. The equations most commonly used to calculate the resulting burst strength after wear are the Barlow, initial yield burst, full yield burst, and rupture burst equations. The objective of this study was to estimate casing burst strength after wear through Finite Element Analysis (FEA). It included calculation and comparison of the different theoretical burst pressures with the simulation results, along with the effect of different wear shapes on L-80 casing material. The von Mises stress was used in the estimation of the burst pressure. The results obtained show that the casing burst strength decreases as the wear percentage increases. Moreover, the burst strength of the casing obtained from the FEA is higher than the theoretical burst strength values. Casing with crescent-shaped wear gives the highest burst strength value when simulated under nonlinear analysis.
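    Of the theoretical formulas named above, the Barlow equation is the simplest, and it shows directly how wall loss lowers burst pressure. The sketch below evaluates it for several wear fractions; the yield strength and dimensions are nominal illustration values, not the paper's FEA inputs.

```python
# Hedged sketch: Barlow burst pressure for a casing with assumed uniform wall loss.
def barlow_burst_mpa(yield_mpa, wall_mm, od_mm):
    """Barlow equation (thin-wall approximation): P = 2 * S * t / D."""
    return 2.0 * yield_mpa * wall_mm / od_mm

S, t0, D = 552.0, 10.0, 244.5   # assumed L-80 yield strength (MPa), wall (mm), OD (mm)
for wear in (0.0, 0.1, 0.2, 0.3):
    print(f"{wear:4.0%} wall loss -> burst ~ {barlow_burst_mpa(S, t0 * (1 - wear), D):.1f} MPa")
```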

  18. Estimation of Ultimate Tensile Strength of dentin Using Finite Element Analysis from Endodontically Treated Tooth

    NASA Astrophysics Data System (ADS)

    Sinthaworn, S.; Puengpaiboon, U.; Warasetrattana, N.; Wanapaisarn, S.

    2018-01-01

    Endodontically treated teeth were simulated by finite element analysis in order to estimate the ultimate tensile strength of dentin. The modeled endodontically treated tooth cases had a flared root canal restored with different numbers of fiber posts: a resin composite core without a fiber post (group 1), a No. 3 fiber post with a resin composite core (group 2), and a No. 3 fiber post plus two accessory No. 0 fiber posts with a resin composite core (group 3). The elastic modulus and Poisson's ratio of the materials were taken from the literature. The models were loaded with the average fracture resistance load of each group (group 1: 361.80 N, group 2: 559.46 N, group 3: 468.48 N) at a 135-degree angulation with respect to the longitudinal axis of the teeth. The stress analysis and experiments confirm that the fracture zone lies in the dentin. To estimate the ultimate tensile strength of dentin, trial values of ultimate tensile strength were tested until the factor of safety (FOS) equaled 1.00. The results reveal that the ultimate tensile strengths of dentin for groups 1, 2, and 3 are 38.89, 30.96, and 37.19 MPa, respectively.

  19. Does the light source affect the repairability of composite resins?

    PubMed

    Karaman, Emel; Gönülol, Nihan

    2014-01-01

    The aim of this study was to examine the effect of the light source on the microshear bond strength of different composite resins repaired with the same substrate. Thirty cylindrical specimens of each composite resin--Filtek Silorane, Filtek Z550 (3M ESPE), Gradia Direct Anterior (GC), and Aelite Posterior (BISCO)--were prepared and light-cured with a QTH light curing unit (LCU). The specimens were aged by thermal cycling and divided into three subgroups according to the light source used--QTH, LED, or PAC (n = 10). They were repaired with the same substrate and a Clearfil Repair Kit (Kuraray). The specimens were light-cured and aged for 1 week in distilled water at 37 °C. The microshear bond strength and failure modes were assessed. There was no significant difference in the microshear bond strength values among the composite resins, except for the Filtek Silorane group that showed significantly lower bond strength values when polymerized with the PAC unit compared to the QTH or LED unit. In conclusion, previously placed dimethacrylate-based composites can be repaired with different light sources; however, if the composite to be repaired is silorane-based, then using a QTH or LED device may be the best option.

  20. Carbon source-sink limitations differ between two species with contrasting growth strategies.

    PubMed

    Burnett, Angela C; Rogers, Alistair; Rees, Mark; Osborne, Colin P

    2016-11-01

    Understanding how carbon source and sink strengths limit plant growth is a critical knowledge gap that hinders efforts to maximize crop yield. We investigated how differences in growth rate arise from source-sink limitations, using a model system comparing a fast-growing domesticated annual barley (Hordeum vulgare cv. NFC Tipple) with a slow-growing wild perennial relative (Hordeum bulbosum). Source strength was manipulated by growing plants at sub-ambient and elevated CO2 concentrations ([CO2]). Limitations on vegetative growth imposed by source and sink were diagnosed by measuring relative growth rate, developmental plasticity, photosynthesis and major carbon and nitrogen metabolite pools. Growth was sink limited in the annual but source limited in the perennial. RGR and carbon acquisition were higher in the annual, but photosynthesis responded weakly to elevated [CO2] indicating that source strength was near maximal at current [CO2]. In contrast, photosynthetic rate and sink development responded strongly to elevated [CO2] in the perennial, indicating significant source limitation. Sink limitation was avoided in the perennial by high sink plasticity: a marked increase in tillering and root:shoot ratio at elevated [CO2], and lower non-structural carbohydrate accumulation. Alleviating sink limitation during vegetative development could be important for maximizing growth of elite cereals under future elevated [CO2]. © 2016 John Wiley & Sons Ltd.

  1. Analysis of the influencing factors of PAEs volatilization from typical plastic products.

    PubMed

    Chen, Weidong; Chi, Chenchen; Zhou, Chen; Xia, Meng; Ronda, Cees; Shen, Xueyou

    2018-04-01

    The primary emphasis of this research was to investigate the foundations of phthalate (PAE) pollutant source research and, for the first time, to establish the concept of the coefficient of volatile strength, namely the total phthalate content per unit mass and unit surface area of a pollutant source. By surveying and evaluating the coefficients of volatile strength of PAEs from typical plastic products, this research classified PAE pollutant sources into three categories and then investigated the relationship among the coefficient of volatile strength, other environmental factors, and the total PAE concentration in indoor air measured in environmental chambers. Phthalate concentration results were obtained under different temperatures, humidities, coefficients of volatile strength, and closed times through the chamber experiments. In addition, this study further explored the correlation and relative contribution of the factors that affect the total PAE concentration in the environmental chambers, including environmental factors, the coefficients of volatile strength of PAEs, and the total PAE contents of the plastic products. The research created an improved database of the phthalate coefficients of volatile strength for each type of plastic product, tentatively revealed the volatilization patterns of PAEs from different typical plastic products, and confirmed that the coefficient of volatile strength of PAEs is a major factor affecting the total indoor air PAE concentration, laying a solid foundation for further establishing a volatilization equation for PAEs from plastic products. Copyright © 2017. Published by Elsevier B.V.

  2. Glottal aerodynamics in compliant, life-sized vocal fold models

    NASA Astrophysics Data System (ADS)

    McPhail, Michael; Dowell, Grant; Krane, Michael

    2013-11-01

    This talk presents high-speed PIV measurements in compliant, life-sized models of the vocal folds. A clearer understanding of the fluid-structure interaction of voiced speech, how it produces sound, and how it varies with pathology is required to improve clinical diagnosis and treatment of vocal disorders. Physical models of the vocal folds can answer questions regarding the fundamental physics of speech, as well as the ability of clinical measures to detect the presence and extent of disorder. Flow fields were recorded in the supraglottal region of the models to estimate terms in the equations of fluid motion, and their relative importance. Experiments were conducted over a range of driving pressures, with flow rates (measured with a ball flowmeter) and subglottal pressures (measured with a micro-manometer) reported for each case. Imaging of vocal fold motion, vector fields showing glottal jet behavior, and terms estimated by control volume analysis will be presented. The use of these results for a comparison with clinical measures, and for the estimation of aeroacoustic source strengths, will be discussed. We acknowledge support from NIH R01 DC005642.

  3. Extraversion, Neuroticism and Strength of the Nervous System

    ERIC Educational Resources Information Center

    Frigon, Jean-Yves

    1976-01-01

    The hypothesized identity of the dimensions of extraversion-introversion and strength of the nervous system was tested on four groups of nine subjects (neurotic extraverts, stable extraverts, neurotic introverts, stable introverts). Strength of the subjects' nervous system was estimated using the electroencephalographic (EEG) variant of extinction…

  4. Direct estimation of evoked hemoglobin changes by multimodality fusion imaging

    PubMed Central

    Huppert, Theodore J.; Diamond, Solomon G.; Boas, David A.

    2009-01-01

    In the last two decades, both diffuse optical tomography (DOT) and blood oxygen level dependent (BOLD)-based functional magnetic resonance imaging (fMRI) methods have been developed as noninvasive tools for imaging evoked cerebral hemodynamic changes in studies of brain activity. Although these two technologies measure functional contrast from similar physiological sources, i.e., changes in hemoglobin levels, they are based on distinct physical and biophysical principles, leading to different limitations and strengths for each method. In this work, we describe a unified linear model to combine the complementary spatial, temporal, and spectroscopic resolutions of concurrently measured optical tomography and fMRI signals. Using numerical simulations, we demonstrate that concurrent optical and BOLD measurements can be used to create cross-calibrated estimates of absolute micromolar deoxyhemoglobin changes. We apply this new analysis tool to experimental data acquired simultaneously with both DOT and BOLD imaging during a motor task, demonstrate the ability to more robustly estimate hemoglobin changes in comparison to DOT alone, and show how this approach can provide cross-calibrated estimates of hemoglobin changes. Using this multimodal method, we estimate the calibration of the 3 tesla BOLD signal to be −0.55% ± 0.40% signal change per micromolar change of deoxyhemoglobin. PMID:19021411

  5. Adaptations in humans for assessing physical strength from the voice

    PubMed Central

    Sell, Aaron; Bryant, Gregory A.; Cosmides, Leda; Tooby, John; Sznycer, Daniel; von Rueden, Christopher; Krauss, Andre; Gurven, Michael

    2010-01-01

    Recent research has shown that humans, like many other animals, have a specialization for assessing fighting ability from visual cues. Because it is probable that the voice contains cues of strength and formidability that are not available visually, we predicted that selection has also equipped humans with the ability to estimate physical strength from the voice. We found that subjects accurately assessed upper-body strength in voices taken from eight samples across four distinct populations and language groups: the Tsimane of Bolivia, Andean herder-horticulturalists and United States and Romanian college students. Regardless of whether raters were told to assess height, weight, strength or fighting ability, they produced similar ratings that tracked upper-body strength independent of height and weight. Male voices were more accurately assessed than female voices, which is consistent with ethnographic data showing a greater tendency among males to engage in violent aggression. Raters extracted information about strength from the voice that was not supplied from visual cues, and were accurate with both familiar and unfamiliar languages. These results provide, to our knowledge, the first direct evidence that both men and women can accurately assess men's physical strength from the voice, and suggest that estimates of strength are used to assess fighting ability. PMID:20554544

  6. Historical emissions critical for mapping decarbonization pathways

    NASA Astrophysics Data System (ADS)

    Majkut, J.; Kopp, R. E.; Sarmiento, J. L.; Oppenheimer, M.

    2016-12-01

    Policymakers have set a goal of limiting temperature increase from human influence on the climate. This motivates the identification of decarbonization pathways to stabilize atmospheric concentrations of CO2. In this context, the future behavior of CO2 sources and sinks defines the CO2 emissions necessary to meet warming thresholds with specified probabilities. We adopt a simple model of the atmosphere-land-ocean carbon balance to reflect uncertainty in how natural CO2 sinks will respond to increasing atmospheric CO2 and temperature. Bayesian inversion is used to estimate the probability distributions of selected parameters of the carbon model. Prior probability distributions are chosen to reflect the behavior of CMIP5 models. We then update these prior distributions by running historical simulations of the global carbon cycle and inverting with observationally-based inventories and fluxes of anthropogenic carbon in the ocean and atmosphere. The result is a best estimate of historical CO2 sources and sinks and a model of how CO2 sources and sinks will vary in the future under various emissions scenarios, with uncertainty. By linking the carbon model to a simple climate model, we calculate emissions pathways and carbon budgets consistent with meeting specific temperature thresholds and identify key factors that contribute to remaining uncertainty. In particular, we show how the assumed history of CO2 emissions from land use change (LUC) critically impacts estimates of the strength of the land CO2 sink via CO2 fertilization. Different estimates of historical LUC emissions taken from the literature lead to significantly different parameterizations of the carbon system. High historical CO2 emissions from LUC lead to a more robust CO2 fertilization effect, significantly lower future atmospheric CO2 concentrations, and an increased amount of CO2 that can be emitted to satisfy temperature stabilization targets. Thus, in our model, historical LUC emissions have a significant impact on allowable carbon budgets under temperature targets.

  7. Assessing D-Region Ionospheric Electron Densities with Transionospheric VLF Signals

    NASA Astrophysics Data System (ADS)

    Worthington, E. R.; Cohen, M.

    2016-12-01

    Very Low Frequency (VLF, 3-30 kHz) electromagnetic radiation emitted from ground-based sources, such as VLF transmitters or lightning strokes, is generally confined between the Earth's surface and the base of the ionosphere. These boundaries result in waveguide-like propagation modes that travel away from the source, often over great distances. In the vicinity of the source, a unique interference pattern exists that is largely determined by the D-region of the ionosphere which forms the upper boundary. A small portion of this VLF radiation escapes the ionosphere allowing the waveguide interference pattern to be observable to satellites in low-earth orbit (LEO). Techniques for estimating D-region electron densities using VLF satellite measurements are presented. These techniques are then validated using measurements taken by the satellite DEMETER. During its six-year mission, DEMETER completed hundreds of passes above well-characterized VLF transmitters while taking measurements of electric and magnetic field strengths. The waveguide interference pattern described above is clearly visible in these measurements, and features from the interference pattern are used to derive D-region electron density profiles.

  8. The glacial iron cycle from source to export

    NASA Astrophysics Data System (ADS)

    Hawkings, J.; Wadham, J. L.; Tranter, M.; Raiswell, R.; Benning, L. G.; Statham, P. J.; Tedstone, A. J.; Nienow, P. W.; Telling, J.; Bagshaw, E.; Simmons, S. L.

    2014-12-01

    Nutrient availability limits primary production in large sectors of the world's oceans. Iron is the major limiting nutrient in around one third of the oceanic euphotic zone, most significantly in the Southern Ocean proximal to Antarctica. In these areas the availability of bioavailable iron can influence the amount of primary production, and thus the strength of the biological pump and associated carbon drawdown from the atmosphere. Despite experiencing widespread iron limitation, the polar oceans are among the most productive on Earth. Due to the extreme cold, remoteness and their perceived "stasis", ice sheets have previously been thought of as insignificant in global biogeochemical cycles. However, large marine algal blooms have been observed in iron-limited areas where glacial influence is large, and it is possible that these areas are stimulated by glacial bioavailable iron input. Here we discuss the importance of the Greenland and Antarctic ice sheets in the global iron cycle. Using field-collected trace element data, bulk meltwater chemistry and mineralogical analysis, including photomicrographs, EELS and XANES, we present, for the first time, a conceptual model of the glacial iron cycle from source to export. Using these data we discuss the sources of iron in glacial meltwater, its transportation and alteration through the glacial system, and its subsequent export to downstream environments. Data collected in 2012 and 2013 from two different Greenlandic glacial catchments are shown, with the most detailed breakdown of iron speciation and concentrations in glacial areas yet reported. Furthermore, the first data from Greenlandic icebergs are presented, allowing meltwater-derived and iceberg-derived iron export to be compared, and the influence of both on marine productivity to be estimated. Using our conceptual model and flux estimates from our dataset, glacial iron delivery in both the northern and southern hemispheres is discussed. Finally, we compare our flux estimates to other major iron sources to the polar regions, such as aeolian dust, and discuss potential implications of increased melting of the ice sheets on the global iron cycle in the future.

  9. [Simulation of CO2 exchange between forest canopy and atmosphere].

    PubMed

    Diao, Yiwei; Wang, Anzhi; Jin, Changjie; Guan, Dexin; Pei, Tiefan

    2006-12-01

    Estimating the scalar source/sink distribution of CO2 and its vertical fluxes within and above a forest canopy continues to be a critical research problem in biosphere-atmosphere exchange processes and plant ecology. Taking the broad-leaved Korean pine forest in the Changbai Mountains as the test object and based on Raupach's localized near-field theory, the source/sink and vertical flux distributions of CO2 within and above the forest canopy were modeled through an inverse Lagrangian dispersion analysis. This model correctly predicted a strong positive CO2 source strength in the deeper layers of the canopy due to soil-plant respiration, and a strong CO2 sink in the upper layers of the canopy due to assimilation by sunlit foliage. The foliage in the top layer of the canopy changed from a CO2 source in the morning to a CO2 sink in the afternoon, while the soil constituted a strong CO2 source throughout the day. The simulation results accorded well with the eddy covariance CO2 flux measurements within and above the canopy, with an average precision of 89%. The CO2 exchange predicted by the analysis was on average 15% higher than the eddy correlation value, but exhibited an identical temporal trend. Atmospheric stability remarkably affected the CO2 exchange between the forest canopy and the atmosphere.

  10. A summary report on the search for current technologies and developers to develop depth profiling/physical parameter end effectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Q.H.

    1994-09-12

    This report documents the search strategies and results for available technologies and developers to develop tank waste depth profiling/physical parameter sensors. Sources searched include worldwide research reports, technical papers, journals, private industries, and work at the Westinghouse Hanford Company (WHC) Richland site. Tank waste physical parameters of interest are: abrasiveness, compressive strength, corrosiveness, density, pH, particle size/shape, porosity, radiation, settling velocity, shear strength, shear wave velocity, tensile strength, temperature, viscosity, and viscoelasticity. A list of related articles or sources for each physical parameter is provided.

  11. An in vitro evaluation of diametral tensile strength and flexural strength of nanocomposite vs hybrid and minifill composites cured with different light sources (QTH vs LED).

    PubMed

    Garapati, Surendra Nath; Priyadarshini; Raturi, Piyush; Shetty, Dinesh; Srikanth, K Venkata

    2013-01-01

    Composites have always remained a subject of discussion due to the many controversies surrounding them, and their mechanical properties are one of these. With the introduction of new technology and the emergence of composites that combine superior strength and polish retention, nanocomposites have created a new spark in dentistry. A recent LED curing unit with various curing modes is claimed to produce a higher degree of conversion. The aim of this study was to evaluate the diametral tensile strength and flexural strength of nanocomposite, hybrid, and minifill composites cured with different light sources (QTH vs LED). Seventy-two samples were prepared using specially fabricated Teflon molds: 24 samples of each composite were prepared, 12 for the diametral tensile strength test (ADA specification no. 27) and 12 for the flexural strength test (ISO 4049). Of each set of 12 samples, six were cured with the LED unit (soft-start curing profile) and the other six with the QTH curing light, and all were tested on a universal testing machine. The nanocomposite had the highest diametral tensile strength and flexural strength, which were equivalent to those of the hybrid composite and superior to those of the minifill composite. With its combination of superior esthetics and other optimized physical properties, this novel nanocomposite system would be useful for all posterior and anterior applications.
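    For context, the two quantities reported above follow from standard closed-form test formulas. The sketch below evaluates them with placeholder specimen dimensions and failure loads, not the study's measurements.

```python
# Hedged sketch: diametral tensile strength and three-point flexural strength formulas.
import math

def diametral_tensile_strength_mpa(load_n, diameter_mm, thickness_mm):
    """DTS = 2 P / (pi * D * t); MPa when the load is in N and lengths are in mm."""
    return 2.0 * load_n / (math.pi * diameter_mm * thickness_mm)

def flexural_strength_mpa(load_n, span_mm, width_mm, height_mm):
    """Three-point bend (ISO 4049 geometry): sigma = 3 F L / (2 b h^2)."""
    return 3.0 * load_n * span_mm / (2.0 * width_mm * height_mm**2)

print(f"DTS ~ {diametral_tensile_strength_mpa(1500.0, 6.0, 3.0):.1f} MPa")
print(f"flexural strength ~ {flexural_strength_mpa(40.0, 20.0, 2.0, 2.0):.1f} MPa")
```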

  12. Photoexcitation and ionization in carbon dioxide - Theoretical studies in the separated-channel static-exchange approximation

    NASA Technical Reports Server (NTRS)

    Padial, N.; Csanak, G.; Mckoy, B. V.; Langhoff, P. W.

    1981-01-01

    Vertical-electronic static-exchange photoexcitation and ionization cross sections are reported which provide a first approximation to the complete dipole spectrum of CO2. Separated-channel static-exchange calculations of vertical-electronic transition energies and oscillator strengths, and Stieltjes-Chebyshev moment methods were used in the development. Detailed comparisons were made of the static-exchange excitation and ionization spectra with photoabsorption, electron-impact excitation, and quantum-defect estimates of discrete transition energies and intensities, and with partial-channel photoionization cross sections obtained from fluorescence measurements and from tunable-source and (e, 2e) photoelectron spectroscopy. Results show that the separated-channel static-exchange approximation is generally satisfactory in CO2.

  13. Mid-infrared InAs/AlGaSb superlattice quantum-cascade lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohtani, K.; Fujita, K.; Ohno, H.

    2005-11-21

    We report on the demonstration of mid-infrared InAs/AlGaSb superlattice quantum-cascade lasers operating at 10 µm. The laser structures are grown on n-InAs (100) substrate by solid-source molecular-beam epitaxy. An InAs/AlGaSb chirped superlattice structure providing a large oscillator strength and fast carrier depopulation is employed as the active part. The observed minimum threshold current density at 80 K is 0.7 kA/cm², and the maximum operation temperature in pulse mode is 270 K. The waveguide loss of an InAs plasmon waveguide is estimated, and the factors that determine the operation temperature are discussed.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Wanyu R.; Sidheswaran, Meera; Sullivan, Douglas

    The HZEB research program aims to generate information needed to develop new science-based commercial building ventilation rate (VR) standards that balance the dual objectives of increasing energy efficiency and maintaining acceptable indoor air quality. This interim report describes the preliminary results from one HZEB field study on retail stores. The primary purpose of this study is to estimate the whole-building source strengths of contaminants of concern (COCs). This information is needed to determine the VRs necessary to maintain indoor concentrations of COCs below applicable health guidelines. The goal of this study is to identify contaminants in retail stores that should be controlled via ventilation, and to determine the minimum VRs that would satisfy the occupant health and odor criteria.
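    One simple way such whole-building source strengths are backed out is a steady-state mass balance between the ventilation rate and the indoor-outdoor concentration difference. The sketch below illustrates that idea with placeholder numbers; it is not the study's actual estimation procedure or data.

```python
# Hedged sketch: steady-state whole-building mass balance, S = Q * (C_in - C_out).
def source_strength_ug_h(airflow_m3_h, c_indoor_ug_m3, c_outdoor_ug_m3):
    """Emission rate sustaining the observed indoor-outdoor concentration difference."""
    return airflow_m3_h * (c_indoor_ug_m3 - c_outdoor_ug_m3)

Q = 15000.0   # outdoor-air supply rate, m^3/h (assumed)
print(f"whole-building source strength ~ {source_strength_ug_h(Q, 18.0, 2.0):.0f} ug/h")
```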

  15. Bone geometry, strength, and muscle size in runners with a history of stress fracture.

    PubMed

    Popp, Kristin L; Hughes, Julie M; Smock, Amanda J; Novotny, Susan A; Stovitz, Steven D; Koehler, Scott M; Petit, Moira A

    2009-12-01

    Our primary aim was to explore differences in estimates of tibial bone strength, in female runners with and without a history of stress fractures. Our secondary aim was to explore differences in bone geometry, volumetric density, and muscle size that may explain bone strength outcomes. A total of 39 competitive distance runners aged 18-35 yr, with (SFX, n = 19) or without (NSFX, n = 20) a history of stress fracture were recruited for this cross-sectional study. Peripheral quantitative computed tomography (XCT 3000; Orthometrix, White Plains, NY) was used to assess volumetric bone mineral density (vBMD, mg x mm(-3)), bone area (ToA, mm(2)), and estimated compressive bone strength (bone strength index (BSI) = ToA x total volumetric density (ToD(2))) at the distal tibia (4%). Total (ToA, mm(2)) and cortical (CoA, mm(2)) bone area, cortical vBMD, and estimated bending strength (strength-strain index (SSIp), mm(3)) were measured at the 15%, 25%, 33%, 45%, 50%, and 66% sites. Muscle cross-sectional area (MCSA) was measured at the 50% and 66% sites. Participants in the SFX group had significantly smaller (7%-8%) CoA at the 45%, 50%, and 66% sites (P
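    The compressive strength estimate named above is an algebraic index of the pQCT outputs. The sketch below evaluates BSI = ToA × ToD² with illustrative values (not the study's measurements), taking density in g/cm³ as one common convention.

```python
# Hedged sketch: distal-site compressive bone strength index, BSI = ToA * ToD^2.
def bone_strength_index(total_area_mm2, total_density_mg_cm3):
    """BSI = ToA * ToD^2; ToD is converted from mg/cm^3 to g/cm^3 first."""
    tod_g_cm3 = total_density_mg_cm3 / 1000.0
    return total_area_mm2 * tod_g_cm3**2

print(f"BSI ~ {bone_strength_index(1050.0, 320.0):.1f} mm^2 (g/cm^3)^2")
```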

  16. Estimation of strength parameters of small-bore metal-polymer pipes

    NASA Astrophysics Data System (ADS)

    Shaydakov, V. V.; Chernova, K. V.; Penzin, A. V.

    2018-03-01

    The paper presents results from a set of laboratory studies of strength parameters of small-bore metal-polymer pipes of type TG-5/15. A wave method was used to estimate the provisional modulus of elasticity of the metal-polymer material of the pipes. Longitudinal deformation, transverse deformation and leak-off pressure were determined experimentally, with considerations for mechanical damage and pipe bend.

  17. A physical mechanism for the prediction of the sunspot number during solar cycle 21.

    NASA Technical Reports Server (NTRS)

    Schatten, K. H.; Scherrer, P. H.; Svalgaard, L.; Wilcox, J. M.

    1978-01-01

    On physical grounds it is suggested that the sun's polar field strength near a solar minimum is closely related to the following cycle's solar activity. Four methods of estimating the sun's polar magnetic field strength near solar minimum are employed to provide an estimate of cycle 21's yearly mean sunspot number at solar maximum of 140 plus or minus 20. This estimate is considered to be a first order attempt to predict the cycle's activity using one parameter of physical importance.

  18. The Use of Satellite-Measured Aerosol Optical Depth to Constrain Biomass Burning Emissions Source Strength in a Global Model GOCART

    NASA Technical Reports Server (NTRS)

    Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Soja, Amber; Kuesera, Tom; harshvardhan, E. M.

    2012-01-01

    Small particles in the atmosphere, called "atmospheric aerosol", have a direct effect on Earth's climate through scattering and absorbing sunlight, and also an indirect effect by changing the properties of clouds, as clouds interact with solar radiation as well. Aerosol typically stays in the atmosphere for several days, and can be transported long distances, affecting air quality, visibility, and human health not only near the source, but also far downwind. Smoke from vegetation fires is one of the main sources of atmospheric aerosol; other sources include anthropogenic pollution, dust, and sea salt. Chemistry transport models (CTMs) are among the major tools for studying the atmospheric and climate effects of aerosol. Due to the considerable variation of aerosol concentrations and properties on many temporal and spatial scales, and the complexity of the processes involved, the uncertainties in aerosol effects on climate are large, as featured in the latest report of the Intergovernmental Panel on Climate Change (IPCC) in 2007. Reducing this uncertainty in the models is very important both for predicting future climate scenarios and for regional air quality forecasting and mitigation. During vegetation fires, also called biomass burning (BB) events, a complex mixture of gases and particles is emitted. The amount of BB emissions is usually estimated taking into account the intensity and size of the fire and the properties of the burning vegetation. These estimates are input into CTMs to simulate BB aerosol. Unfortunately, due to the large variability of fire and vegetation properties, the quantity of BB emissions is very difficult to estimate, and BB emission inventories provide numbers that can differ by up to an order of magnitude in some regions. Larger uncertainties in data input make uncertainties in model output larger as well. A powerful way to narrow the range of possible model estimates is to compare model output to observations. We use satellite observations of aerosol properties, specifically aerosol optical depth, which is directly proportional to the amount of aerosol in the atmosphere, and compare it to the model output. Assuming the model represents aerosol transport and particle properties correctly, the amount of BB emissions determines the simulated aerosol optical depth. In this study, we explore the regional performance of 13 commonly used emission estimates. These are each input to the global Goddard Chemistry Aerosol Radiation and Transport (GOCART) model. We then evaluate how well each emission estimate reproduces the smoke aerosol optical depth measured by the MODIS instrument. We compared GOCART-simulated aerosol optical depth with that measured from the satellite for 124 fire cases around the world during 2006 and 2007. We summarize the regional performance of each emission inventory and discuss reasons for their differences by considering the assumptions made during their development. We also show that because stronger wind disperses smoke plumes more readily, in cases with stronger wind, a larger increase in emission amount is needed to increase aerosol optical depth. In quiet, low-wind-speed environments, BB emissions produce a more significant increase in aerosol optical depth, other things being equal. Using the region-specific, quantitative relationships derived in our paper, together with the wind speed obtained from another source for a given fire case, we can constrain the amount of emission required in the model to reproduce the observations.
The results of this paper are useful to the developers of BB emission inventories, as they show the strengths and weaknesses of individual emission inventories in different regions of the globe, and also for modelers who use these inventories and wish to improve their model results.

  19. Absolute calorimetric calibration of low energy brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Stump, Kurt E.

    In the past decade there has been a dramatic increase in the use of permanent radioactive source implants in the treatment of prostate cancer. A small radioactive source encapsulated in a titanium shell is used in this type of treatment. The radioisotopes used are generally 125I or 103Pd. Both of these isotopes have relatively short half-lives, 59.4 days and 16.99 days, respectively, and have low-energy emissions and a low dose rate. These factors make these sources well suited for this application, but the calibration of these sources poses significant metrological challenges. The current standard calibration technique involves the measurement of ionization in air to determine the source air-kerma strength. While this has proved to be an improvement over previous techniques, the method has been shown to be metrologically impure and may not be the ideal means of calibrating these sources. Calorimetric methods have long been viewed as the most fundamental means of determining source strength for a radiation source, because calorimetry provides a direct measurement of source energy. However, due to the low energy and low power of the sources described above, current calorimetric methods are inadequate. This thesis presents work oriented toward developing novel methods to provide direct and absolute measurements of source power for low-energy, low-dose-rate brachytherapy sources. The method is the first use of an actively temperature-controlled radiation absorber using the electrical substitution method to determine the total contained source power of these sources. The instrument described operates at cryogenic temperatures. The method employed provides a direct measurement of source power. The work presented here is focused upon building a metrological foundation upon which to establish power-based calibrations of clinical-strength sources. To that end, instrument performance has been assessed for these source strengths. The intent is to establish the limits of the current instrument to direct further work in this field. It has been found that for sources with powers above approximately 2 µW the instrument is able to determine the source power in agreement to within less than 7% of what is expected based upon the current source strength standard. For lower-power sources, the agreement is still within the uncertainty of the power measurement, but the calorimeter noise dominates. Thus, to provide absolute calibration of lower-power sources, additional measures must be taken. The conclusion of this thesis describes these measures and how they will improve the factors that limit the current instrument. The results of the work presented in this thesis establish the methodology of active radiometric calorimetry for the absolute calibration of radioactive sources. The method is an improvement over previous techniques in that there is no reliance upon the thermal properties of the materials used or upon the heat-flow pathways in the source measurements. The initial work presented here will help to shape future refinements of this technique to allow lower-power sources to be calibrated with high precision and high accuracy.
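    The electrical-substitution principle at the heart of this instrument reduces, in its simplest form, to a difference of heater powers at constant absorber temperature. The sketch below illustrates only that bookkeeping step; the power values are invented for illustration and the real measurement involves many corrections not shown here.

```python
# Hedged sketch of electrical substitution: with the absorber servoed to a constant
# temperature, the radiant power of the source equals the drop in electrical heater
# power when the source is present.
def source_power_uw(heater_power_no_source_uw, heater_power_with_source_uw):
    """P_source = P_heater(empty) - P_heater(source in place)."""
    return heater_power_no_source_uw - heater_power_with_source_uw

print(f"P_source ~ {source_power_uw(52.4, 49.1):.2f} uW")
```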

  20. Effect of shallow angles on compressive strength of biaxial and triaxial laminates.

    PubMed

    Jia, Hongli; Yang, Hyun-Ik

    2016-01-01

    Biaxial (BX) and triaxial (TX) composite laminates with ±45° angled plies have been widely used in wind turbine blades. As the scale of blades increases, BX and TX laminates with shallow-angled plies (i.e. off-axis ply angle <45°) might be utilized for reducing mass and/or improving performance. The compressive properties of shallow-angled BX and TX laminates are critical considering their locations in a wind turbine blade, and therefore, in this study, uniaxial static compression tests were conducted on BX and TX laminates with angled plies of ±45°, ±35°, and ±25° for evaluation. In parallel, the Mori-Tanaka mean-field homogenization method was employed to predict the elastic constants of the plies in the tested BX and TX laminates; linear regression analyses of experimentally measured ply strengths collected from various sources were then performed to estimate the strengths of the plies in the BX and TX laminates; finally, the Tsai-Wu, Hashin, and Puck failure criteria were chosen to predict the compressive strengths of the BX and TX laminates. Comparisons between theoretical predictions and test results were carried out to illustrate the effectiveness of each criterion. The compressive strength of the BX laminate decreases as the ply angle increases, and this trend was successfully predicted by all three failure criteria. For TX laminates, ±35° angled plies rather than ±45° angled plies led to the lowest laminate compressive strength. The Hashin and Puck criteria gave good predictions at certain ply angles for TX laminates, but the Tsai-Wu criterion was able to capture the unexpected strength variation of TX laminates with ply angle. It was concluded that the transverse tensile stress in the 0° plies of TX laminates, which attains its maximum when the off-axis ply angle is 35°, is the dominant factor in failure determination when using the Tsai-Wu criterion. This explains the unexpected strength variation of TX laminates with ply angle, and also indicates that proper selection of ply angle is the key to fully utilizing the advantages of shallow-angled laminates.

  1. Deconvolution for three-dimensional acoustic source identification based on spherical harmonics beamforming

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; He, Yansong

    2015-05-01

    Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for doing acoustic sources identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contaminations. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed, which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF) defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are adapted to SHB successfully, which are capable of giving rise to highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated and the relationships of source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN all can not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of single source or incoherent sources. (2) The availability of RL for coherent sources is highest, then DAMAS and NNLS, and that of CLEAN is lowest due to its failure in suppressing sidelobes. (3) Whether or not the real distance from the source to the array center equals the assumed one that is referred to as focus distance, the previous two results hold. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost not affected by the focus distance, always approximating to the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.
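    Because the abstract frames the beamformer output as a convolution of the true source strength distribution with the PSF, a small deconvolution example helps fix ideas. The sketch below runs a 1-D Richardson-Lucy iteration on a synthetic map; the Gaussian PSF and the two point sources are illustrative stand-ins, not SHB's actual PSF or array data.

```python
# Hedged sketch: 1-D Richardson-Lucy deconvolution of a blurred "beamforming" map.
import numpy as np

def richardson_lucy(observed, psf, iterations=200):
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

x = np.linspace(-1.0, 1.0, 201)
psf = np.exp(-0.5 * (x / 0.08) ** 2)
psf /= psf.sum()
truth = np.zeros_like(x)
truth[70], truth[130] = 1.0, 0.6                 # two incoherent point sources
observed = np.convolve(truth, psf, mode="same")  # blurred map (output-like quantity)
recovered = richardson_lucy(observed, psf)
print("recovered peaks near indices", sorted(np.argsort(recovered)[-2:]))
```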

  2. Neural correlates of confidence during item recognition and source memory retrieval: evidence for both dual-process and strength memory theories.

    PubMed

    Hayes, Scott M; Buchler, Norbou; Stokes, Jared; Kragel, James; Cabeza, Roberto

    2011-12-01

    Although the medial-temporal lobes (MTL), PFC, and parietal cortex are considered primary nodes in the episodic memory network, there is much debate regarding the contributions of MTL, PFC, and parietal subregions to recollection versus familiarity (dual-process theory) and the feasibility of accounts on the basis of a single memory strength process (strength theory). To investigate these issues, the current fMRI study measured activity during retrieval of memories that differed quantitatively in terms of strength (high vs. low-confidence trials) and qualitatively in terms of recollection versus familiarity (source vs. item memory tasks). Support for each theory varied depending on which node of the episodic memory network was considered. Results from MTL best fit a dual-process account, as a dissociation was found between a right hippocampal region showing high-confidence activity during the source memory task and bilateral rhinal regions showing high-confidence activity during the item memory task. Within PFC, several left-lateralized regions showed greater activity for source than item memory, consistent with recollective orienting, whereas a right-lateralized ventrolateral area showed low-confidence activity in both tasks, consistent with monitoring processes. Parietal findings were generally consistent with strength theory, with dorsal areas showing low-confidence activity and ventral areas showing high-confidence activity in both tasks. This dissociation fits with an attentional account of parietal functions during episodic retrieval. The results suggest that both dual-process and strength theories are partly correct, highlighting the need for an integrated model that links to more general cognitive theories to account for observed neural activity during episodic memory retrieval.

  4. Soil HONO Emissions and Its Potential Impact on the Atmospheric Chemistry and Nitrogen Cycle

    NASA Astrophysics Data System (ADS)

    Su, H.; Chen, C.; Zhang, Q.; Poeschl, U.; Cheng, Y.

    2014-12-01

    Hydroxyl radicals (OH) are a key species in atmospheric photochemistry. In the lower atmosphere, up to ~30% of the primary OH radical production is attributed to the photolysis of nitrous acid (HONO), and field observations suggest a large missing source of HONO. The dominant sources of N(III) in soil, i.e. nitrite (of which HONO is the conjugate acid), are biological nitrification and denitrification processes, which produce nitrite ions from ammonium (by nitrifying microbes) as well as from nitrate (by denitrifying microbes). We show that soil nitrite can release HONO and explain the reported strength and diurnal variation of the missing source. The HONO emission rates are estimated to be comparable to those of nitric oxide (NO) and could be an important source of atmospheric reactive nitrogen. Fertilized soils appear to be particularly strong sources of HONO. Thus, agricultural activities and land-use changes may strongly influence the oxidizing capacity of the atmosphere. A new HONO-DNDC model was developed to simulate the evolution of HONO emissions in agricultural ecosystems. Because of the widespread occurrence of nitrite-producing microbes and increasing N and acid deposition, the release of HONO from soil may also be important in natural environments, including forests and boreal regions. Reference: Su, H. et al., Soil Nitrite as a Source of Atmospheric HONO and OH Radicals, Science, 333, 1616-1618, 10.1126/science.1207687, 2011.

  5. Quantitative methods for estimating the anisotropy of the strength properties and the phase composition of Mg-Al alloys

    NASA Astrophysics Data System (ADS)

    Betsofen, S. Ya.; Kolobov, Yu. R.; Volkova, E. F.; Bozhko, S. A.; Voskresenskaya, I. I.

    2015-04-01

    Quantitative methods have been developed to estimate the anisotropy of the strength properties and to determine the phase composition of Mg-Al alloys. The efficiency of the methods is confirmed for MA5 alloy subjected to severe plastic deformation. It is shown that the Taylor factors calculated for basal slip averaged over all orientations of a polycrystalline aggregate with allowance for texture can be used for a quantitative estimation of the contribution of the texture of semifinished magnesium alloy products to the anisotropy of their strength properties. A technique of determining the composition of a solid solution and the intermetallic phase Al12Mg17 content is developed using the measurement of the lattice parameters of the solid solution and the known dependence of these lattice parameters on the composition.

  6. Design and Mechanical Stability Analysis of the Interaction Region for the Inverse Compton Scattering Gamma-Ray Source Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Khizhanok, Andrei

    Development of a compact source of high-spectral-brilliance, high-pulse-frequency gamma rays has been within the scope of Fermi National Accelerator Laboratory for quite some time. The main goal of the project is to develop a setup to support gamma-ray detection tests and gamma-ray spectroscopy. Potential applications include, but are not limited to, nuclear astrophysics, nuclear medicine, and oncology ('gamma knife'). The present work covers multiple interconnected stages of development of the interaction region to ensure high levels of structural strength and vibrational resistance. Inverse Compton scattering is a complex phenomenon in which a charged particle transfers part of its energy to a photon. It requires extreme precision, as the interaction point is estimated to be 20 µm in size. The slightest deflection of the mirrors will reduce the effectiveness of conversion by orders of magnitude. For acceptable conversion efficiency, the laser cavity must also have a finesse value >1000, which requires a trade-off between the size, mechanical stability, complexity, and price of the setup. This work focuses on the advantages and weak points of different designs of interaction regions, as well as an in-depth description of the analyses performed, including laser cavity amplification and finesse estimates, natural frequency mapping, and harmonic analysis. Structural analysis is required as the interaction must occur under high-vacuum conditions.

  7. Binaural segregation in multisource reverberant environments.

    PubMed

    Roman, Nicoleta; Srinivasan, Soundararajan; Wang, DeLiang

    2006-12-01

    In a natural environment, speech signals are degraded by both reverberation and concurrent noise sources. While human listening is robust under these conditions using only two ears, current two-microphone algorithms perform poorly. The psychological process of figure-ground segregation suggests that the target signal is perceived as a foreground while the remaining stimuli are perceived as a background. Accordingly, the goal is to estimate an ideal time-frequency (T-F) binary mask, which selects the target if it is stronger than the interference in a local T-F unit. In this paper, a binaural segregation system that extracts the reverberant target signal from multisource reverberant mixtures by utilizing only the location information of target source is proposed. The proposed system combines target cancellation through adaptive filtering and a binary decision rule to estimate the ideal T-F binary mask. The main observation in this work is that the target attenuation in a T-F unit resulting from adaptive filtering is correlated with the relative strength of target to mixture. A comprehensive evaluation shows that the proposed system results in large SNR gains. In addition, comparisons using SNR as well as automatic speech recognition measures show that this system outperforms standard two-microphone beamforming approaches and a recent binaural processor.
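    The ideal time-frequency binary mask that the system estimates has a simple definition: keep a T-F unit when the target is locally stronger than the interference. The sketch below constructs such a mask from two synthetic white-noise signals standing in for the target and the summed interferers; it is an illustration of the definition, not the paper's binaural processing chain.

```python
# Hedged sketch: ideal time-frequency binary mask from target and interference spectrograms.
import numpy as np
from scipy.signal import stft

fs = 16000
rng = np.random.default_rng(1)
target = rng.normal(size=fs)          # stand-in for the (reverberant) target signal
interference = rng.normal(size=fs)    # stand-in for the summed interfering sources

_, _, S_t = stft(target, fs=fs, nperseg=512)
_, _, S_i = stft(interference, fs=fs, nperseg=512)

ibm = (np.abs(S_t) > np.abs(S_i)).astype(float)   # 1 where the target dominates, else 0
print(f"fraction of target-dominant T-F units: {ibm.mean():.2f}")
```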

  8. Bayesian inverse modeling and source location of an unintended 131I release in Europe in the fall of 2011

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas

    2017-10-01

    In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm could successfully locate the actual release site. The estimated release period is also in agreement with the values reported by IAEA and the reported total released activity of 342 GBq is within the 99 % confidence interval of the posterior distribution of our most likely model.
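    The core inverse step described above is the recovery of a time-resolved source term x from measurements y = M x, where M is a source-receptor sensitivity (SRS) matrix. The sketch below uses plain non-negative least squares in place of the paper's LS-APC method, with an entirely synthetic SRS matrix, release history, and noise, purely to illustrate the structure of the problem.

```python
# Hedged sketch: non-negative least-squares recovery of a release time series from y = M x.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_obs, n_intervals = 120, 24
M = rng.exponential(scale=1e-12, size=(n_obs, n_intervals))   # synthetic SRS matrix, s m^-3
x_true = np.zeros(n_intervals)
x_true[6:14] = 5.0e11                                         # release rate, Bq/s (assumed)
y = M @ x_true * (1.0 + 0.1 * rng.normal(size=n_obs))         # noisy "concentration" data

x_hat, _ = nnls(M, y)
interval_s = 3600.0                                           # assumed interval length, s
print(f"estimated total release ~ {x_hat.sum() * interval_s:.2e} Bq "
      f"(true {x_true.sum() * interval_s:.2e} Bq)")
```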

  9. Human adaptations for the visual assessment of strength and fighting ability from the body and face

    PubMed Central

    Sell, Aaron; Cosmides, Leda; Tooby, John; Sznycer, Daniel; von Rueden, Christopher; Gurven, Michael

    2008-01-01

    Selection in species with aggressive social interactions favours the evolution of cognitive mechanisms for assessing physical formidability (fighting ability or resource-holding potential). The ability to accurately assess formidability in conspecifics has been documented in a number of non-human species, but has not been demonstrated in humans. Here, we report tests supporting the hypothesis that the human cognitive architecture includes mechanisms that assess fighting ability—mechanisms that focus on correlates of upper-body strength. Across diverse samples of targets that included US college students, Bolivian horticulturalists and Andean pastoralists, subjects in the US were able to accurately estimate the physical strength of male targets from photos of their bodies and faces. Hierarchical linear modelling shows that subjects were extracting cues of strength that were largely independent of height, weight and age, and that corresponded most strongly to objective measures of upper-body strength—even when the face was all that was available for inspection. Estimates of women's strength were less accurate, but still significant. These studies are the first empirical demonstration that, for humans, judgements of strength and judgements of fighting ability not only track each other, but accurately track actual upper-body strength. PMID:18945661

  10. Coseismic landslides reveal near-surface rock strength in a high-relief tectonically active setting

    USGS Publications Warehouse

    Gallen, Sean F.; Clark, Marin K.; Godt, Jonathan W.

    2014-01-01

    We present quantitative estimates of near-surface rock strength relevant to landscape evolution and landslide hazard assessment for 15 geologic map units of the Longmen Shan, China. Strength estimates are derived from a novel method that inverts earthquake peak ground acceleration models and coseismic landslide inventories to obtain material properties and landslide thickness. Aggregate rock strength is determined by prescribing a friction angle of 30° and solving for effective cohesion. Effective cohesion ranges from 70 kPa to 107 kPa for the 15 geologic map units, approximately an order of magnitude less than typical laboratory measurements, probably because laboratory tests on hand-sized specimens do not incorporate the effects of heterogeneity and fracturing that likely control near-surface strength at the hillslope scale. We find that strength among the geologic map units studied varies by less than a factor of two. However, increased weakening of units with proximity to the range front, where precipitation and active fault density are the greatest, suggests that climatic and tectonic factors overwhelm lithologic differences in rock strength in this high-relief, tectonically active setting.
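
    The back-calculation logic can be illustrated with a standard dry infinite-slope factor of safety combined with a Newmark-style critical acceleration, setting the critical acceleration equal to the modeled peak ground acceleration at failed cells and solving for cohesion. This is only a schematic single-cell version of the approach, with placeholder unit weight, slope, and thickness; it is not the authors' full inversion.

```python
# Hedged sketch of back-calculating effective cohesion from coseismic
# landslide occurrence, using a dry infinite-slope factor of safety and a
# Newmark-style critical acceleration. All numbers are placeholders.
import math

def back_calculated_cohesion(pga_g, slope_deg, thickness_m,
                             unit_weight=25e3, friction_deg=30.0):
    """
    pga_g        : peak ground acceleration (fraction of g) at a failed cell
    slope_deg    : hillslope angle (degrees)
    thickness_m  : assumed failure-surface depth in metres
    unit_weight  : rock unit weight in N/m^3 (placeholder)
    friction_deg : prescribed friction angle (the study fixes 30 degrees)
    Returns the effective cohesion (Pa) for which the Newmark critical
    acceleration a_c = (FS - 1) g sin(beta) just equals the PGA.
    """
    beta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    fs_critical = 1.0 + pga_g / math.sin(beta)          # a_c equals PGA at failure
    # Infinite-slope FS = c / (gamma t sin(beta) cos(beta)) + tan(phi)/tan(beta)
    cohesion = (fs_critical - math.tan(phi) / math.tan(beta)) \
               * unit_weight * thickness_m * math.sin(beta) * math.cos(beta)
    return cohesion

# Example: 0.5 g shaking, 35 degree slope, 3 m failure depth
print(f"{back_calculated_cohesion(0.5, 35.0, 3.0) / 1e3:.0f} kPa")
```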

  11. Semiparametric Bayesian commensurate survival model for post-market medical device surveillance with non-exchangeable historical data.

    PubMed

    Murray, Thomas A; Hobbs, Brian P; Lystig, Theodore C; Carlin, Bradley P

    2014-03-01

    Trial investigators often have a primary interest in the estimation of the survival curve in a population for which there exists acceptable historical information from which to borrow strength. However, borrowing strength from a historical trial that is non-exchangeable with the current trial can result in biased conclusions. In this article we propose a fully Bayesian semiparametric method for the purpose of attenuating bias and increasing efficiency when jointly modeling time-to-event data from two possibly non-exchangeable sources of information. We illustrate the mechanics of our methods by applying them to a pair of post-market surveillance datasets regarding adverse events in persons on dialysis that had either a bare metal or drug-eluting stent implanted during a cardiac revascularization surgery. We finish with a discussion of the advantages and limitations of this approach to evidence synthesis, as well as directions for future work in this area. The article's Supplementary Materials offer simulations to show our procedure's bias, mean squared error, and coverage probability properties in a variety of settings. © 2013, The International Biometric Society.

  12. Wood strength loss as a measure of decomposition in northern forest mineral soil

    Treesearch

    Martin Jurgensen; David Reed; Deborah Page-Dumroese; Peter Laks; Anne Collins; Glenn Mroz; Marek Degorski

    2006-01-01

    Wood stake weight loss has been used as an index of wood decomposition in mineral soil, but it may not give a reliable estimate in cold boreal forests where decomposition is very slow. Various wood stake strength tests have been used as surrogates of weight loss, but little is known on which test would give the best estimate of decomposition over a variety of soil...

  13. The Efficacy of Injury Prevention Programs in Adolescent Team Sports: A Meta-analysis.

    PubMed

    Soomro, Najeebullah; Sanders, Ross; Hackett, Daniel; Hubka, Tate; Ebrahimi, Saahil; Freeston, Jonathan; Cobley, Stephen

    2016-09-01

    Intensive sport participation in childhood and adolescence is an established cause of acute and overuse injury. Interventions and programs designed to prevent such injuries are important in reducing individual and societal costs associated with treatment and recovery. Likewise, they help to maintain the accrual of positive outcomes from participation, such as cardiovascular health and skill development. To date, several studies have individually tested the effectiveness of injury prevention programs (IPPs). To determine the overall efficacy of structured multifaceted IPPs containing a combination of warm-up, neuromuscular strength, or proprioception training, targeting injury reduction rates according to risk exposure time in adolescent team sport contexts. Systematic review and meta-analysis. With established inclusion criteria, studies were searched in the following databases: Cochrane Central Register of Controlled Trials, MEDLINE, SPORTDiscus, Web of Science, EMBASE, CINAHL, and AusSportMed. The keyword search terms (including derivations) included the following: adolescents, sports, athletic injuries, prevention/warm-up programs. Eligible studies were then pooled for meta-analysis with an inverse-variance random-effects model, with injury rate ratio (IRR) as the primary outcome. Heterogeneity among studies and publication bias were tested, and subgroup analysis examined heterogeneity sources. Across 10 studies, including 9 randomized controlled trials, a pooled overall point estimate yielded an IRR of 0.60 (95% CI = 0.48-0.75; a 40% reduction) while accounting for hours of risk exposure. Publication bias assessment suggested an 8% reduction in the estimate (IRR = 0.68, 95% CI = 0.54-0.84), and the prediction interval intimated that any study estimate could still fall between 0.33 and 1.48. Subgroup analyses identified no significant moderators, although possible influences may have been masked because of data constraints. Compared with normative practices or control, IPPs significantly reduced IRRs in adolescent team sport contexts. The underlying explanations for IPP efficacy remain to be accurately identified, although they potentially relate to IPP content and improvements in muscular strength, proprioceptive balance, and flexibility. Clinical practitioners (eg, orthopaedics, physical therapists) and sports practitioners (eg, strength and conditioners, coaches) can respectively recommend and implement IPPs similar to those examined to help reduce injury rates in adolescent team sports contexts. © 2015 The Author(s).
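
    Assuming the pooling follows a standard inverse-variance random-effects (DerSimonian-Laird) approach, the mechanics can be sketched as follows; the study IRRs and standard errors in the example are hypothetical, not the ten trials analyzed here.

```python
# Minimal sketch of inverse-variance random-effects pooling of injury rate
# ratios (DerSimonian-Laird heterogeneity estimate). The study IRRs and
# standard errors below are hypothetical and only illustrate the mechanics.
import numpy as np

def pool_random_effects(irr, se_log_irr):
    y = np.log(np.asarray(irr))          # work on the log scale
    v = np.asarray(se_log_irr) ** 2
    w = 1.0 / v                          # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)   # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)        # between-study variance
    w_star = 1.0 / (v + tau2)            # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se_re = np.sqrt(1.0 / np.sum(w_star))
    ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
    return np.exp(y_re), ci

irr_pooled, ci = pool_random_effects(
    irr=[0.55, 0.70, 0.48, 0.81, 0.62],
    se_log_irr=[0.20, 0.15, 0.25, 0.30, 0.18])
print(f"pooled IRR = {irr_pooled:.2f}, 95% CI = {ci.round(2)}")
```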

  14. Evaluation of strength-controlling defects in paper by stress concentration analyses

    Treesearch

    John M. Considine; David W. Vahey; James W. Evans; Kevin T. Turner; Robert E. Rowlands

    2011-01-01

    Cellulosic webs, such as paper materials, are composed of an interwoven, bonded network of cellulose fibers. Strength-controlling parameters in these webs are influenced by constituent fibers and method of processing and manufacture. Instead of estimating the effect on tensile strength of each processing/manufacturing variable, this study modifies and compares the...

  15. Carbon source-sink limitations differ between two species with contrasting growth strategies: Source-sink limitations vary with growth strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, Angela C.; Rogers, A.; Rees, M.

    Understanding how carbon source and sink strengths limit plant growth is a critical knowledge gap that hinders efforts to maximize crop yield. Here, we investigated how differences in growth rate arise from source–sink limitations, using a model system comparing a fast-growing domesticated annual barley (Hordeum vulgare cv. NFC Tipple) with a slow-growing wild perennial relative (Hordeum bulbosum). Source strength was manipulated by growing plants at sub-ambient and elevated CO2 concentrations ([CO2]). Limitations on vegetative growth imposed by source and sink were diagnosed by measuring relative growth rate, developmental plasticity, photosynthesis and major carbon and nitrogen metabolite pools. Growth was sink limited in the annual but source limited in the perennial. RGR and carbon acquisition were higher in the annual, but photosynthesis responded weakly to elevated [CO2], indicating that source strength was near maximal at current [CO2]. In contrast, photosynthetic rate and sink development responded strongly to elevated [CO2] in the perennial, indicating significant source limitation. Sink limitation was avoided in the perennial by high sink plasticity: a marked increase in tillering and root:shoot ratio at elevated [CO2], and lower non-structural carbohydrate accumulation. Finally, alleviating sink limitation during vegetative development could be important for maximizing growth of elite cereals under future elevated [CO2].

  16. Carbon source-sink limitations differ between two species with contrasting growth strategies: Source-sink limitations vary with growth strategy

    DOE PAGES

    Burnett, Angela C.; Rogers, A.; Rees, M.; ...

    2016-09-22

    Understanding how carbon source and sink strengths limit plant growth is a critical knowledge gap that hinders efforts to maximize crop yield. Here, we investigated how differences in growth rate arise from source–sink limitations, using a model system comparing a fast-growing domesticated annual barley (Hordeum vulgare cv. NFC Tipple) with a slow-growing wild perennial relative (Hordeum bulbosum). Source strength was manipulated by growing plants at sub-ambient and elevated CO2 concentrations ([CO2]). Limitations on vegetative growth imposed by source and sink were diagnosed by measuring relative growth rate, developmental plasticity, photosynthesis and major carbon and nitrogen metabolite pools. Growth was sink limited in the annual but source limited in the perennial. RGR and carbon acquisition were higher in the annual, but photosynthesis responded weakly to elevated [CO2], indicating that source strength was near maximal at current [CO2]. In contrast, photosynthetic rate and sink development responded strongly to elevated [CO2] in the perennial, indicating significant source limitation. Sink limitation was avoided in the perennial by high sink plasticity: a marked increase in tillering and root:shoot ratio at elevated [CO2], and lower non-structural carbohydrate accumulation. Finally, alleviating sink limitation during vegetative development could be important for maximizing growth of elite cereals under future elevated [CO2].

  17. Estimation of genetic parameters related to eggshell strength using random regression models.

    PubMed

    Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K

    2015-01-01

    This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where the eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic effect was fitted with second-order Legendre polynomials and the permanent environment effect with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (> 0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRMs suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
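
    A key ingredient of such random regression models is the matrix of Legendre polynomial covariates evaluated over the standardized trajectory (here, test week). The sketch below shows only that covariate construction, using a common scaling and normalization convention; it does not reproduce the study's variance-component estimation, and the week range is a placeholder.

```python
# Sketch of building Legendre polynomial covariates for a random regression
# model: test week is standardized to [-1, 1] and normalized Legendre
# polynomials up to a chosen order are evaluated. This only illustrates the
# covariate construction, not the full (co)variance-component estimation.
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(week, week_min, week_max, order):
    """Return an (n, order + 1) matrix of normalized Legendre covariates."""
    x = 2.0 * (np.asarray(week, float) - week_min) / (week_max - week_min) - 1.0
    cols = []
    for k in range(order + 1):
        coeffs = np.zeros(k + 1)
        coeffs[k] = 1.0
        norm = np.sqrt((2 * k + 1) / 2.0)        # common normalization factor
        cols.append(norm * legendre.legval(x, coeffs))
    return np.column_stack(cols)

# Example: weekly records from week 20 to week 60 of lay (placeholder range),
# order 2 for the genetic effect and order 3 for the permanent environment
weeks = np.arange(20, 61)
Z_genetic = legendre_covariates(weeks, 20, 60, order=2)
Z_pe = legendre_covariates(weeks, 20, 60, order=3)
print(Z_genetic.shape, Z_pe.shape)
```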

  18. Correlation analysis of the variation of weld seam and tensile strength in laser welding of galvanized steel

    NASA Astrophysics Data System (ADS)

    Sinha, Amit Kumar; Kim, Duck Young; Ceglarek, Darek

    2013-10-01

    Many advantages of laser welding technology, such as high speed and non-contact welding, make the technology attractive to the automotive industry. Many studies have been conducted to experimentally search for optimal welding conditions that ensure the joining quality of laser welding, which depends on both the welding system configuration and the welding parameter specification. Both non-destructive and destructive techniques, for example ultrasonic inspection and tensile testing, are widely used in practice for estimating the joining quality. Non-destructive techniques are attractive as a rapid quality testing method despite their relatively low accuracy. In this paper, we examine the relationship between the variation of the weld seam and the tensile shear strength in the laser welding of galvanized steel in a lap joint configuration, in order to investigate the potential of the weld seam variation as a joining quality estimator. From the experimental analysis, we identify a trend between maximum tensile shear strength and the variation of the weld seam which supports the observation that laser-welded parts with larger variation in the weld seam usually have lower tensile strength. The discovered relationship leads us to conclude that the variation of the weld seam can be used as an indirect non-destructive testing method for estimating the tensile strength of welded parts.

  19. Uncertainty and Intelligence in Computational Stochastic Mechanics

    NASA Technical Reports Server (NTRS)

    Ayyub, Bilal M.

    1996-01-01

    Classical structural reliability assessment techniques are based on precise and crisp (sharp) definitions of failure and non-failure (survival) of a structure in meeting a set of strength, function and serviceability criteria. These definitions are provided in the form of performance functions and limit state equations. Thus, the criteria provide a dichotomous definition of what real physical situations represent, in the form of abrupt change from structural survival to failure. However, based on observing the failure and survival of real structures according to the serviceability and strength criteria, the transition from a survival state to a failure state and from serviceability criteria to strength criteria are continuous and gradual rather than crisp and abrupt. That is, an entire spectrum of damage or failure levels (grades) is observed during the transition to total collapse. In the process, serviceability criteria are gradually violated with monotonically increasing level of violation, and progressively lead into the strength criteria violation. Classical structural reliability methods correctly and adequately include the ambiguity sources of uncertainty (physical randomness, statistical and modeling uncertainty) by varying amounts. However, they are unable to adequately incorporate the presence of a damage spectrum, and do not consider in their mathematical framework any sources of uncertainty of the vagueness type. Vagueness can be attributed to sources of fuzziness, unclearness, indistinctiveness, sharplessness and grayness; whereas ambiguity can be attributed to nonspecificity, one-to-many relations, variety, generality, diversity and divergence. Using the nomenclature of structural reliability, vagueness and ambiguity can be accounted for in the form of realistic delineation of structural damage based on subjective judgment of engineers. For situations that require decisions under uncertainty with cost/benefit objectives, the risk of failure should depend on the underlying level of damage and the uncertainties associated with its definition. A mathematical model for structural reliability assessment that includes both ambiguity and vagueness types of uncertainty was suggested to result in the likelihood of failure over a damage spectrum. The resulting structural reliability estimates properly represent the continuous transition from serviceability to strength limit states over the ultimate time exposure of the structure. In this section, a structural reliability assessment method based on a fuzzy definition of failure is suggested to meet these practical needs. A failure definition can be developed to indicate the relationship between failure level and structural response. In this fuzzy model, a subjective index is introduced to represent all levels of damage (or failure). This index can be interpreted as either a measure of failure level or a measure of a degree of belief in the occurrence of some performance condition (e.g., failure). The index allows expressing the transition state between complete survival and complete failure for some structural response based on subjective evaluation and judgment.

  20. Calculation and Analysis of magnetic gradient tensor components of global magnetic models

    NASA Astrophysics Data System (ADS)

    Schiffler, Markus; Queitsch, Matthias; Schneider, Michael; Stolz, Ronny; Krech, Wolfram; Meyer, Hans-Georg; Kukowski, Nina

    2014-05-01

    Magnetic mapping missions like SWARM and its predecessors, e.g. the CHAMP and MAGSAT programs, offer high-resolution Earth's magnetic field data. These datasets are usually combined with magnetic observatory and survey data, and subjected to harmonic analysis. The derived spherical harmonic coefficients enable magnetic field modelling using a potential series expansion. Recently, new instruments like the JeSSY STAR Full Tensor Magnetic Gradiometry system, equipped with very high sensitivity sensors, can directly measure the magnetic field gradient tensor components. The full understanding of the quality of the measured data requires the extension of magnetic field models to gradient tensor components. In this study, we focus on extending the derivation of the magnetic field from the potential series to the magnetic field gradient tensor components and apply the new theoretical framework to the International Geomagnetic Reference Field (IGRF) and the High Definition Geomagnetic Model (HDGM). The gradient tensor component maps for the entire Earth's surface produced for the IGRF show low values and smooth variations reflecting the core and mantle contributions, whereas those for the HDGM give a novel tool to unravel crustal structure and deep-seated ore bodies. For example, the Thor Suture and the Sorgenfrei-Tornquist Zone in Europe are delineated by a strong northward gradient. Derived from the eigenvalue decomposition of the magnetic gradient tensor, the scaled magnetic moment, the normalized source strength (NSS) and the bearing of the lithospheric sources are presented. The NSS serves as a tool for estimating the lithosphere-asthenosphere boundary as well as the depth of plutons and ore bodies. Furthermore, changes in magnetization direction parallel to the mid-ocean ridges can be obtained from the scaled magnetic moment, and the normalized source strength discriminates the boundaries between the anomalies of major continental provinces like southern Africa or the Eastern European Craton.
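
    One published form of the normalized source strength, derived from the eigenvalues λ1 ≥ λ2 ≥ λ3 of the symmetric (ideally traceless) magnetic gradient tensor, is NSS = sqrt(-λ2² - λ1·λ3). The sketch below computes that quantity for a single tensor; the input tensor is purely illustrative, and the formula is quoted from the gradiometry literature rather than from this abstract.

```python
# Hedged sketch: normalized source strength (NSS) from the eigenvalues of a
# measured magnetic gradient tensor. One published form (for compact,
# dipole-like sources) is NSS = sqrt(-lambda2^2 - lambda1*lambda3), with
# eigenvalues ordered lambda1 >= lambda2 >= lambda3. The example tensor is
# purely illustrative.
import numpy as np

def normalized_source_strength(G):
    """G: 3x3 symmetric (ideally traceless) magnetic gradient tensor, nT/m."""
    G = 0.5 * (G + G.T)                          # enforce symmetry numerically
    lam = np.sort(np.linalg.eigvalsh(G))[::-1]   # lambda1 >= lambda2 >= lambda3
    val = -lam[1] ** 2 - lam[0] * lam[2]
    return np.sqrt(max(val, 0.0))                # clip small negative round-off

# Illustrative traceless tensor (nT/m)
G = np.array([[ 2.0,  0.3, -0.1],
              [ 0.3, -0.5,  0.4],
              [-0.1,  0.4, -1.5]])
print(f"NSS = {normalized_source_strength(G):.3f} nT/m")
```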

  1. Stiffness, strength and adhesion characterization of electrochemically deposited conjugated polymer films

    PubMed Central

    Qu, Jing; Ouyang, Liangqi; Kuo, Chin-chen; Martin, David C.

    2015-01-01

    Conjugated polymers such as poly(3,4-ethylenedioxythiphene) (PEDOT) are of interest for a variety of applications including interfaces between electronic biomedical devices and living tissue. The mechanical properties, strength, and adhesion of these materials to solid substrates are all vital for long-term applications. We have been developing methods to quantify the mechanical properties of conjugated polymer thin films. In this study the stiffness, strength and the interfacial shear strength (adhesion) of electrochemically deposited PEDOT and PEDOT-co-1,3,5-tri[2-(3,4-ethylene dioxythienyl)]-benzene (EPh) were studied. The estimated Young’s modulus of the PEDOT films was 2.6 ± 1.4 GPa, and the strain to failure was around 2%. The tensile strength was measured to be 56 ± 27 MPa. The effective interfacial shear strength was estimated with a shear-lag model by measuring the crack spacing as a function of film thickness. For PEDOT on gold/palladium-coated hydrocarbon film substrates an interfacial shear strength of 0.7 ± 0.3 MPa was determined. The addition of 5 mole% of a tri-functional EDOT crosslinker (EPh) increased the tensile strength of the films to 283 ± 67 MPa, while the strain to failure remained about the same (2%). The effective interfacial shear strength was increased to 2.4 ± 0.6 MPa. PMID:26607768
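
    The crack-spacing measurement can be converted into an effective interfacial shear strength with a shear-lag (film-fragmentation) relation of the form τ ≈ σ_f·t/ℓ_sat, where σ_f is the film tensile strength, t the film thickness, and ℓ_sat the saturation crack spacing; the order-one prefactor depends on the particular shear-lag formulation and is omitted here. The thickness and crack spacing below are placeholders, not the study's measurements.

```python
# Hedged sketch of a shear-lag estimate of interfacial shear strength from
# film-fragmentation data: tau ~ sigma_f * t / l_sat. The order-one prefactor
# of the specific shear-lag model is omitted, and the thickness and crack
# spacing are placeholder values, not measurements from the study.

def interfacial_shear_strength(film_strength_pa, thickness_m, crack_spacing_m):
    """Return an effective interfacial shear strength estimate in Pa."""
    return film_strength_pa * thickness_m / crack_spacing_m

# Example: 56 MPa film strength (from the abstract), with a hypothetical
# 1 micrometre film thickness and 80 micrometre saturation crack spacing
tau = interfacial_shear_strength(56e6, 1e-6, 80e-6)
print(f"tau ~ {tau / 1e6:.2f} MPa")
```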

  2. Evaluation of factors affecting ice forces at selected bridges in South Dakota

    USGS Publications Warehouse

    Niehus, Colin A.

    2002-01-01

    During 1998-2002, the U.S. Geological Survey, in cooperation with the South Dakota Department of Transportation (SDDOT), conducted a study to evaluate factors affecting ice forces at selected bridges in South Dakota. The focus of this ice-force evaluation was on maximum ice thickness and ice-crushing strength, which are the most important variables in the SDDOT bridge-design equations for ice forces in South Dakota. Six sites, the James River at Huron, the James River near Scotland, the White River near Oacoma/Presho, the Grand River at Little Eagle, the Oahe Reservoir near Mobridge, and the Lake Francis Case at the Platte-Winner Bridge, were selected for collection of ice-thickness and ice-crushing-strength data. Ice thickness was measured at the six sites from February 1999 until April 2001. This period is representative of the climate extremes of record in South Dakota because it included both one of the warmest and one of the coldest winters on record. The 2000 and 2001 winters were the 8th warmest and 11th coldest winters, respectively, on record at Sioux Falls, South Dakota, which was used to represent the climate at all bridges in South Dakota. Ice thickness measured at the James River sites at Huron and Scotland during 1999-2001 ranged from 0.7 to 2.3 feet and 0 to 1.7 feet, respectively, and ice thickness measured at the White River near Oacoma/Presho site during 2000-01 ranged from 0.1 to 1.5 feet. At the Grand River at Little Eagle site, ice thickness was measured at 1.2 feet in 1999, ranged from 0.5 to 1.2 feet in 2000, and ranged from 0.2 to 1.4 feet in 2001. Ice thickness measured at the Oahe Reservoir near Mobridge site ranged from 1.7 to 1.8 feet in 1999, 0.9 to 1.2 feet in 2000, and 0 to 2.2 feet in 2001. At the Lake Francis Case at the Platte-Winner Bridge site, ice thickness ranged from 1.2 to 1.8 feet in 2001. Historical ice-thickness data measured by the U.S. Geological Survey (USGS) at eight selected streamflow-gaging stations in South Dakota were compiled for 1970-97. The gaging stations included the Grand River at Little Eagle, the White River near Oacoma, the James River near Scotland, the James River near Yankton, the Vermillion River near Wakonda, the Vermillion River near Vermillion, the Big Sioux River near Brookings, and the Big Sioux River near Dell Rapids. Three ice-thickness-estimation equations that potentially could be used for bridge design in South Dakota were selected and included the Accumulative Freezing Degree Day (AFDD), Incremental Accumulative Freezing Degree Day (IAFDD), and Simplified Energy Budget (SEB) equations. These three equations were evaluated by comparing study-collected and historical ice-thickness measurements to equation-estimated ice thicknesses. Input data required by the equations either were collected or compiled for the study or were obtained from the National Weather Service (NWS). An analysis of the data indicated that the AFDD equation best estimated ice thickness in South Dakota using available data sources with an average variation about the measured value of about 0.4 foot. Maximum potential ice thickness was estimated using the AFDD equation at 19 NWS stations located throughout South Dakota. The 1979 winter (the coldest winter on record at Sioux Falls) was the winter used to estimate the maximum potential ice thickness. The estimated maximum potential ice thicknesses generally are largest in northeastern South Dakota at about 3 feet and are smallest in southwestern and south-central South Dakota at about 2 feet. 
From 1999 to 2001, ice-crushing strength was measured at the same six sites where ice thickness was measured. Ice-crushing-strength measurements were done both in the middle of the winter and near spring breakup. The maximum ice-crushing strengths were measured in the mid- to late winter before the spring thaw. Measured ice-crushing strengths were much smaller near spring breakup. Ice-crushing strength measured at the six sites
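
    The AFDD approach referred to above is a Stefan-type relation in which ice thickness grows with the square root of the accumulated freezing degree days, h = α·sqrt(AFDD). The coefficient α depends on snow cover and exposure, and the value used below is a representative placeholder rather than the study's calibrated value.

```python
# Sketch of a Stefan-type AFDD ice-thickness estimate: h = alpha * sqrt(AFDD).
# AFDD accumulates freezing degree days (deg F-days below 32 F here), and
# alpha is an empirical coefficient that depends on snow cover and exposure;
# the value below is a representative placeholder, not the study's calibration.
import math

def afdd(daily_mean_temps_f, base_f=32.0):
    """Accumulate freezing degree days from daily mean air temperatures (F)."""
    return sum(max(base_f - t, 0.0) for t in daily_mean_temps_f)

def ice_thickness_in(afdd_f_days, alpha=0.6):
    """Estimated ice thickness in inches."""
    return alpha * math.sqrt(afdd_f_days)

# Example: a cold stretch with mean daily temperatures around 5-25 F
temps = [18, 12, 8, 15, 20, 25, 10, 5, 14, 22] * 3
thickness = ice_thickness_in(afdd(temps))
print(f"AFDD = {afdd(temps):.0f} F-days, estimated ice ~ {thickness / 12:.1f} ft")
```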

  3. Computer prediction of three-dimensional potential flow fields in which aircraft propellers operate. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Jumper, S. J.

    1982-01-01

    A computer program was developed to calculate the three-dimensional, steady, incompressible, inviscid, irrotational flow field at the propeller plane (propeller removed) located upstream of an arbitrary airframe geometry. The program uses a horseshoe vortex of known strength to model the wing. All other airframe surfaces are modeled by a network of source panels of unknown strength, which is exposed to a uniform free stream and the wing-induced velocity field. By satisfying boundary conditions on each panel (the Neumann problem), with relaxed boundary conditions used on certain panels to simulate inlet inflow, the source strengths are determined. From the known source and wing vortex strengths, the resulting velocity fields on the airframe surface and at the propeller plane are obtained. All program equations are derived in detail, and a brief description of the program structure is presented. A user's manual which fully documents the program is cited. Computer predictions of the flow on the surface of a sphere and at a propeller plane upstream of the sphere are compared with the exact mathematical solutions. Agreement is good, and correct program operation is verified.
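
    The core numerical step described above, determining unknown panel source strengths from Neumann boundary conditions in the presence of a uniform free stream and a known wing-induced field, amounts to assembling and solving a dense linear system. The sketch below is schematic: the influence function is a placeholder point-source kernel, not the program's quadrilateral source-panel kernel, and the geometry is a toy four-point example.

```python
# Schematic of the panel-method solve described above: unknown source panel
# strengths are found from Neumann boundary conditions (zero normal flow) in
# the presence of a uniform free stream and a known wing-induced velocity
# field. The influence function here is a placeholder point-source kernel,
# not the actual quadrilateral source-panel kernel used in the program.
import numpy as np

def source_induced_velocity(control_point, source_point):
    """Unit-strength 3-D point-source velocity (placeholder for a panel kernel)."""
    r = control_point - source_point
    return r / (4.0 * np.pi * np.linalg.norm(r) ** 3)

def solve_source_strengths(ctrl_pts, normals, src_pts, v_freestream, v_wing):
    """A[i, j] = (velocity at control point i from unit source j) . n_i."""
    n = len(ctrl_pts)
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            A[i, j] = source_induced_velocity(ctrl_pts[i], src_pts[j]) @ normals[i]
    # Neumann condition: the total normal velocity vanishes on each panel.
    rhs = -np.einsum('ij,ij->i', v_freestream + v_wing, normals)
    return np.linalg.solve(A, rhs)

# Tiny illustrative setup: 4 control points on a unit sphere
rng = np.random.default_rng(1)
ctrl = rng.standard_normal((4, 3))
ctrl /= np.linalg.norm(ctrl, axis=1)[:, None]
normals = ctrl.copy()                       # outward normals of a sphere
sources = 0.9 * ctrl                        # sources placed just inside the surface
v_inf = np.tile([1.0, 0.0, 0.0], (4, 1))    # uniform free stream
v_wing = np.zeros((4, 3))                   # wing-induced field (zero here)
print(solve_source_strengths(ctrl, normals, sources, v_inf, v_wing))
```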

  4. Flow Strength of Shocked Aluminum in the Solid-Liquid Mixed Phase Region

    NASA Astrophysics Data System (ADS)

    Reinhart, William

    2011-06-01

    Shock waves have been used to determine material properties under high shock stresses and very high loading rates. The determination of mechanical properties such as compressive strength under shock compression has proven to be difficult, and estimates of strength have been limited to approximately 100 GPa or less in aluminum. The term "strength" has been used in different ways. For a Von Mises solid, the yield strength is equal to twice the shear strength of the material and represents the maximum shear stress that can be supported before yield. Many of these concepts have been applied to materials that undergo high strain-rate dynamic deformation, as in uni-axial strain shock experiments. In shock experiments, it has been observed that the shear stress in the shocked state is not equal to the shear strength, as evidenced by elastic recompressions in reshock experiments. This has led to an assumption that there is a yield surface with maximum (loading) and minimum (unloading) shear strength, yet the actual shear stress lies somewhere between these values. This work provides the first simultaneous measurements of unloading velocity and flow strength for the transition of solid aluminum to the liquid phase. The investigation describes the flow strength observed in 1100 (pure), 6061-T6, and 2024 aluminum in the solid-liquid mixed phase region. Reloading and unloading techniques were utilized to provide independent data on the two unknowns (τc and τo), so that the actual critical shear strength and the shear stress at the shock state could be estimated. Three different observations indicate a change in material response for stresses of 100 to 160 GPa: 1) release wave speed (reloading where applicable) measurements, 2) yield strength measurements, and 3) estimates of Poisson's ratio, all of which provide information on the melt process including internal consistency and/or non-equilibrium and rate-dependent melt behavior. The study investigates the strength properties in the solid region and as the material traverses the solid-mixed-liquid regime. Differences observed appear to be the product of alloying and/or microstructural composition of the aluminum. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  5. The Mechanical Strength of Si Foams in the Mushy Zone during Solidification of Al–Si Alloys

    PubMed Central

    Lim, Jeon Taik; Youn, Ji Won; Seo, Seok Yong; Kim, Ki Young; Kim, Suk Jun

    2017-01-01

    The mechanical strength of an Al-30% Si alloy in the mushy zone was estimated by using a novel centrifugation apparatus. In the apparatus, the alloy melt was partially solidified, forming a porous structure made of primary Si platelets (Si foam) while cooling. Subsequently, pressure generated by centrifugal force pushed the liquid phase out of the foam. The estimated mechanical strength of the Si foam in the temperature range 850–993 K was very low (62 kPa to 81 kPa). This is about two orders of magnitude lower than the mechanical strength at room temperature as measured by compressive tests. When the centrifugal stress was higher than the mechanical strength of the foam, the foam fractured, and the primary Si crystallites were extracted along with the Al-rich melt. Therefore, to maximize the centrifugal separation efficiency of the Al-30% Si alloy, the centrifugal stress should be in the range of 62–81 kPa. PMID:28772695

  6. Borrowing of strength and study weights in multivariate and network meta-analysis

    PubMed Central

    Jackson, Dan; White, Ian R; Price, Malcolm; Copas, John; Riley, Richard D

    2016-01-01

    Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of ‘borrowing of strength’. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis). PMID:26546254

  7. THE MULTI-WAVELENGTH EXTREME STARBURST SAMPLE OF LUMINOUS GALAXIES. I. SAMPLE CHARACTERISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laag, Edward; Croft, Steve; Canalizo, Gabriela

    2010-12-15

    This paper introduces the Multi-wavelength Extreme Starburst Sample (MESS), a new catalog of 138 star-forming galaxies (0.1 < z < 0.3) optically selected from the Sloan Digital Sky Survey using emission line strength diagnostics to have a high absolute star formation rate (SFR; minimum 11 M_sun yr^-1 with median SFR ~ 61 M_sun yr^-1 based on a Kroupa initial mass function). The MESS was designed to complement samples of nearby star-forming galaxies such as the luminous infrared galaxies (LIRGs) and ultraviolet luminous galaxies (UVLGs). Observations using the Multi-band Imaging Photometer (24, 70, and 160 μm channels) on the Spitzer Space Telescope indicate that the MESS galaxies have IR luminosities similar to those of LIRGs, with an estimated median L_TIR ~ 3 x 10^11 L_sun. The selection criteria for the MESS objects suggest they may be less obscured than typical far-IR-selected galaxies with similar estimated SFRs. Twenty out of 70 of the MESS objects detected in the Galaxy Evolution Explorer FUV band also appear to be UVLGs. We estimate the SFRs based directly on luminosities to determine the agreement for these methods in the MESS. We compare these estimates to the emission line strength technique, since the effective measurement of dust attenuation plays a central role in these methods. We apply an image stacking technique to the Very Large Array FIRST survey radio data to retrieve 1.4 GHz luminosity information for 3/4 of the sample covered by FIRST, including sources too faint, and at too high a redshift, to be detected in FIRST. We also discuss the relationship between the MESS objects and samples selected through alternative criteria. Morphologies will be the subject of a forthcoming paper.

  8. Probing the Physical Properties of High Redshift Optically Obscured Galaxies in the Bootes NOAO Deep Wide Field Survey using the Infrared Spectrograph on Spitzer

    NASA Astrophysics Data System (ADS)

    Higdon, S. J. U.; Weedman, D.; Higdon, J. L.; Houck, J. R.; Soifer, B. T.; Armus, L.; Charmandaris, V.; Herter, T. L.; Brandl, B. R.; Brown, M. J. I.; Dey, A.; Jannuzi, B.; Le Floc'h, E.; Rieke, M.

    2004-12-01

    We have surveyed a field covering 8.4 deg^2 within the NOAO Deep Wide Field Survey region in Boötes with the Multiband Imaging Photometer on the Spitzer Space Telescope to a limiting 24 μm flux density of 0.3 mJy, identifying ~22,000 point sources. Thirty-one sources from this survey with F(24 μm) > 0.75 mJy, which are optically "invisible" (R > 26) or very faint (I > 24), have been observed with the low-resolution modules of the Infrared Spectrograph on SST. The spectra were extracted using the IRS SMART spectral analysis package in order to optimize their signal to noise. A suite of mid-IR spectral templates of well-known galaxies, observed as part of the IRS GTO program, is used to perform formal fits to the spectral energy distribution of the Boötes sources. These fits enable us to measure their redshift, to calculate the depth of the 9.7 μm silicate feature along with the strength of the 7.7 μm PAH feature, and to estimate their bolometric luminosities. We compare the mid-IR slope, the measured PAH luminosity, and the optical depth of these sources with those of galaxies in the local Universe. As a result we are able to estimate the contribution of a dust-enshrouded active nucleus to the mid-IR and bolometric luminosity of these systems. This work is based [in part] on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407. Support for this work was provided by NASA through Contract Number 1257184 issued by JPL/Caltech.

  9. Magnetic inhibition of convection and the fundamental properties of low-mass stars. I. Stars with a radiative core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feiden, Gregory A.; Chaboyer, Brian, E-mail: gregory.a.feiden.gr@dartmouth.edu, E-mail: brian.chaboyer@dartmouth.edu

    2013-12-20

    Magnetic fields are hypothesized to inflate the radii of low-mass stars (defined as less massive than 0.8 M☉) in detached eclipsing binaries (DEBs). We investigate this hypothesis using the recently introduced magnetic Dartmouth stellar evolution code. In particular, we focus on stars thought to have a radiative core and convective outer envelope by studying in detail three individual DEBs: UV Psc, YY Gem, and CU Cnc. Our results suggest that the stabilization of thermal convection by a magnetic field is a plausible explanation for the observed model-radius discrepancies. However, surface magnetic field strengths required by the models are significantly stronger than those estimated from observed coronal X-ray emission. Agreement between model-predicted surface magnetic field strengths and those inferred from X-ray observations can be found by assuming that the magnetic field sources its energy from convection. This approach makes the transport of heat by convection less efficient and is akin to reduced convective mixing length methods used in other studies. Predictions for the metallicity and magnetic field strengths of the aforementioned systems are reported. We also develop an expression relating a reduction in the convective mixing length to a magnetic field strength in units of the equipartition value. Our results are compared with those from previous investigations that incorporate magnetic fields to explain the low-mass DEB radius inflation. Finally, we explore how the effects of magnetic fields might affect mass determinations using asteroseismic data and the implication of magnetic fields on exoplanet studies.

  10. The Effect of Alkaline Activator Ratio on the Compressive Strength of Fly Ash-Based Geopolymer Paste

    NASA Astrophysics Data System (ADS)

    Lăzărescu, A. V.; Szilagyi, H.; Baeră, C.; Ioani, A.

    2017-06-01

    Alkaline activation of fly ash is a particular procedure in which ash resulting from a power plant, combined with a specific alkaline activator, creates a solid material when dried at a certain temperature. In order to obtain desirable compressive strengths, the mix design of fly ash-based geopolymer pastes should be explored comprehensively. To determine the preliminary compressive strength of fly ash-based geopolymer paste using Romanian material sources, various ratios of Na2SiO3 solution/NaOH solution were produced, keeping the fly ash/alkaline activator ratio constant. All the mixes were then cured at 70 °C for 24 hours and tested at 2 and 7 days, respectively. The aim of this paper is to present the preliminary compressive strength results for producing fly ash-based geopolymer paste using Romanian material sources, the effect of the alkaline activator ratio on the compressive strength, and directions for future research.

  11. I/O values for determination of the origin of some indoor organic pollutants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Otson, R.; Zhu, J.

    To reduce human health risks resulting from exposure to toxic chemicals, it is important to determine the origin of such substances. The ratio (I/O) of indoor to outdoor concentrations of selected airborne vapor phase organic compounds (VPOC) was used to estimate the contribution of indoor sources to levels of the compounds in the air of 44 homes selected randomly in the Greater Toronto Area (GTA). Average I/O values for all of the homes were greater than 1.5 for 10 of the 20 detected target compounds, and it could be concluded that indoor VPOC sources had a greater impact on indoor air quality than outdoor air in these instances. A significant finding, which aptly demonstrates the importance of indoor sources and pollution, was the overall I/O value of 5.2 for the 44 representative GTA homes. Possible indoor sources for most of the 10 compounds could be identified, based on information collected by means of a questionnaire, as well as from the scientific literature. However, possible sources for some compounds could not be determined as readily, probably because of the presence of multiple sources, and of sources which had not been previously noted, such as foods and beverages. The sensitivity of I/O values to various factors (e.g., source strength, air exchange rates, precision of measurements, unanticipated sources), and the reliability of determining the origin of pollutants by use of I/O values alone, were examined, with some examples. If used judiciously, the I/O value can be a useful tool for IAQ investigations.

  12. The experimental design approach to eluotropic strength of 20 solvents in thin-layer chromatography on silica gel.

    PubMed

    Komsta, Łukasz; Stępkowska, Barbara; Skibiński, Robert

    2017-02-03

    The eluotropic strength on thin-layer silica plates was investigated for 20 chromatographic-grade solvents available on the current market. 35 model compounds were used as test subjects in the investigation. The use of a modern mixture screening design allowed the eluotropic strength of each solvent to be estimated as a separate elution coefficient with an acceptable estimation error (0.0913 in R_M value). An additional bootstrapping technique was used to check the distribution and uncertainty of the eluotropic estimates, yielding confidence intervals very similar to those from linear regression. Principal component analysis showed that a single parameter (mean eluotropic strength) is sufficient to describe the solvent property, as it explains almost 90% of the variance of retention. The obtained eluotropic data are a good supplement to earlier published results, and their values can be interpreted in the context of R_M differences. Copyright © 2017 Elsevier B.V. All rights reserved.
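
    The bootstrapping step mentioned above can be sketched generically: resample the observations with replacement, refit the linear elution-coefficient model, and take percentile intervals of the refitted coefficients. The mixture design and R_M data below are synthetic placeholders, not the published 20-solvent, 35-compound design.

```python
# Generic sketch of bootstrapping least-squares elution coefficients and
# taking percentile confidence intervals. The design matrix and R_M responses
# below are synthetic placeholders, not the published experimental design.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_solvents = 200, 20

# Hypothetical ternary-mixture design: each observation uses 3 of the 20
# solvents with Dirichlet-distributed proportions.
X = np.zeros((n_obs, n_solvents))
for i in range(n_obs):
    cols = rng.choice(n_solvents, size=3, replace=False)
    X[i, cols] = rng.dirichlet(np.ones(3))

beta_true = rng.normal(0.0, 1.0, n_solvents)       # "true" elution coefficients
y = X @ beta_true + rng.normal(0.0, 0.09, n_obs)   # synthetic R_M responses

def fit(Xb, yb):
    return np.linalg.lstsq(Xb, yb, rcond=None)[0]

boot = np.empty((1000, n_solvents))
for b in range(1000):
    idx = rng.integers(0, n_obs, size=n_obs)       # resample rows with replacement
    boot[b] = fit(X[idx], y[idx])

ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)
print(np.round(np.column_stack([fit(X, y), ci_low, ci_high])[:5], 3))
```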

  13. The experimental design approach to eluotropic strength of 20 solvents in thin-layer chromatography on silica gel.

    PubMed

    Komsta, Łukasz; Stępkowska, Barbara; Skibiński, Robert

    2017-01-04

    The eluotropic strength on thin-layer silica plates was investigated for 20 chromatographic-grade solvents available on the current market. 35 model compounds were used as test subjects in the investigation. The use of a modern mixture screening design allowed the eluotropic strength of each solvent to be estimated as a separate elution coefficient with an acceptable estimation error (0.0913 in R_M value). An additional bootstrapping technique was used to check the distribution and uncertainty of the eluotropic estimates, yielding confidence intervals very similar to those from linear regression. Principal component analysis showed that a single parameter (mean eluotropic strength) is sufficient to describe the solvent property, as it explains almost 90% of the variance of retention. The obtained eluotropic data are a good supplement to earlier published results, and their values can be interpreted in the context of R_M differences. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Revisiting the contribution of land transport and shipping emissions to tropospheric ozone

    NASA Astrophysics Data System (ADS)

    Mertens, Mariano; Grewe, Volker; Rieger, Vanessa S.; Jöckel, Patrick

    2018-04-01

    We quantify the contribution of land transport and shipping emissions to tropospheric ozone for the first time with a chemistry-climate model including an advanced tagging method (also known as source apportionment), which considers not only the emissions of nitrogen oxides (NOx = NO + NO2), carbon monoxide (CO), and volatile organic compounds (VOC) separately, but also their non-linear interaction in producing ozone. For summer conditions a contribution of land transport emissions to ground-level ozone of up to 18 % in North America and Southern Europe is estimated, which corresponds to 12 and 10 nmol mol^-1, respectively. The simulation results indicate a contribution of shipping emissions to ground-level ozone during summer of up to 30 % in the North Pacific Ocean (up to 12 nmol mol^-1) and 20 % in the North Atlantic Ocean (12 nmol mol^-1). With respect to the contribution to the tropospheric ozone burden, we quantified values of 8 and 6 % for land transport and shipping emissions, respectively. Overall, the emissions from land transport contribute around 20 % to the net ozone production near the source regions, while shipping emissions contribute up to 52 % to the net ozone production in the North Pacific Ocean. To put these estimates in the context of literature values, we review previous studies. Most of them used the perturbation approach, in which the results of two simulations, one with all emissions and one with changed emissions for the source of interest, are compared. For better comparability with these studies, we also performed additional perturbation simulations, which allow for a consistent comparison of results using the perturbation and the tagging approach. The comparison shows that the results strongly depend on the chosen methodology (tagging or perturbation approach) and on the strength of the perturbation. A more in-depth analysis for the land transport emissions reveals that the two approaches give different results, particularly in regions with large emissions (up to a factor of 4 for Europe). Our estimates of the ozone radiative forcing due to land transport and shipping emissions are, based on the tagging method, 92 and 62 mW m^-2, respectively. Compared to our best estimates, previously reported values using the perturbation approach are almost a factor of 2 lower, while previous estimates using NOx-only tagging are almost a factor of 2 larger. Overall our results highlight the importance of differentiating between the perturbation and the tagging approach, as they answer two different questions. In line with previous studies, we argue that only the tagging approach (or source apportionment approaches in general) can estimate the contribution of emissions, which is important for attributing emission sources to climate change and/or extreme ozone events. The perturbation approach, however, is important to investigate the effect of an emission change. To effectively assess mitigation options, both approaches should be combined. This combination allows us to track changes in the ozone production efficiency of emissions from sources which are not mitigated and shows how the ozone share caused by these unmitigated emission sources subsequently increases.
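
    The difference between the two attribution approaches can be illustrated with a toy nonlinear production function: the perturbation estimate (ozone with all emissions minus ozone with one source removed) and a proportional tagging-style share generally disagree whenever production is nonlinear in the emissions. The function and numbers below are purely illustrative and have nothing to do with the model's actual chemistry.

```python
# Toy illustration of why perturbation and tagging/source-apportionment
# estimates differ for nonlinear chemistry. The production function below is
# purely illustrative, not the chemistry-climate model's ozone mechanism.

def ozone(e_land, e_ship, e_other):
    total = e_land + e_ship + e_other
    return total ** 0.7            # nonlinear in total precursor emissions

e_land, e_ship, e_other = 3.0, 2.0, 5.0
o3_all = ozone(e_land, e_ship, e_other)

# Perturbation approach: remove the land-transport source entirely.
perturbation_land = o3_all - ozone(0.0, e_ship, e_other)

# Simple proportional tagging-style share (the real tagging scheme follows
# the reaction pathways; a proportional split is only a stand-in here).
tagged_land = o3_all * e_land / (e_land + e_ship + e_other)

print(f"perturbation estimate: {perturbation_land:.3f}")
print(f"tagged share:          {tagged_land:.3f}")
# The two attributions differ, and the sum of the perturbation estimates over
# all sources would not reproduce the total ozone, unlike the tagged shares.
```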

  15. Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source

    NASA Astrophysics Data System (ADS)

    Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.

    2014-06-01

    To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass, and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two degrees of freedom system, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, i.e. a spacecraft supporting an instrument). The motivation of this work is to establish a method for the computation of a realistic value of C2 to perform a representative random vibration test based on force limitation, when the adjacent structure (source) description is more or less unknown. Marchand formulated a conservative estimation of C2 based on the maximum modal effective mass and damping of the test item (load), when no description of the supporting structure (source) is available [13]. Marchand discussed the formal description of obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2. However, finite element models are needed to compute the spectra of the PSD of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), where simplified asparagus patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable to compute the value of the parameter C2. When no mathematical model of the source can be made available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus patch model of the source can be approximated. The computation of the value of C2 can be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Strength and stiffness design rules for spacecraft, instrumentation, units, etc. will be practiced, as mentioned in ECSS Standards and Handbooks, Launch Vehicle User's manuals, papers, books, etc. A probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
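
    The semi-empirical force limit usually associated with this approach specifies the interface-force spectral density from the acceleration specification as S_FF = C^2 M0^2 S_AA below a break frequency, rolling off above it. The sketch below implements that recipe; the C^2 value, break frequency, roll-off exponent, and acceleration spectrum are placeholders chosen for illustration, not values derived in the paper.

```python
# Sketch of a semi-empirical force limit for force-limited vibration testing:
# S_FF(f) = C^2 * M0^2 * S_AA(f) below a break frequency f0, with a roll-off
# above it. C^2, f0, the roll-off exponent and the acceleration spec are
# placeholder values for illustration, not results from the paper.
import numpy as np

def force_limit_psd(freq_hz, accel_psd_g2, total_mass_kg, c2=2.0,
                    f_break_hz=80.0, rolloff_exp=2.0):
    """Return the force-limit PSD in N^2/Hz for each frequency."""
    g = 9.81
    s_aa = accel_psd_g2 * g ** 2                       # (m/s^2)^2 / Hz
    s_ff = c2 * total_mass_kg ** 2 * s_aa
    rolloff = np.where(freq_hz > f_break_hz,
                       (f_break_hz / freq_hz) ** rolloff_exp, 1.0)
    return s_ff * rolloff

freq = np.array([20.0, 50.0, 80.0, 200.0, 500.0, 2000.0])
accel_spec = np.full_like(freq, 0.04)                  # flat 0.04 g^2/Hz spec
print(force_limit_psd(freq, accel_spec, total_mass_kg=25.0).round(1))
```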

  16. Theoretical prediction of thick wing and pylon-fuselage-fanpod-nacelle aerodynamic characteristics at subcritical speeds. Part 1: Theory and results

    NASA Technical Reports Server (NTRS)

    Tulinius, J. R.

    1974-01-01

    The theoretical development and the comparison of results with data for a thick wing and pylon-fuselage-fanpod-nacelle analysis are presented. The analysis utilizes potential flow theory to compute the surface velocities and pressures, section lift and center of pressure, and the total configuration lift, moment, and vortex drag. The skin friction drag is also estimated in the analysis. The perturbation velocities induced by the wing and pylon, fuselage and fanpod, and nacelle are represented by source and vortex lattices, quadrilateral vortices, and source frustums, respectively. The strengths of these singularities are solved for simultaneously, including all interference effects. The wing and pylon planforms, twists, cambers, and thickness distributions, and the fuselage and fanpod geometries can be arbitrary in shape, provided the surface gradients are smooth. The flow through the nacelle is assumed to be axisymmetric. An axisymmetric center engine hub can also be included. The pylon and nacelle can be attached to the wing, fuselage, or fanpod.

  17. Detecting axion stars with radio telescopes

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Hamada, Yuta

    2018-06-01

    When axion stars fly through an astrophysical magnetic background, axion-to-photon conversion may generate a large electromagnetic radiation power. After including the interference effects of the spatially extended axion-star source and the macroscopic medium effects, we estimate the radiation power when an axion star meets a neutron star. For a dense axion star with a mass of 10^-13 M⊙, the radiated power is of order 10^11 W × (100 μeV/m_a)^4 (B/10^10 Gauss)^2, with m_a the axion particle mass and B the strength of the neutron star magnetic field. If axion stars make up a large fraction of the dark matter energy density, this encounter event, with a transient O(0.1 s) radio signal, may happen in our galaxy with an average source distance of one kiloparsec. The predicted spectral flux density is of the order of μJy for a neutron star with B ~ 10^13 Gauss. The existing Arecibo, GBT, JVLA and FAST and the upcoming SKA radio telescopes have excellent discovery potential for dense axion stars.
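
    The quoted scaling can be evaluated directly; the small function below simply implements the order-of-magnitude formula from the abstract, P ~ 10^11 W (100 μeV/m_a)^4 (B/10^10 Gauss)^2.

```python
# Direct evaluation of the order-of-magnitude radiated-power scaling quoted
# above: P ~ 1e11 W * (100 microeV / m_a)^4 * (B / 1e10 Gauss)^2.

def radiated_power_w(m_a_microev, b_gauss):
    return 1e11 * (100.0 / m_a_microev) ** 4 * (b_gauss / 1e10) ** 2

# Example: a 100 microeV axion and a magnetar-strength 1e13 Gauss field
print(f"P ~ {radiated_power_w(100.0, 1e13):.1e} W")
```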

  18. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    The real-time, accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of a systematic analysis of the sources of geomagnetic-field measurement error, builds a complete measurement model into which the previously unconsidered geomagnetic daily variation field is introduced. This paper proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating the parameters to obtain the statistically optimal solution. The experiment results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its removal of the dependence on a high-precision measurement instrument. PMID:28445508
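
    A minimal example of the kind of extended Kalman-filter measurement update involved is shown below: it estimates a constant three-axis bias from total-field-strength measurements against a reference field value. This is one common simplified formulation chosen for illustration, not the authors' complete error model, and all numbers are placeholders.

```python
# Minimal sketch of an extended Kalman-filter measurement update for
# magnetometer bias estimation: the state is a constant 3-axis bias b, and
# each measurement compares a reference total field strength (e.g. from a
# geomagnetic model) with |m_measured - b|. This is one common simplified
# formulation for illustration, not the authors' complete error model.
import numpy as np

def ekf_bias_update(b, P, m_meas, f_ref, r_var=25.0):
    """One EKF update. b: (3,) bias estimate, P: (3,3) covariance,
    m_meas: (3,) measured field (nT), f_ref: reference field strength (nT)."""
    diff = m_meas - b
    h = np.linalg.norm(diff)            # predicted field strength
    H = (-diff / h).reshape(1, 3)       # Jacobian of h with respect to b
    S = H @ P @ H.T + r_var             # innovation variance
    K = (P @ H.T) / S                   # Kalman gain, shape (3, 1)
    b = b + (K * (f_ref - h)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return b, P

rng = np.random.default_rng(0)
true_bias = np.array([120.0, -80.0, 40.0])            # nT, hypothetical
b, P = np.zeros(3), np.eye(3) * 1e4
for _ in range(500):
    direction = rng.standard_normal(3)
    direction /= np.linalg.norm(direction)
    true_field = 50000.0 * direction                  # 50,000 nT reference field
    m = true_field + true_bias + rng.normal(0.0, 5.0, 3)
    b, P = ekf_bias_update(b, P, m, 50000.0)
print(np.round(b, 1))                                 # should approach true_bias
```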

  19. Cerebello-cortical network fingerprints differ between essential, Parkinson's and mimicked tremors.

    PubMed

    Muthuraman, Muthuraman; Raethjen, Jan; Koirala, Nabin; Anwar, Abdul Rauf; Mideksa, Kidist G; Elble, Rodger; Groppa, Sergiu; Deuschl, Günter

    2018-06-01

    Cerebello-thalamo-cortical loops play a major role in the emergence of pathological tremors and voluntary rhythmic movements. It is unclear whether these loops differ anatomically or functionally in different types of tremor. We compared age- and sex-matched groups of patients with Parkinson's disease or essential tremor and healthy controls (n = 34 per group). High-density 256-channel EEG and multi-channel EMG from extensor and flexor muscles of both wrists were recorded simultaneously while extending the hands against gravity with the forearms supported. Tremor was thereby recorded from patients, and voluntarily mimicked tremor was recorded from healthy controls. Tomographic maps of EEG-EMG coherence were constructed using a beamformer algorithm coherent source analysis. The direction and strength of information flow between different coherent sources were estimated using time-resolved partial-directed coherence analyses. Tremor severity and motor performance measures were correlated with connection strengths between coherent sources. The topography of oscillatory coherent sources in the cerebellum differed significantly among the three groups, but the cortical sources in the primary sensorimotor region and premotor cortex were not significantly different. The cerebellar and cortical source combinations matched well with known cerebello-thalamo-cortical connections derived from functional MRI resting state analyses according to the Buckner-atlas. The cerebellar sources for Parkinson's tremor and essential tremor mapped primarily to primary sensorimotor cortex, but the cerebellar source for mimicked tremor mapped primarily to premotor cortex. Time-resolved partial-directed coherence analyses revealed activity flow mainly from cerebellum to sensorimotor cortex in Parkinson's tremor and essential tremor and mainly from cerebral cortex to cerebellum in mimicked tremor. EMG oscillation flowed mainly to the cerebellum in mimicked tremor, but oscillation flowed mainly from the cerebellum to EMG in Parkinson's and essential tremor. The topography of cerebellar involvement differed among Parkinson's, essential and mimicked tremors, suggesting different cerebellar mechanisms in tremorogenesis. Indistinguishable areas of sensorimotor cortex and premotor cerebral cortex were involved in all three tremors. Information flow analyses suggest that sensory feedback and cortical efferent copy input to cerebellum are needed to produce mimicked tremor, but tremor in Parkinson's disease and essential tremor do not depend on these mechanisms. Despite the subtle differences in cerebellar source topography, we found no evidence that the cerebellum is the source of oscillation in essential tremor or that the cortico-bulbo-cerebello-thalamocortical loop plays different tremorogenic roles in Parkinson's and essential tremor. Additional studies are needed to decipher the seemingly subtle differences in cerebellocortical function in Parkinson's and essential tremors.

  20. 75 FR 38594 - Buy America Waiver Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-02

    ... not able to find a domestic source for the high strength steel bars ASTM A722M 150 ksi (1\\7/8\\ inches... concludes that a public interest waiver is appropriate for the use of non-domestic high strength steel bars... appropriate to use non- domestic high strength steel bars based on the public interest provision in FHWA's...

  1. Recent Approaches to Estimate Associations Between Source-Specific Air Pollution and Health.

    PubMed

    Krall, Jenna R; Strickland, Matthew J

    2017-03-01

    Estimating health effects associated with source-specific exposure is important for better understanding how pollution impacts health and for developing policies to better protect public health. Although epidemiologic studies of sources can be informative, these studies are challenging to conduct because source-specific exposures (e.g., particulate matter from vehicles) often are not directly observed and must be estimated. We reviewed recent studies that estimated associations between pollution sources and health to identify methodological developments designed to address important challenges. Notable advances in epidemiologic studies of sources include approaches for (1) propagating uncertainty in source estimation into health effect estimates, (2) assessing regional and seasonal variability in emissions sources and source-specific health effects, and (3) addressing potential confounding in estimated health effects. Novel methodological approaches to address challenges in studies of pollution sources, particularly evaluation of source-specific health effects, are important for determining how source-specific exposure impacts health.

  2. A Resonantly Excited Disk-Oscillation Model of High-Frequency QPOs of Microquasars

    NASA Astrophysics Data System (ADS)

    Kato, Shoji

    2012-12-01

    A possible model of twin high-frequency QPOs (HF QPOs) of microquasars is examined. The disk is assumed to have global magnetic fields and to be deformed with a two-armed pattern. In this deformed disk, a set of a two-armed (m = 2) vertical p-mode oscillation and an axisymmetric (m = 0) g-mode oscillation is considered. They resonantly interact through the disk deformation when their frequencies are the same. This resonant interaction amplifies the set of the above oscillations in the case where these two oscillations have wave energies of opposite signs. These oscillations are assumed to be excited most efficiently in the case where the radial group velocities of these two waves vanish at the same place. The above set of oscillations is not unique, depending on the node number n, of oscillations in the vertical direction. We consider that the basic two sets of oscillations correspond to the twin QPOs. The frequencies of these oscillations depend on the disk parameters, such as the strength of the magnetic fields. For observational mass ranges of GRS 1915+ 105, GRO J1655-40, XTE J1550-564, and HEAO H1743-322, the spins of these sources are estimated. High spins of these sources can be described if the disks have weak poloidal magnetic fields as well as toroidal magnetic fields of moderate strength. In this model the 3:2 frequency ratio of high-frequency QPOs is not related to their excitation, but occurs by chance.

  3. Ab initio LDA+U prediction of the tensile properties of chromia across multiple length scales

    NASA Astrophysics Data System (ADS)

    Mosey, Nicholas J.; Carter, Emily A.

    2009-02-01

    Periodic density functional theory (DFT) and DFT+U calculations are used to evaluate various mechanical properties associated with the fracture of chromia (Cr2O3) along the [0 0 0 1] and [0 1 1¯ (3/2) (a/c)2 2] directions. The properties investigated include the tensile strength, elastic constants, and surface energies. The tensile strengths are evaluated using an ideal tensile test, which provides the theoretical tensile strength, and by fitting the calculated data to universal binding energy relationships (UBER), which permit the extrapolation of the calculated results to arbitrary length scales. The results demonstrate the ability of the UBER to yield a realistic estimate of the tensile strength of a 10-μm-thick sample of Cr2O3 using data obtained through calculations on nanoscopic systems. We predict that Cr2O3 will fracture most easily in the [0 1 1¯ (3/2) (a/c)2 2] direction, with a best estimate for the tensile strength of 386 MPa for a 10 μm grain, consistent with flexural strength measurements for chromia. The grain becomes considerably stronger at the nanoscale, where we predict a tensile strength along the same direction of 32.1 GPa for a 1.45 nm crystallite. The results also provide insight into the origin of the direction dependence of the mechanical properties of Cr2O3, with the differences in the behavior along different directions being related to the number of Cr-O bonds supporting the applied tensile load. Additionally, the results shed light on various practical aspects of modeling the mechanical properties of materials with DFT+U calculations and in using UBERs to estimate the mechanical properties of materials across disparate length scales.
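
    The extrapolation step rests on fitting stress-separation data to a universal binding energy relationship. A minimal sketch of that fit, assuming the standard Rose-Smith-Ferrante traction form and entirely invented data points (not the paper's DFT+U results), is given below.

    import numpy as np
    from scipy.optimize import curve_fit

    def uber_stress(d, sigma_max, dc):
        """Traction vs. separation derived from the UBER binding-energy curve."""
        return sigma_max * (d / dc) * np.exp(1.0 - d / dc)

    # Illustrative (made-up) ab initio points: separation in angstrom, stress in GPa.
    sep = np.array([0.1, 0.2, 0.4, 0.6, 0.9, 1.3, 1.8, 2.5])
    stress = np.array([8.0, 15.5, 26.0, 30.5, 31.0, 26.5, 19.0, 10.5])

    popt, pcov = curve_fit(uber_stress, sep, stress, p0=[30.0, 0.8])
    sigma_max, dc = popt
    print(f"theoretical tensile strength ~ {sigma_max:.1f} GPa at separation {dc:.2f} A")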

  4. Comparison of air-kerma strength determinations for HDR (192)Ir sources.

    PubMed

    Rasmussen, Brian E; Davis, Stephen D; Schmidt, Cal R; Micka, John A; Dewerd, Larry A

    2011-12-01

    To perform a comparison of the interim air-kerma strength standard for high dose rate (HDR) (192)Ir brachytherapy sources maintained by the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL) with measurements of the various source models using modified techniques from the literature. The current interim standard was established by Goetsch et al. in 1991 and has remained unchanged to date. The improved, laser-aligned seven-distance apparatus of the University of Wisconsin Medical Radiation Research Center (UWMRRC) was used to perform air-kerma strength measurements of five different HDR (192)Ir source models. The results of these measurements were compared with those from well chambers traceable to the original standard. Alternative methodologies for interpolating the (192)Ir air-kerma calibration coefficient from the NIST air-kerma standards at (137)Cs and 250 kVp x rays (M250) were investigated and intercompared. As part of the interpolation method comparison, the Monte Carlo code EGSnrc was used to calculate updated values of A(wall) for the Exradin A3 chamber used for air-kerma strength measurements. The effects of air attenuation and scatter, room scatter, as well as the solution method were investigated in detail. The average measurements when using the inverse N(K) interpolation method for the Classic Nucletron, Nucletron microSelectron, VariSource VS2000, GammaMed Plus, and Flexisource were found to be 0.47%, -0.10%, -1.13%, -0.20%, and 0.89% different than the existing standard, respectively. A further investigation of the differences observed between the sources was performed using MCNP5 Monte Carlo simulations of each source model inside a full model of an HDR 1000 Plus well chamber. Although the differences between the source models were found to be statistically significant, the equally weighted average difference between the seven-distance measurements and the well chambers was 0.01%, confirming that it is not necessary to update the current standard maintained at the UWADCL.

  5. The Leeb Hardness Test for Rock: An Updated Methodology and UCS Correlation

    NASA Astrophysics Data System (ADS)

    Corkum, A. G.; Asiri, Y.; El Naggar, H.; Kinakin, D.

    2018-03-01

    The Leeb hardness test (LHT, with test value L_D) is a rebound hardness test, originally developed for metals, that has been correlated with the Unconfined Compressive Strength (test value σ_c) of rock by several authors. The tests can be carried out rapidly, conveniently and nondestructively on core and block samples or on rock outcrops. This makes the relatively small LHT device convenient for field tests. The present study compiles test data from literature sources and presents new laboratory testing carried out by the authors to develop a substantially expanded database with wide-ranging rock types. In addition, the number of impacts that should be averaged to comprise a "test result" was revisited along with the issue of test specimen size. Correlation for L_D and σ_c for various rock types is provided along with recommended testing methodology. The accuracy of correlated σ_c estimates was assessed and reasonable correlations were observed between L_D and σ_c. The study findings show that LHT can be useful particularly for field estimation of σ_c and offers a significant improvement over the conventional field estimation methods outlined by the ISRM (e.g., hammer blows). This test is rapid and simple, with relatively low equipment costs, and provides a reasonably accurate estimate of σ_c.
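
    A minimal sketch of how such an L_D-σ_c correlation can be fitted is shown below, assuming a power-law form and invented data points; the paper's actual database and recommended equation are not reproduced here.

    import numpy as np

    L_D = np.array([350, 420, 480, 540, 610, 680, 730, 800])      # Leeb hardness
    ucs = np.array([18, 30, 45, 62, 90, 130, 160, 220])           # UCS, MPa

    b, log_a = np.polyfit(np.log(L_D), np.log(ucs), 1)            # log-log linear fit
    a = np.exp(log_a)
    print(f"sigma_c ~= {a:.3g} * L_D^{b:.2f}")

    # Predict UCS for a field reading of, say, L_D = 650 (averaged over several impacts).
    print("predicted UCS:", a * 650 ** b, "MPa")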

  6. “Attacks” or “Whistling”: Impact of Questionnaire Wording on Wheeze Prevalence Estimates

    PubMed Central

    Pescatore, Anina M.; Spycher, Ben D.; Beardsmore, Caroline S.; Kuehni, Claudia E.

    2015-01-01

    Background Estimates of prevalence of wheeze depend on questionnaires. However, wording of questions may vary between studies. We investigated effects of alternative wording on estimates of prevalence and severity of wheeze, and associations with risk factors. Methods White and South Asian children from a population-based cohort (UK) were randomly assigned to two groups and followed up at one, four and six years (1998, 2001, 2003). Parents were asked either if their child ever had “attacks of wheeze” (attack group, N=535), or “wheezing or whistling in the chest” (whistling group, N=2859). All other study aspects were identical, including questions about other respiratory symptoms. Results Prevalence of wheeze ever was lower in the attack group than in the whistling group for all surveys (32 vs. 40% in white children aged one year, p<0.001). Prevalence of other respiratory symptoms did not differ between groups. Wheeze tended to be more severe in the attack group. The strength of association with risk factors was comparable in the two groups. Conclusions The wording of questions on wheeze can affect estimates of prevalence, but has less impact on measured associations with risk factors. Question wording is a potential source of between-study-heterogeneity in meta-analyses. PMID:26114296

  7. Fisher information of a single qubit interacts with a spin-qubit in the presence of a magnetic field

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2018-06-01

    In this contribution, quantum Fisher information is used to estimate the parameters of a central qubit interacting with a single spin qubit. The effect of the longitudinal, transverse and rotating components of the magnetic field strength on the estimation degree is discussed. It is shown that, in the resonance case, the number of peaks, and consequently the size of the estimation regions, increases as the rotating magnetic field strength increases. The precision of estimating the central qubit parameters depends on the initial state settings of the central and spin qubits, namely whether they encode classical or quantum information. The upper bounds of the estimation degree are large when the two qubits encode classical information. In the non-resonance case, the estimation degree depends on which of the longitudinal and transverse strengths is larger. The coupling constant between the central qubit and the spin qubit affects the estimation of the weight and phase parameters differently: the possibility of estimating the weight parameter decreases as the coupling constant increases, while it increases for the phase parameter. For a large number of spin particles, i.e., a spin bath, the upper bound of the Fisher information with respect to the weight parameter of the central qubit decreases as the number of spin particles increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.

  8. Bone strength and muscle properties in postmenopausal women with and without a recent distal radius fracture.

    PubMed

    Crockett, K; Arnold, C M; Farthing, J P; Chilibeck, P D; Johnston, J D; Bath, B; Baxter-Jones, A D G; Kontulainen, S A

    2015-10-01

    Distal radius (wrist) fracture (DRF) in women over age 50 years is an early sign of bone fragility. Women with a recent DRF compared to women without DRF demonstrated lower bone strength, muscle density, and strength, but no difference in dual-energy x-ray absorptiometry (DXA) measures, suggesting DXA alone may not be a sufficient predictor for DRF risk. The objective of this study was to investigate differences in bone and muscle properties between women with and without a recent DRF. One hundred sixty-six postmenopausal women (50-78 years) were recruited. Participants were excluded if they had taken bone-altering medications in the past 6 months or had medical conditions that severely affected daily living or the upper extremity. Seventy-seven age-matched women with a fracture in the past 6-24 months (Fx, n = 32) and without fracture (NFx, n = 45) were measured for bone and muscle properties using the nondominant (NFx) or non-fractured limb (Fx). Peripheral quantitative computed tomography (pQCT) was used to estimate bone strength in compression (BSIc) at the distal radius and tibia, bone strength in torsion (SSIp) at the shaft sites, muscle density, and area at the forearm and lower leg. Areal bone mineral density at the ultradistal forearm, spine, and femoral neck was measured by DXA. Grip strength and the 30-s chair stand test were used as estimates of upper and lower extremity muscle strength. Limb-specific between-group differences were compared using multivariate analysis of variance (MANOVA). There was a significant group difference (p < 0.05) for the forearm and lower leg, with the Fx group demonstrating 16 and 19% lower BSIc, 3 and 6% lower muscle density, and 20 and 21% lower muscle strength at the upper and lower extremities, respectively. There were no differences between groups for DXA measures. Women with recent DRF had lower pQCT-derived estimated bone strength at the distal radius and tibia and lower muscle density and strength at both extremities.

  9. Detecting black bear source–sink dynamics using individual-based genetic graphs

    PubMed Central

    Draheim, Hope M.; Moore, Jennifer A.; Etter, Dwayne; Winterstein, Scott R.; Scribner, Kim T.

    2016-01-01

    Source–sink dynamics affects population connectivity, spatial genetic structure and population viability for many species. We introduce a novel approach that uses individual-based genetic graphs to identify source–sink areas within a continuously distributed population of black bears (Ursus americanus) in the northern lower peninsula (NLP) of Michigan, USA. Black bear harvest samples (n = 569, from 2002, 2006 and 2010) were genotyped at 12 microsatellite loci and locations were compared across years to identify areas of consistent occupancy over time. We compared graph metrics estimated for a genetic model with metrics from 10 ecological models to identify ecological factors that were associated with sources and sinks. We identified 62 source nodes, 16 of which represent important source areas (net flux > 0.7) and 79 sink nodes. Source strength was significantly correlated with bear local harvest density (a proxy for bear density) and habitat suitability. Additionally, resampling simulations showed our approach is robust to potential sampling bias from uneven sample dispersion. Findings demonstrate black bears in the NLP exhibit asymmetric gene flow, and individual-based genetic graphs can characterize source–sink dynamics in continuously distributed species in the absence of discrete habitat patches. Our findings warrant consideration of undetected source–sink dynamics and their implications on harvest management of game species. PMID:27440668

  10. Phenology of Scramble Polygyny in a Wild Population of Chrysomelid Beetles: The Opportunity for and the Strength of Sexual Selection

    PubMed Central

    Baena, Martha Lucía; Macías-Ordóñez, Rogelio

    2012-01-01

    Recent debate has highlighted the importance of estimating both the strength of sexual selection on phenotypic traits, and the opportunity for sexual selection. We describe seasonal fluctuations in mating dynamics of Leptinotarsa undecimlineata (Coleoptera: Chrysomelidae). We compared several estimates of the opportunity for, and the strength of, sexual selection and male precopulatory competition over the reproductive season. First, using a null model, we suggest that the ratio between observed values of the opportunity for sexual selections and their expected value under random mating results in unbiased estimates of the actual nonrandom mating behavior of the population. Second, we found that estimates for the whole reproductive season often misrepresent the actual value at any given time period. Third, mating differentials on male size and mobility, frequency of male fighting and three estimates of the opportunity for sexual selection provide contrasting but complementary information. More intense sexual selection associated to male mobility, but not to male size, was observed in periods with high opportunity for sexual selection and high frequency of male fights. Fourth, based on parameters of spatial and temporal aggregation of female receptivity, we describe the mating system of L. undecimlineata as a scramble mating polygyny in which the opportunity for sexual selection varies widely throughout the season, but the strength of sexual selection on male size remains fairly weak, while male mobility inversely covaries with mating success. We suggest that different estimates for the opportunity for, and intensity of, sexual selection should be applied in order to discriminate how different behavioral and demographic factors shape the reproductive dynamic of populations. PMID:22761675

  11. Estimating the R-curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1985-01-01

    A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.

  12. Comparison of different strongman events: trunk muscle activation and lumbar spine motion, load, and stiffness.

    PubMed

    McGill, Stuart M; McDermott, Art; Fenwick, Chad Mj

    2009-07-01

    Strongman events are attracting more interest as training exercises because of their unique demands. Further, strongman competitors sustain specific injuries, particularly to the back. Muscle electromyographic data from various torso and hip muscles, together with kinematic measures, were input to an anatomically detailed model of the torso to estimate back load, low-back stiffness, and hip torque. Events included the farmer's walk, super yoke, Atlas stone lift, suitcase carry, keg walk, tire flip, and log lift. The results document the unique demands of these whole-body events and, in particular, the demands on the back and torso. For example, the very large moments required at the hip for abduction when performing a yoke walk exceed the strength capability of the hip. Here, muscles such as quadratus lumborum made up for the strength deficit by generating frontal plane torque to support the torso/pelvis. In this way, the stiffened torso acts as a source of strength to allow joints with insufficient strength to be buttressed, resulting in successful performance. Timing of muscle activation patterns in events such as the Atlas stone lift demonstrated the need to integrate the hip extensors before the back extensors. Even so, because of the awkward shape of the stone, the protective neutral spine posture was impossible to achieve, resulting in substantial loading on the back that is placed in a weakened posture. Unexpectedly, the super yoke carry resulted in the highest loads on the spine. This was attributed to the weight of the yoke coupled with the massive torso muscle cocontraction, which produced torso stiffness to ensure spine stability together with buttressing the abduction strength insufficiency of the hips. Strongman events clearly challenge the strength of the body linkage, together with the stabilizing system, in a different way than traditional approaches. The carrying events challenged different abilities than the lifting events, suggesting that loaded carrying would enhance traditional lifting-based strength programs. This analysis also documented the technique components of successful, joint-sparing, strongman event strategies.

  13. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
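
    The core separation idea can be illustrated with a frequency-domain simplification: distribute equivalent monopoles over each known source, solve one joint least-squares problem for all equivalent source strengths from the mixed measurements, and then re-radiate only the monopoles of the source of interest. The sketch below does exactly that with invented geometry and noise-free data; the paper's ITDESM works in the time domain with an iterative solver, so this is only an analogy.

    import numpy as np

    def greens(xm, xs, k):
        """Free-field monopole Green's function matrix between mic and source points."""
        r = np.linalg.norm(xm[:, None, :] - xs[None, :, :], axis=-1)
        return np.exp(-1j * k * r) / (4 * np.pi * r)

    rng = np.random.default_rng(0)
    k = 2 * np.pi * 1000 / 343.0                       # wavenumber at 1 kHz
    mics = np.c_[rng.uniform(-0.3, 0.3, 32), rng.uniform(-0.3, 0.3, 32), np.full(32, 0.3)]
    src1 = np.c_[rng.uniform(-0.05, 0.05, (8, 2)), np.zeros(8)]              # "piston" 1
    src2 = np.c_[rng.uniform(0.25, 0.35, (8, 1)),
                 rng.uniform(-0.05, 0.05, (8, 1)), np.zeros(8)]              # "piston" 2

    q_true = np.r_[rng.normal(size=8) + 1j * rng.normal(size=8),
                   rng.normal(size=8) + 1j * rng.normal(size=8)]
    G = greens(mics, np.vstack([src1, src2]), k)
    p_meas = G @ q_true                                # mixed pressure at the microphones

    q_hat, *_ = np.linalg.lstsq(G, p_meas, rcond=None) # all equivalent source strengths
    g1 = greens(mics, src1, k)
    p1_true, p1_sep = g1 @ q_true[:8], g1 @ q_hat[:8]  # field of source 1 alone
    err = np.linalg.norm(p1_sep - p1_true) / np.linalg.norm(p1_true)
    print(f"relative separation error: {err:.2e}")     # very small for this noise-free case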

  14. Improving Non-Destructive Concrete Strength Tests Using Support Vector Machines

    PubMed Central

    Shih, Yi-Fan; Wang, Yu-Ren; Lin, Kuo-Liang; Chen, Chin-Wen

    2015-01-01

    Non-destructive testing (NDT) methods are important alternatives when destructive tests are not feasible to examine the in situ concrete properties without damaging the structure. The rebound hammer test and the ultrasonic pulse velocity test are two popular NDT methods to examine the properties of concrete. The rebound of the hammer depends on the hardness of the test specimen and ultrasonic pulse travelling speed is related to density, uniformity, and homogeneity of the specimen. Both of these two methods have been adopted to estimate the concrete compressive strength. Statistical analysis has been implemented to establish the relationship between hammer rebound values/ultrasonic pulse velocities and concrete compressive strength. However, the estimated results can be unreliable. As a result, this research proposes an Artificial Intelligence model using support vector machines (SVMs) for the estimation. Data from 95 cylinder concrete samples are collected to develop and validate the model. The results show that combined NDT methods (also known as SonReb method) yield better estimations than single NDT methods. The results also show that the SVMs model is more accurate than the statistical regression model. PMID:28793627
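
    A minimal sketch of the combined (SonReb-style) regression idea is shown below: a support vector regression model mapping rebound number and ultrasonic pulse velocity to compressive strength. The sample values and hyperparameters are invented; the paper's 95-cylinder dataset and tuning are not reproduced.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Columns: rebound number R, ultrasonic pulse velocity V (m/s); target: strength (MPa).
    X = np.array([[28, 3900], [31, 4050], [34, 4200], [37, 4300],
                  [40, 4450], [43, 4550], [46, 4650], [49, 4750]], dtype=float)
    y = np.array([22.0, 26.5, 31.0, 35.5, 41.0, 46.0, 51.5, 57.0])

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
    model.fit(X, y)
    print(model.predict([[38, 4350]]))   # estimated strength for a new pair of NDT readings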

  15. A new EEG synchronization strength analysis method: S-estimator based normalized weighted-permutation mutual information.

    PubMed

    Cui, Dong; Pu, Weiting; Liu, Jing; Bian, Zhijie; Li, Qiuli; Wang, Lei; Gu, Guanghua

    2016-10-01

    Synchronization is an important mechanism for understanding information processing in normal or abnormal brains. In this paper, we propose a new method called normalized weighted-permutation mutual information (NWPMI) for double variable signal synchronization analysis and combine NWPMI with S-estimator measure to generate a new method named S-estimator based normalized weighted-permutation mutual information (SNWPMI) for analyzing multi-channel electroencephalographic (EEG) synchronization strength. The performances including the effects of time delay, embedding dimension, coupling coefficients, signal to noise ratios (SNRs) and data length of the NWPMI are evaluated by using Coupled Henon mapping model. The results show that the NWPMI is superior in describing the synchronization compared with the normalized permutation mutual information (NPMI). Furthermore, the proposed SNWPMI method is applied to analyze scalp EEG data from 26 amnestic mild cognitive impairment (aMCI) subjects and 20 age-matched controls with normal cognitive function, who both suffer from type 2 diabetes mellitus (T2DM). The proposed methods NWPMI and SNWPMI are suggested to be an effective index to estimate the synchronization strength. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Comparison of the hypothetical 57Co brachytherapy source with the 192Ir source

    PubMed Central

    Toossi, Mohammad Taghi Bahreyni; Rostami, Atefeh; Khosroabadi, Mohsen; Khademi, Sara; Knaup, Courtney

    2016-01-01

    Aim of the study The 57Co radioisotope has recently been proposed as a hypothetical brachytherapy source due to its high specific activity, appropriate half-life (272 days) and medium energy photons (114.17 keV on average). In this study, Task Group No. 43 dosimetric parameters were calculated and reported for a hypothetical 57Co source. Material and methods A hypothetical 57Co source was simulated in MCNPX, consisting of an active cylinder with 3.5 mm length and 0.6 mm radius encapsulated in a stainless steel capsule. Three photon energies were utilized (136 keV [10.68%], 122 keV [85.60%], 14 keV [9.16%]) for the 57Co source. Air kerma strength, dose rate constant, radial dose function, anisotropy function, and isodose curves for the source were calculated and compared to the corresponding data for a 192Ir source. Results The results are presented as tables and figures. Air kerma strength per 1 mCi activity for the 57Co source was 0.46 cGy h–1 cm2 mCi–1. The dose rate constant for the 57Co source was determined to be 1.215 cGy h–1 U–1. The radial dose function for the 57Co source has an increasing trend due to multiple scattering of low energy photons. The anisotropy function for the 57Co source at various distances from the source is more isotropic than that of the 192Ir source. Conclusions The 57Co source has advantages over 192Ir due to its lower energy photons, longer half-life, higher dose rate constant and more isotropic anisotropy function. However, the 192Ir source has a higher initial air kerma strength and a more uniform radial dose function. These properties make 57Co a suitable source for use in brachytherapy applications. PMID:27688731

  17. Atmospheric deposition having been one of the major source of Pb in Jiaozhou Bay

    NASA Astrophysics Data System (ADS)

    Yang, Dongfang; Miao, Zhenqing; Zhang, Xiaolong; Wang, Qi; Li, Haixia

    2018-03-01

    Many marine bays have been polluted by Pb due to the rapid development of industry, and identifying the major source of Pb is essential to pollution control. This paper analyzed the distribution and pollution source of Pb in Jiaozhou Bay in 1988. Results showed that Pb contents in surface waters in Jiaozhou Bay in April, July and October 1988 were 5.52-24.61 μg L‑1, 7.66-38.62 μg L‑1 and 6.89-19.30 μg L‑1, respectively. The major Pb sources in this bay were atmospheric deposition, and marine current, whose source strengths were 19.30-24.61μg L‑1 and 38.62 μg L‑1, respectively. Atmospheric deposition had been one of the major Pb sources in Jiaozhou Bay, and the source strengths were stable and strong. The pollution level of Pb in this bay in 1988 was moderate to heavy, and the source control measurements were necessary.

  18. REVIEWS OF TOPICAL PROBLEMS: Gravitational wave astronomy: in anticipation of first sources to be detected

    NASA Astrophysics Data System (ADS)

    Grishchuk, Leonid P.; Lipunov, V. M.; Postnov, Konstantin A.; Prokhorov, Mikhail E.; Sathyaprakash, B. S.

    2001-01-01

    The first generation of long-baseline laser interferometric detectors of gravitational waves will start collecting data in 2001 - 2003. We carefully analyse their planned performance and compare it with the expected strengths of astrophysical sources. The scientific importance of the anticipated discovery of various gravitational wave signals and the reliability of theoretical predictions are taken into account in our analysis. We try to be conservative in evaluating both the theoretical uncertainties in the parameters of the source and the prospects of its detection. Upon considering many possible sources, we place our emphasis on (i) inspiraling binaries consisting of stellar mass black holes and (ii) relic gravitational waves. We conclude that inspiraling binary black holes are likely to be detected by the early ground-based interferometers first. We estimate that the first interferometers will see 2 - 3 events per year from black hole binaries with component masses of 10 - 15 M⊙, with a signal-to-noise ratio of about 3, in a network of detectors consisting of GEO, VIRGO and two LIGOs. It appears that other possible sources, including coalescing neutron stars, are unlikely to be detected by the early instruments. We also argue that relic gravitational waves may be discovered by space-based interferometers in the frequency interval 2 × 10⁻³ - 10⁻² Hz, at a signal-to-noise ratio level of about 3.

  19. Mapping strengths into virtues: the relation of the 24 VIA-strengths to six ubiquitous virtues

    PubMed Central

    Ruch, Willibald; Proyer, René T.

    2015-01-01

    The Values-in-Action-classification distinguishes six core virtues and 24 strengths. As the assignment of the strengths to the virtues was done on theoretical grounds it still needs empirical verification. As an alternative to factor analytic investigations the present study utilizes expert judgments. In a pilot study the conceptual overlap among five sources of knowledge (strength’s name including synonyms, short definitions, brief descriptions, longer theoretical elaborations, and item content) about a particular strength was examined. The results show that the five sources converged quite well, with the short definitions and the items being slightly different from the other. All strengths exceeded a cut-off value but the convergence was much better for some strengths (e.g., zest) than for others (e.g., perspective). In the main study 70 experts (from psychology, philosophy, theology, etc.) and 41 laypersons rated how prototypical the strengths are for each of the six virtues. The results showed that 10 were very good markers for their virtues, nine were good markers, four were acceptable markers, and only one strength failed to reach the cut-off score for its assigned virtue. However, strengths were often markers for two or even three virtues, and occasionally they marked the other virtue more strongly than the one they were assigned to. The virtue prototypicality ratings were slightly positively correlated with higher coefficients being found for justice and humanity. A factor analysis of the 24 strengths across the ratings yielded the six factors with an only slightly different composition of strengths and double loadings. It is proposed to adjust either the classification (by reassigning strengths and by allowing strengths to be subsumed under more than one virtue) or to change the definition of certain strengths so that they only exemplify one virtue. The results are discussed in the context of factor analytic attempts to verify the structural model. PMID:25954222

  20. Comparison of Three Information Sources for Smoking Information in Electronic Health Records

    PubMed Central

    Wang, Liwei; Ruan, Xiaoyang; Yang, Ping; Liu, Hongfang

    2016-01-01

    OBJECTIVE The primary aim was to compare independent and joint performance of retrieving smoking status through different sources, including narrative text processed by natural language processing (NLP), patient-provided information (PPI), and diagnosis codes (ie, International Classification of Diseases, Ninth Revision [ICD-9]). We also compared the performance of retrieving smoking strength information (ie, heavy/light smoker) from narrative text and PPI. MATERIALS AND METHODS Our study leveraged an existing lung cancer cohort for smoking status, amount, and strength information, which was manually chart-reviewed. On the NLP side, smoking-related electronic medical record (EMR) data were retrieved first. A pattern-based smoking information extraction module was then implemented to extract smoking-related information. After that, heuristic rules were used to obtain smoking status-related information. Smoking information was also obtained from structured data sources based on diagnosis codes and PPI. Sensitivity, specificity, and accuracy were measured using patients with coverage (ie, the proportion of patients whose smoking status/strength can be effectively determined). RESULTS NLP alone has the best overall performance for smoking status extraction (patient coverage: 0.88; sensitivity: 0.97; specificity: 0.70; accuracy: 0.88); combining PPI with NLP further improved patient coverage to 0.96. ICD-9 does not provide additional improvement to NLP and its combination with PPI. For smoking strength, combining NLP with PPI has slight improvement over NLP alone. CONCLUSION These findings suggest that narrative text could serve as a more reliable and comprehensive source for obtaining smoking-related information than structured data sources. PPI, the readily available structured data, could be used as a complementary source for more comprehensive patient coverage. PMID:27980387
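
    In the spirit of the pattern-based extraction module described, the sketch below classifies smoking status from note text with a few keyword patterns and a fixed precedence order. The patterns and categories are illustrative assumptions, not the study's actual rule set.

    import re

    NEVER = re.compile(r"\b(never smoker|never smoked|non-?smoker|denies smoking)\b", re.I)
    FORMER = re.compile(r"\b(former smoker|ex-?smoker|quit smoking|stopped smoking)\b", re.I)
    CURRENT = re.compile(r"\b(current smoker|smokes \d+|\d+\s*(pack|ppd)|smoking daily)\b", re.I)

    def smoking_status(note: str) -> str:
        """Return 'current', 'former', 'never', or 'unknown' for a note."""
        if CURRENT.search(note):
            return "current"
        if FORMER.search(note):
            return "former"
        if NEVER.search(note):
            return "never"
        return "unknown"

    print(smoking_status("Patient is an ex-smoker, quit smoking 10 years ago."))  # former
    print(smoking_status("Social history: 1 ppd x 20 years, smoking daily."))     # current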

  1. Catalog of Residential Depth-Damage Functions Used by the Army Corps of Engineers in Flood Damage Estimation

    DTIC Science & Technology

    1992-05-01

    regression analysis. The strength of any one variable can be estimated along with the strength of the entire model in explaining the variance of percent... applicable a set of damage functions is to a particular situation. Sometimes depth-damage functions are embedded in computer programs which calculate... functions. Chapter Six concludes with recommended policies on the development and application of depth-damage functions.

  2. Structural design parameters of current WSDOT mixtures.

    DOT National Transportation Integrated Search

    2013-06-01

    The AASHTO LRFD, as well as other design manuals, has specifications that estimate the structural performance of a concrete mixture with regard to compressive strength, tensile strength, and deformation-related properties such as the modulus of elast...

  3. Ultimate Longitudinal Strength of Composite Ship Hulls

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangming; Huang, Lingkai; Zhu, Libao; Tang, Yuhang; Wang, Anwen

    2017-01-01

    A simple analytical model to estimate the longitudinal strength of composite ship hulls under buckling, material failure and ultimate collapse is presented in this paper. Ship hulls are regarded as assemblies of stiffened panels, which are idealized as groups of plate-stiffener combinations. The ultimate strain of a plate-stiffener combination under buckling or material failure is predicted with composite beam-column theory. The effects of the initial imperfection of the ship hull and the eccentricity of the load are included. The corresponding longitudinal strengths of the ship hull are derived in a straightforward manner. A longitudinally framed ship hull made of symmetrically stacked unidirectional plies under sagging is analyzed. The results indicate that the present analytical results agree well with finite element calculations. The initial deflection of the ship hull and the eccentricity of the load can dramatically reduce the hull's bending capacity. The proposed formulations provide a simple but useful tool for longitudinal strength estimation in practical design.

  4. Shared sensory estimates for human motion perception and pursuit eye movements.

    PubMed

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors 0270-6474/15/358515-16$15.00/0.

  5. Shared Sensory Estimates for Human Motion Perception and Pursuit Eye Movements

    PubMed Central

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio

    2015-01-01

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. PMID:26041919

  6. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
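
    Once a virtual-source (Green's function) trace has been obtained from the mode-separated interferometric step, the source signature follows from a deconvolution. The sketch below shows only that final step, using water-level deconvolution on a synthetic convolutional trace; the wavelet, reflectivity and stabilization constant are assumptions for illustration, not the paper's workflow.

    import numpy as np

    def waterlevel_deconv(d, g, eps=1e-3):
        """Estimate w from d = w * g using W = D G* / (|G|^2 + eps * max|G|^2)."""
        D, G = np.fft.rfft(d), np.fft.rfft(g)
        denom = np.abs(G) ** 2
        W = D * np.conj(G) / (denom + eps * denom.max())
        return np.fft.irfft(W, n=len(d))

    dt, n = 0.001, 1024
    t = np.arange(n) * dt
    f0 = 30.0                                                # 30 Hz Ricker wavelet
    tau = t - 0.05
    w = (1 - 2 * (np.pi * f0 * tau) ** 2) * np.exp(-(np.pi * f0 * tau) ** 2)
    g = np.zeros(n); g[[120, 260, 300]] = [1.0, -0.6, 0.4]   # spiky Green's function
    d = np.convolve(w, g)[:n]                                # recorded trace

    w_est = waterlevel_deconv(d, g)
    print("max wavelet error:", np.max(np.abs(w_est - w)))   # small for this clean example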

  7. Probabilistic simulation of uncertainties in composite uniaxial strengths

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Stock, T. A.

    1990-01-01

    Probabilistic composite micromechanics methods are developed that simulate uncertainties in unidirectional fiber composite strengths. These methods take the form of computational procedures combining composite mechanics with Monte Carlo simulation. The variables for which uncertainties are accounted for include the constituent strengths and their respective scatter. A graphite/epoxy unidirectional composite (ply) is studied to illustrate the procedure and its effectiveness in formally estimating the probable scatter in the composite uniaxial strengths. The results show that the ply longitudinal tensile and compressive, transverse compressive, and intralaminar shear strengths are not sensitive to single-fiber anomalies (breaks, interfacial debonds, matrix microcracks); however, the ply transverse tensile strength is.
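
    A stripped-down Monte Carlo sketch in the same spirit is given below: constituent strengths are sampled from assumed normal distributions and pushed through a simplified rule-of-mixtures relation to obtain the scatter in the ply longitudinal tensile strength. The distributions, fiber volume fraction and micromechanics relation are illustrative stand-ins, not the report's model.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 20000
    Vf = 0.60                                            # fiber volume fraction
    Sf = rng.normal(3500.0, 350.0, n)                    # fiber strength, MPa
    Sm = rng.normal(80.0, 12.0, n)                       # matrix strength, MPa

    S_L = Vf * Sf + (1.0 - Vf) * Sm                      # ply longitudinal tensile strength

    print(f"mean = {S_L.mean():.0f} MPa, CoV = {100 * S_L.std() / S_L.mean():.1f} %")
    print("2.5/97.5 percentiles:", np.percentile(S_L, [2.5, 97.5]))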

  8. Empirical and Theoretical Aspects of Generation and Transfer of Information in a Neuromagnetic Source Network

    PubMed Central

    Vakorin, Vasily A.; Mišić, Bratislav; Krakovska, Olga; McIntosh, Anthony Randal

    2011-01-01

    Variability in source dynamics across the sources in an activated network may be indicative of how the information is processed within a network. Information-theoretic tools allow one not only to characterize local brain dynamics but also to describe interactions between distributed brain activity. This study follows such a framework and explores the relations between signal variability and asymmetry in mutual interdependencies in a data-driven pipeline of non-linear analysis of neuromagnetic sources reconstructed from human magnetoencephalographic (MEG) data collected as a reaction to a face recognition task. Asymmetry in non-linear interdependencies in the network was analyzed using transfer entropy, which quantifies predictive information transfer between the sources. Variability of the source activity was estimated using multi-scale entropy, quantifying the rate of which information is generated. The empirical results are supported by an analysis of synthetic data based on the dynamics of coupled systems with time delay in coupling. We found that the amount of information transferred from one source to another was correlated with the difference in variability between the dynamics of these two sources, with the directionality of net information transfer depending on the time scale at which the sample entropy was computed. The results based on synthetic data suggest that both time delay and strength of coupling can contribute to the relations between variability of brain signals and information transfer between them. Our findings support the previous attempts to characterize functional organization of the activated brain, based on a combination of non-linear dynamics and temporal features of brain connectivity, such as time delay. PMID:22131968
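
    As a concrete (if much simplified) illustration of the directionality measure, the sketch below estimates transfer entropy between two signals with a plain histogram estimator and one-sample histories; real MEG analyses use embeddings, surrogate testing and better estimators. The bin count and the toy coupled system are arbitrary choices.

    import numpy as np

    def transfer_entropy(x, y, bins=8):
        """Binned TE from x to y, TE = I(Y_{t+1}; X_t | Y_t), in nats."""
        xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
        yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
        y1, y0, x0 = yd[1:], yd[:-1], xd[:-1]
        joint = np.zeros((bins, bins, bins))
        np.add.at(joint, (y1, y0, x0), 1.0)
        joint /= joint.sum()
        p_y0x0 = joint.sum(axis=0)
        p_y1y0 = joint.sum(axis=2)
        p_y0 = joint.sum(axis=(0, 2))
        with np.errstate(divide="ignore", invalid="ignore"):
            num = joint * p_y0[None, :, None]
            den = p_y1y0[:, :, None] * p_y0x0[None, :, :]
            return np.nansum(joint * np.log(num / den))

    rng = np.random.default_rng(3)
    n = 20000
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):                      # y is driven by x with a one-step lag
        y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.normal()
    print("TE x->y:", transfer_entropy(x, y), "  TE y->x:", transfer_entropy(y, x))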

  9. Predicting bending strength of fire-retardant-treated plywood from screw-withdrawal tests

    Treesearch

    J. E. Winandy; P. K. Lebow; W. Nelson

    This report describes the development of a test method and predictive model to estimate the residual bending strength of fire-retardant-treated plywood roof sheathing from measurement of screw-withdrawal force. The preferred test methodology is described in detail. Models were developed to predict loss in mean and lower prediction bounds for plywood bending strength as...

  10. Applicability of geomechanical classifications for estimation of strength properties in Brazilian rock masses.

    PubMed

    Santos, Tatiana B; Lana, Milene S; Santos, Allan E M; Silveira, Larissa R C

    2017-01-01

    Many authors have proposed correlation equations between geomechanical classifications and strength parameters. However, these correlation equations have been based on rock masses whose characteristics differ from those of Brazilian rock masses. This paper studies the applicability of geomechanical classifications for obtaining the strength parameters of three Brazilian rock masses. Four classification systems were used: the Rock Mass Rating (RMR), the Rock Mass Quality (Q), the Geological Strength Index (GSI) and the Rock Mass Index (RMi). A strong rock mass and two soft rock masses with different degrees of weathering, located in the cities of Ouro Preto and Mariana, Brazil, were selected for the study. Correlation equations were used to estimate the strength properties of these rock masses. However, such correlations do not always provide results compatible with the observed rock mass behavior. To calibrate the strength values obtained from the classification systems, stability analyses of failures in these rock masses were carried out. After calibration of these parameters, the applicability of the various correlation equations found in the literature is discussed. According to the results presented in this paper, some of these equations are not suitable for the studied rock masses.

  11. Artificial Neural Network-Based Early-Age Concrete Strength Monitoring Using Dynamic Response Signals.

    PubMed

    Kim, Junkyeong; Lee, Chaggil; Park, Seunghee

    2017-06-07

    Concrete is one of the most common materials used to construct a variety of civil infrastructures. However, since concrete might be susceptible to brittle fracture, it is essential to confirm the strength of concrete at the early-age stage of the curing process to prevent unexpected collapse. To address this issue, this study proposes a novel method to estimate the early-age strength of concrete, by integrating an artificial neural network algorithm with a dynamic response measurement of the concrete material. The dynamic response signals of the concrete, including both electromechanical impedances and guided ultrasonic waves, are obtained from an embedded piezoelectric sensor module. The cross-correlation coefficient of the electromechanical impedance signals and the amplitude of the guided ultrasonic wave signals are selected to quantify the variation in dynamic responses according to the strength of the concrete. Furthermore, an artificial neural network algorithm is used to verify a relationship between the variation in dynamic response signals and concrete strength. The results of an experimental study confirm that the proposed approach can be effectively applied to estimate the strength of concrete material from the early-age stage of the curing process.
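
    A minimal sketch of the regression step is shown below: a small feed-forward network mapping the two dynamic-response features named in the abstract (an impedance cross-correlation coefficient and a guided-wave amplitude), plus curing age, to compressive strength. All numbers, the network size and the extra age feature are invented placeholders, not the study's trained model.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Columns: CC of EMI signatures, guided-wave amplitude (normalized), age (hours).
    X = np.array([[0.98, 0.15, 6], [0.95, 0.25, 12], [0.90, 0.38, 24],
                  [0.84, 0.52, 48], [0.78, 0.63, 72], [0.72, 0.74, 120],
                  [0.68, 0.82, 168], [0.65, 0.88, 240]])
    y = np.array([2.0, 5.5, 11.0, 18.5, 24.0, 30.0, 34.0, 37.5])   # strength, MPa

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    print(model.predict([[0.75, 0.70, 96]]))    # estimated early-age strength, MPa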

  12. Artificial Neural Network-Based Early-Age Concrete Strength Monitoring Using Dynamic Response Signals

    PubMed Central

    Kim, Junkyeong; Lee, Chaggil; Park, Seunghee

    2017-01-01

    Concrete is one of the most common materials used to construct a variety of civil infrastructures. However, since concrete might be susceptible to brittle fracture, it is essential to confirm the strength of concrete at the early-age stage of the curing process to prevent unexpected collapse. To address this issue, this study proposes a novel method to estimate the early-age strength of concrete, by integrating an artificial neural network algorithm with a dynamic response measurement of the concrete material. The dynamic response signals of the concrete, including both electromechanical impedances and guided ultrasonic waves, are obtained from an embedded piezoelectric sensor module. The cross-correlation coefficient of the electromechanical impedance signals and the amplitude of the guided ultrasonic wave signals are selected to quantify the variation in dynamic responses according to the strength of the concrete. Furthermore, an artificial neural network algorithm is used to verify a relationship between the variation in dynamic response signals and concrete strength. The results of an experimental study confirm that the proposed approach can be effectively applied to estimate the strength of concrete material from the early-age stage of the curing process. PMID:28590456

  13. Broad Ne VIII lambda 774 emission from quasars in the HST-FOS snapshot survey (ABSNAP)

    NASA Technical Reports Server (NTRS)

    Hamann, Fred; Zuo, Lin; Tytler, David

    1995-01-01

    We discuss the strength and frequency of broad Ne VIII λ774 emission from quasars measured in the Hubble Space Telescope Faint Object Spectrograph (HST-FOS) snapshot survey (Absnap). Five sources in the survey have suitable redshifts (0.86 ≤ z_em ≤ 1.31), signal-to-noise ratios and no Lyman limit absorptions. Three of the five sources have a strong broad emission line near 774 Å (rest), and the remaining two sources have a less securely measured line near this wavelength. We identify these lines with Ne VIII λ774 based on the measured wavelengths and theoretical estimates of various line fluxes (Hamann et al. 1995a). Secure Ne VIII detections occur in both radio-loud and radio-quiet sources. We tentatively conclude that broad Ne VIII λ774 emission is common in quasars, with typical strengths between approximately 25% and approximately 200% of O VI λ1034. These Ne VIII λ774 measurements imply that the broad emission line regions have a much hotter and more highly ionized component than previously recognized. They also suggest that quasar continua have substantial ionizing flux out to energies greater than 207 eV (>15.2 Ryd, λ < 60 Å). Photoionization calculations using standard incident spectra indicate that the Ne VIII emission requires ionization parameters U ≥ 5, total column densities N_H ≥ 10^22 cm^-2 and covering factors ≥ 25%. The temperatures could be as high as approximately 10^5 K. If the gas is instead collisionally ionized, strong Ne VIII would imply equilibrium temperatures in the range 4 × 10^5 K ≲ T_e ≲ 10^6 K. In either case, the highly ionized Ne VIII emission regions would appear as X-ray 'warm absorbers' if they lie along our line of sight to the X-ray continuum source.

  14. Investigating Primary Source Literacy

    ERIC Educational Resources Information Center

    Archer, Joanne; Hanlon, Ann M.; Levine, Jennie A.

    2009-01-01

    Primary source research requires students to acquire specialized research skills. This paper presents results from a user study testing the effectiveness of a Web guide designed to convey the concepts behind "primary source literacy". The study also evaluated students' strengths and weaknesses when conducting primary source research. (Contains 3…

  15. Initial Assessment of Acoustic Source Visibility with a 24-Element Microphone Array in the Arnold Engineering Development Center 80- by 120-Foot Wind Tunnel at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Horne, William C.

    2011-01-01

    Measurements of background noise were recently obtained with a 24-element phased microphone array in the test section of the Arnold Engineering Development Center 80- by 120-Foot Wind Tunnel at speeds of 50 to 100 knots (25.7 to 51.4 m/s). The array was mounted in an aerodynamic fairing positioned with the array center 1.2 m from the floor and 16 m from the tunnel centerline. The array plate was mounted flush with the fairing surface as well as recessed 0.5 in. (1.27 cm) behind a porous Kevlar screen. Wind-off speaker measurements were also acquired every 15° on a 10 m semicircular arc to assess the directional resolution of the array with various processing algorithms, and to estimate minimum detectable source strengths for future wind tunnel aeroacoustic studies. The dominant background noise of the facility comes from the six drive fans downstream of the test section and the first set of turning vanes. Directional array response and processing methods such as background-noise cross-spectral-matrix subtraction suggest that sources 10-15 dB weaker than the background can be detected.
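
    The background-noise cross-spectral-matrix subtraction mentioned above can be sketched as follows: form the CSM for the test run and for a background-only run, subtract, and scan a conventional beamformer over a grid. Array geometry, source level and background level below are invented, chosen only so that the simulated source sits roughly 14 dB under the background; in practice the background run is a separate measurement, whereas here the same realization is reused for simplicity.

    import numpy as np

    def csm(P):
        """Cross-spectral matrix from P: (n_snapshots, n_mics) complex Fourier data."""
        return P.conj().T @ P / P.shape[0]

    def beamform(C, mics, grid_pts, k):
        """Conventional frequency-domain beamformer power over a line of grid points."""
        out = np.empty(len(grid_pts))
        for i, g in enumerate(grid_pts):
            r = np.linalg.norm(mics - g, axis=1)
            v = np.exp(-1j * k * r) / r
            v /= np.linalg.norm(v)
            out[i] = np.real(v.conj() @ C @ v)
        return out

    rng = np.random.default_rng(0)
    n_mic, n_snap = 24, 500
    k = 2 * np.pi * 2000 / 343.0                                   # 2 kHz
    mics = np.c_[rng.uniform(-0.5, 0.5, (n_mic, 2)), np.zeros(n_mic)]
    src = np.array([0.0, 0.0, 10.0])
    r_src = np.linalg.norm(mics - src, axis=1)

    sig = rng.normal(size=(n_snap, 1)) + 1j * rng.normal(size=(n_snap, 1))
    a = 10.0 * np.exp(-1j * k * r_src) / r_src                     # weak source at 10 m
    P_src = sig * a                                                # source alone at the mics
    P_bg = 5.0 * (rng.normal(size=(n_snap, n_mic)) + 1j * rng.normal(size=(n_snap, n_mic)))

    C_test = csm(P_src + P_bg)           # test run: source buried well under the background
    C_bg = csm(P_bg)                     # background-only run
    xs = np.linspace(-2.0, 2.0, 41)
    grid_pts = [np.array([x, 0.0, 10.0]) for x in xs]
    p_map = beamform(C_test - C_bg, mics, grid_pts, k)
    print("peak at x =", xs[np.argmax(p_map)], "m (true source near x = 0)")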

  16. The nature of the companion star in Circinus X-1

    NASA Astrophysics Data System (ADS)

    Johnston, Helen M.; Soria, Roberto; Gibson, Joel

    2016-02-01

    We present optical spectra and images of the X-ray binary Circinus X-1. The optical light curve of Cir X-1 is strongly variable, changing in brightness by 1.2 mag in the space of four days. The shape of the light curve is consistent with that seen in the 1980s, when the X-ray and radio counterparts of the source were at least ten times as bright as they are currently. We detect strong, variable H α emission lines, consisting of multiple components which vary with orbital phase. We estimate the extinction to the source from the strength of the diffuse interstellar bands and the Balmer decrement; the two methods give AV = 7.6 ± 0.6 mag and AV > 9.1 mag, respectively. The optical light curve can be modelled as arising from irradiation of the companion star by the central X-ray source, where a low-temperature star fills its Roche lobe in an orbit of moderate eccentricity (e ˜ 0.4). We suggest that the companion star is overluminous and underdense, due to the impact of the supernova which occurred less than 5000 yr ago.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, A.; Harnisch, J.; Borchers, R.

    Previous investigations reported on the volcanic production of halocarbons, including chlorofluorocarbons (CFCs). It has been suggested that this natural source could account for a significant atmospheric CFC background concentration, but no quantitative assessment of its source strength has yet been presented, nor has the synthetic mechanism for their volcanic formation been clarified. Fumarole and lava gas samples from four volcanoes (Kuju, Satsuma Iwojima, Mt. Etna, Vulcano) have been studied using gas chromatography/ion trap-mass spectrometry. More than 300 organic substances were detected, among which 5 fluorinated, 100 chlorinated, 25 brominated, and 4 iodinated compounds were identified. The most abundant organohalogen species were chlorinated methanes, unsaturated C2-chlorohydrocarbons, and chlorobenzene, suggesting a synthetic course that includes the thermolytic formation of acetylene from hydrothermal methane, condensation reactions, and synchronous catalytic halogenation in the presence of highly activated surfaces of cooling magma or juvenile ash. The only CFC compound found was CFCl3 (CFC-11), which was detected in some samples at concentrations of up to 1 ppbv. A conservative estimate of the upper limit of global CFC emissions by volcanoes clearly shows that this source is negligible compared to the atmospheric burden from anthropogenic activities.

  18. Contaminant levels, source strengths, and ventilation rates in California retail stores.

    PubMed

    Chan, W R; Cohn, S; Sidheswaran, M; Sullivan, D P; Fisk, W J

    2015-08-01

    This field study measured ventilation rates and indoor air quality in 21 visits to retail stores in California. Three types of stores were sampled: grocery, furniture/hardware, and apparel. Ventilation rates measured using a tracer gas decay method exceeded the minimum requirement of California's Title 24 Standard in all but one store. Concentrations of volatile organic compounds (VOCs), ozone, and carbon dioxide measured indoors and outdoors were analyzed. Even though ventilation was adequate according to the standard, concentrations of formaldehyde and acetaldehyde exceeded the most stringent chronic health guidelines in many of the sampled stores. The whole-building emission rates of VOCs were estimated from the measured ventilation rates and the concentrations measured indoors and outdoors. Estimated formaldehyde emission rates suggest that retail stores would need to ventilate at levels far exceeding the current Title 24 requirement to lower indoor concentrations below California's stringent formaldehyde reference level. Given the high costs of providing ventilation, effective source control is an attractive alternative. Field measurements suggest that California retail stores were well ventilated relative to the minimum ventilation rate requirement specified in the Building Energy Efficiency Standards Title 24. Concentrations of formaldehyde found in retail stores were low relative to levels found in homes but exceeded the most stringent chronic health guideline. Looking ahead, California is mandating zero energy commercial buildings by 2030. To reduce the energy use from building ventilation while maintaining or even lowering formaldehyde in retail stores, effective formaldehyde source control measures are vitally important. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
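
    The whole-building emission-rate estimate described above amounts to a single-zone, steady-state mass balance; the sketch below illustrates it with made-up numbers (not values from the study).

      # Hedged single-zone, steady-state mass-balance sketch of a whole-building
      # VOC emission rate from measured ventilation and indoor/outdoor concentrations.
      def whole_building_emission_rate(air_change_rate_per_h, volume_m3,
                                       c_indoor_ug_m3, c_outdoor_ug_m3):
          """Emission rate (ug/h) assuming steady state and a well-mixed zone:
          E = Q * (C_in - C_out), with Q = ACH * V."""
          q_m3_per_h = air_change_rate_per_h * volume_m3
          return q_m3_per_h * (c_indoor_ug_m3 - c_outdoor_ug_m3)

      # Illustrative store: 1 air change per hour, 5000 m3, formaldehyde 20 ug/m3
      # indoors vs 3 ug/m3 outdoors -> 85,000 ug/h whole-building source strength.
      print(whole_building_emission_rate(1.0, 5000.0, 20.0, 3.0))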

  19. A comparison of top-down and bottom-up carbon dioxide fluxes in the UK using a multi-platform measurement network.

    NASA Astrophysics Data System (ADS)

    White, Emily; Rigby, Matt; O'Doherty, Simon; Stavert, Ann; Lunt, Mark; Nemitz, Eiko; Helfter, Carole; Allen, Grant; Pitt, Joe; Bauguitte, Stéphane; Levy, Pete; van Oijen, Marcel; Williams, Mat; Smallman, Luke; Palmer, Paul

    2016-04-01

    Having a comprehensive understanding, on a countrywide scale, of both biogenic and anthropogenic CO2 emissions is essential for knowing how best to reduce anthropogenic emissions and for understanding how the terrestrial biosphere is responding to global fossil fuel emissions. Whilst anthropogenic CO2 flux estimates are fairly well constrained, fluxes from biogenic sources are not. This work will help to verify existing anthropogenic emissions inventories and give a better understanding of biosphere-atmosphere CO2 exchange. Using an innovative top-down inversion scheme, a hierarchical Bayesian Markov chain Monte Carlo approach with reversible-jump ("trans-dimensional") basis function selection, we aim to find emissions estimates for biogenic and anthropogenic sources simultaneously. Our approach allows flux uncertainties to be derived more comprehensively than previous methods, and allows the resolved spatial scales in the solution to be determined using the data. We use atmospheric CO2 mole fraction data from the UK Deriving Emissions related to Climate Change (DECC) and Greenhouse gAs UK and Global Emissions (GAUGE) projects. The network comprises six tall tower sites, flight campaigns and a ferry transect along the east coast, and enables us to derive high-resolution monthly flux estimates across the UK and Ireland for the period 2013-2015. We have derived UK total fluxes of 675 ± 78 Tg/yr during January 2014 (seasonal maximum) and 23 ± 96 Tg/yr during May 2014 (seasonal minimum). Our disaggregated anthropogenic and biogenic flux estimates are compared to a new high-resolution, time-resolved anthropogenic inventory that will underpin future UNFCCC reports by the UK, and to the DALEC carbon cycle model. This allows us to identify where significant differences exist between these "bottom-up" and "top-down" flux estimates and suggest reasons for discrepancies. We will highlight the strengths and limitations of the UK's CO2 emissions verification infrastructure at present and outline improvements that could be made in the future.
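
    As a much-simplified illustration of the top-down idea described above (not the hierarchical, reversible-jump MCMC scheme itself), the following sketch performs an analytic linear-Gaussian Bayesian flux update; the sensitivity matrix, prior fluxes and uncertainties are invented placeholders.

      # Hedged, much-simplified sketch of a top-down flux inversion: an analytic
      # linear-Gaussian Bayesian update rather than the hierarchical MCMC scheme
      # described in the abstract.
      import numpy as np

      def gaussian_inversion(H, y, x_prior, P_prior, R):
          """Posterior mean/covariance for y = H x + noise with a Gaussian prior."""
          S = H @ P_prior @ H.T + R                       # innovation covariance
          K = P_prior @ H.T @ np.linalg.inv(S)            # Kalman-type gain
          x_post = x_prior + K @ (y - H @ x_prior)
          P_post = P_prior - K @ H @ P_prior
          return x_post, P_post

      H = np.array([[0.8, 0.1], [0.3, 0.6], [0.2, 0.9]])  # ppm per (Tg/yr), 2 regions
      x_prior = np.array([400.0, 300.0])                  # prior fluxes, Tg/yr
      P_prior = np.diag([100.0**2, 150.0**2])             # loose prior uncertainty
      y = np.array([390.0, 310.0, 330.0])                 # pseudo mole-fraction data
      R = np.diag([5.0**2] * 3)                           # observation error covariance
      x_post, P_post = gaussian_inversion(H, y, x_prior, P_prior, R)
      print(x_post, np.sqrt(np.diag(P_post)))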

  20. Shear strength of clay and silt embankments.

    DOT National Transportation Integrated Search

    2009-09-01

    Highway embankment is one of the most common large-scale geotechnical facilities constructed in Ohio. In the past, the design of these embankments was largely based on soil shear strength properties that had been estimated from previously published e...

  1. Characterization of Pu-238 heat source granule containment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson Ii, P D; Thronas, D L; Romero, J P

    2008-01-01

    The Milliwatt Radioisotopic Thermoelectric Generator (RTG) provides power for permissive-action links. These nuclear batteries convert thermal energy to electrical energy using a doped silicon-germanium thermopile. The thermal energy is provided by a heat source made of {sup 238}Pu, in the form of {sup 238}PuO{sub 2} granules. The granules are contained in 3 layers of encapsulation. A thin T-111 liner surrounds the {sup 238}PuO{sub 2} granules and protects the second layer (strength member) from exposure to the fuel granules. The T-111 strength member contains the fuel under impact conditions. An outer clad of Hastelloy-C protects the T-111 from oxygen embrittlement. The T-111 strength member is considered the critical component in this {sup 238}PuO{sub 2} containment system. Any compromise of the strength member must be characterized. Consequently, the T-111 strength member is characterized upon its decommissioning through Scanning Electron Microscopy (SEM) and Metallography. SEM is used in Secondary Electron mode to reveal possible grain boundary deformation and/or cracking in the region of the strength member weld. Deformation and cracking uncovered by SEM are further characterized by Metallography. Metallography sections are mounted and polished, observed using optical microscopy, then documented in the form of photomicrographs. SEM may further be used to examine polished Metallography mounts to characterize elements using the SEM mode of Energy Dispersive X-ray Spectroscopy (EDS). This paper describes the characterization of the metallurgical condition of decommissioned RTG heat sources.

  2. Lithospheric strength of Ganymede: Clues to early thermal profiles from extensional tectonic features

    NASA Technical Reports Server (NTRS)

    Golombek, M. P.; Banerdt, W. B.

    1985-01-01

    While it is generally agreed that the strength of a planet's lithosphere is controlled by a combination of brittle sliding and ductile flow laws, predicting the geometry and initial characteristics of faults due to failure from stresses imposed on the lithospheric strength envelope has not been thoroughly explored. Researchers used lithospheric strength envelopes to analyze the extensional features found on Ganymede. This application provides a quantitative means of estimating early thermal profiles on Ganymede, thereby constraining its early thermal evolution.

  3. The magnetic field in the disk of our Galaxy

    NASA Astrophysics Data System (ADS)

    Han, J. L.; Qiao, G. J.

    1994-08-01

    The magnetic field in the disk of our Galaxy is investigated by using the Rotation Measures (RMs) of pulsars and Extragalactic Radio Sources (ERSes). Through analyses of the RMs of carefully selected pulsar samples, it is found that the Galaxy has a global field of BiSymmetric Spiral (BSS) configuration, rather than a concentric ring or an AxiSymmetric Spiral (ASS) configuration. The Galactic magnetic field of BSS structure is supposed to be of primordial origin. The pitch angle of the BSS structure is -8.2° ± 0.5°. The field geometry shows that the field goes along the Carina-Sagittarius arm, which is delineated by Giant Molecular Clouds (GMCs). The amplitude of the BSS field is 1.8 ± 0.3 μG. The first field strength maximum is at r_0 = 11.9 ± 0.15 kpc in the direction of l = 180°. The field is strong in the interarm regions and it reverses in the arm regions. In the vicinity of the Sun, it has a strength of ~1.4 μG and reverses at 0.2-0.3 kpc in the direction of l = 0°. Because of the unknown electron distribution of the Galaxy and other difficulties, it is impossible to derive the galactic field from the RMs of ERSes very quantitatively. Nevertheless, the RMs of ERSes located in the region of the two galactic poles are used to estimate the vertical component of the local galactic field, which is found to have a strength of 0.2-0.3 μG and is directed from the south galactic pole to the north galactic pole. The scale height of the magnetic disk of the Galaxy is estimated from the RMs of all-sky distributed ERSes to be about 1.2 ± 0.4 kpc. The regular magnetic field of our Galaxy, which is probably similar to that of M81, extends far from the optical disk.
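
    The pulsar-based field estimates quoted above rest on the standard relation between rotation measure and dispersion measure; the sketch below applies it with illustrative RM and DM values that are not taken from the paper.

      # Hedged sketch of the standard pulsar estimator for the mean line-of-sight
      # field, <B_par> [uG] = 1.232 * RM / DM, with RM in rad m^-2 and DM in pc cm^-3.
      def mean_parallel_field_uG(rm_rad_m2, dm_pc_cm3):
          """Electron-density-weighted mean field along the line of sight (microgauss)."""
          return 1.232 * rm_rad_m2 / dm_pc_cm3

      # A pulsar with RM = +60 rad m^-2 and DM = 50 pc cm^-3 implies ~1.5 uG
      # directed toward the observer, comparable to the local field quoted above.
      print(round(mean_parallel_field_uG(60.0, 50.0), 2))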

  4. Modelling urban δ13C variations in the Greater Toronto Area

    NASA Astrophysics Data System (ADS)

    Pugliese, S.; Vogel, F. R.; Murphy, J. G.; Worthy, D. E. J.; Zhang, J.; Zheng, Q.; Moran, M. D.

    2015-12-01

    Even in urbanized regions, carbon dioxide (CO2) emissions are derived from a variety of biogenic and anthropogenic sources and are influenced by atmospheric transport across borders. As policies are introduced to reduce the emission of CO2, there is a need for independent verification of emissions reporting. In this work, we aim to use carbon isotope (13CO2 and 12CO2) simulations in combination with atmospheric measurements to distinguish between CO2 sources in the Greater Toronto Area (GTA), Canada. This is being done by developing an urban δ13C framework based on existing CO2 emission data and forward modelling using a chemistry transport model, CHIMERE. The framework is designed to use region-specific δ13C signatures of the dominant CO2 sources together with a CO2 inventory at a fine spatial and temporal resolution; the product is compared against highly accurate 13CO2 and 12CO2 ambient data. The strength of this framework is its potential to estimate both locally produced and regionally transported CO2. Locally, anthropogenic CO2 in urban areas is often derived from natural gas combustion (for heating) and gasoline/diesel combustion (for transportation); the isotopic signatures of these processes are significantly different (approximately δ13C_VPDB = -40 ‰ and -26 ‰, respectively) and can be used to infer their relative contributions. Furthermore, the contribution of transported CO2 can also be estimated, as nearby regions often rely on other sources of heating (e.g. coal combustion), which has a very different signature (approximately δ13C_VPDB = -23 ‰). We present an analysis of the GTA in contrast to Paris, France, where atmospheric observations are also available and 13CO2 has been studied. Utilizing our δ13C framework and differences in sectoral isotopic signatures, we quantify the relative contribution of CO2 sources to the overall measured concentration and assess the ability of this framework as a tool for tracing the evolution of sector-specific emissions.
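
    As an illustration of the kind of two-end-member isotope mass balance such a δ13C framework enables, the sketch below partitions a CO2 enhancement between natural-gas and traffic sources; the source signatures are those quoted above, while the background and observed values are invented.

      # Hedged sketch of a two-end-member 13C mass balance.  The source signatures
      # (-40 permil for natural-gas combustion, -26 permil for traffic) come from
      # the abstract; the background and observed values are illustrative only.
      def source_signature(c_obs, d_obs, c_bg, d_bg):
          """delta13C of the locally added CO2 (Keeling-style mass balance)."""
          return (d_obs * c_obs - d_bg * c_bg) / (c_obs - c_bg)

      def gas_fraction(d_src, d_gas=-40.0, d_traffic=-26.0):
          """Fraction of the CO2 enhancement attributable to natural-gas combustion."""
          return (d_src - d_traffic) / (d_gas - d_traffic)

      d_src = source_signature(c_obs=430.0, d_obs=-9.9, c_bg=405.0, d_bg=-8.5)
      print(round(d_src, 1), round(gas_fraction(d_src), 2))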

  5. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
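
    As an illustration of the iterative hyperbolic least-squares (HLS) class of methods compared above, the following sketch solves a 2-D TDoA problem with Gauss-Newton updates; the sensor layout, source position and noise-free timing are placeholders, and this is not the authors' implementation.

      # Hedged sketch of an iterative hyperbolic least-squares (HLS) TDoA solver
      # using Gauss-Newton updates.
      import numpy as np

      def hls_tdoa(sensors, tdoa, c, x0, n_iter=50):
          """Estimate a 2-D source position from TDoAs referenced to sensor 0."""
          x = np.asarray(x0, dtype=float)
          for _ in range(n_iter):
              d = np.linalg.norm(sensors - x, axis=1)          # ranges to each sensor
              r = (d[1:] - d[0]) - c * tdoa                    # hyperbolic residuals
              J = ((x - sensors[1:]) / d[1:, None]
                   - (x - sensors[0]) / d[0])                  # Jacobian of residuals
              x -= np.linalg.lstsq(J, r, rcond=None)[0]        # Gauss-Newton step
          return x

      sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      true_src, c = np.array([3.0, 7.0]), 343.0                # placeholder source, speed
      d = np.linalg.norm(sensors - true_src, axis=1)
      tdoa = (d[1:] - d[0]) / c                                # noise-free TDoAs
      print(hls_tdoa(sensors, tdoa, c, x0=[5.0, 5.0]))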

  6. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  7. Utilising social media contents for flood inundation mapping

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Dransch, Doris; Fohringer, Joachim; Kreibich, Heidi

    2016-04-01

    Data about the hazard and its consequences are scarce and not readily available during and shortly after a disaster. An information source which should be explored in a more efficient way is eyewitness accounts via social media. This research presents a methodology that leverages social media content to support rapid inundation mapping, including inundation extent and water depth in the case of floods. It uses quantitative data that are estimated from photos extracted from social media posts and their integration with established data. Due to the rapid availability of these posts compared to traditional data sources such as remote sensing data, areas affected by a flood, for example, can be determined quickly. Key challenges are to filter the large number of posts to a manageable amount of potentially useful inundation-related information, and to interpret and integrate the posts into mapping procedures in a timely manner. We present a methodology and a tool ("PostDistiller") to filter geo-located posts from social media services which include links to photos, and to further explore this spatially distributed, contextualized in situ information for inundation mapping. The June 2013 flood in Dresden is used as an application case study in which we evaluate the utilization of this approach and compare the resulting spatial flood patterns and inundation depths to 'traditional' data sources and mapping approaches like water level observations and remote sensing flood masks. The outcomes of the application case are encouraging. Strengths of the proposed procedure are that information for the estimation of inundation depth is rapidly available, particularly in urban areas where it is of high interest and of great value because alternative information sources like remote sensing data analysis do not perform very well. The uncertainty of derived inundation depth data and the uncontrollable availability of the information sources are major threats to the utility of the approach.

  8. An InSAR survey of the central Andes: Constraints on magma chamber geometry and mass balance in a volcanic arc

    NASA Astrophysics Data System (ADS)

    Pritchard, M. E.; Simons, M.

    2002-12-01

    The central Andes (14-28°S) has a high density of volcanoes, but a sparse human population, such that the activity of most volcanoes is poorly constrained. We use InSAR to conduct the first systematic observations of deformation at nearly 900 volcanoes (about 50 of which are classified "potentially active") during the 1992-2002 time interval. We find volcanic deformation in four locations. Subsidence is seen at Robledo (or Cerro Blanco) caldera, Argentina. We observe inflation at the stratovolcano Uturuncu, Bolivia, near stratovolcano Hualca Hualca, Peru, and in a region not associated with any known edifice on the border between Chile and Argentina that we call "Lazufre" because it lies between volcanoes Lastarria and Cordon del Azufre. The deformation pattern can be well explained by a uniform point source of inflation or deflation, but we compare these model results with those from a tri-axial point-source ellipsoid to test the robustness of the estimated source depth and source strength (inferred here to be volume change). We further explore the sensitivity of these parameters to elastic half-space and layered-space models of crustal structure, and the influence of local topography. Because only one satellite look direction is available for most time periods, a variety of models are consistent with our observations. If we assume that inflation is due solely to magmatic intrusion, we can compare the rate of magma intrusion to volcanic extrusion during the decade for which data are available and the longer-term geologic rate. For the last decade, the ratio of volume intruded to extruded is between about 1-10, which agrees with previous geologic estimates in this and other volcanic arcs. The combined rate of intrusion and extrusion is within an order of magnitude of the inferred geologic rate.
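
    The "uniform point source" referred to above is conventionally the Mogi model; the following sketch evaluates its surface displacements for illustrative depth and volume-change values that are not results from this study.

      # Hedged sketch of the classic Mogi point-source model for surface
      # displacement above an inflating magma body in an elastic half-space.
      import numpy as np

      def mogi_displacement(r, depth, dvol, nu=0.25):
          """Radial and vertical surface displacement (m) at radial distance r (m)
          from a point source of volume change dvol (m^3) at the given depth (m)."""
          R3 = (r**2 + depth**2) ** 1.5
          ur = (1.0 - nu) * dvol * r / (np.pi * R3)
          uz = (1.0 - nu) * dvol * depth / (np.pi * R3)
          return ur, uz

      # 0.01 km^3 of inflation at 15 km depth gives ~1 cm of peak uplift,
      # decaying with radial distance from the source axis.
      r = np.linspace(0.0, 40e3, 5)
      ur, uz = mogi_displacement(r, depth=15e3, dvol=1e7)
      print(np.round(uz * 1000.0, 2))   # vertical displacement in mm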

  9. Estimation of a coronal mass ejection magnetic field strength using radio observations of gyrosynchrotron radiation

    NASA Astrophysics Data System (ADS)

    Carley, Eoin P.; Vilmer, Nicole; Simões, Paulo J. A.; Ó Fearraigh, Brían

    2017-12-01

    Coronal mass ejections (CMEs) are large eruptions of plasma and magnetic field from the low solar corona into interplanetary space. These eruptions are often associated with the acceleration of energetic electrons which produce various sources of high intensity plasma emission. In relatively rare cases, the energetic electrons may also produce gyrosynchrotron emission from within the CME itself, allowing for a diagnostic of the CME magnetic field strength. Such a magnetic field diagnostic is important for evaluating the total magnetic energy content of the CME, which is ultimately what drives the eruption. Here, we report on an unusually large source of gyrosynchrotron radiation in the form of a type IV radio burst associated with a CME occurring on 2014-September-01, observed using instrumentation from the Nançay Radio Astronomy Facility. A combination of spectral flux density measurements from the Nançay instruments and the Radio Solar Telescope Network (RSTN) from 300 MHz to 5 GHz reveals a gyrosynchrotron spectrum with a peak flux density at 1 GHz. Using this radio analysis, a model for gyrosynchrotron radiation, a non-thermal electron density diagnostic using the Fermi Gamma Ray Burst Monitor (GBM) and images of the eruption from the GOES Soft X-ray Imager (SXI), we were able to calculate both the magnetic field strength and the properties of the X-ray and radio emitting energetic electrons within the CME. We find the radio emission is produced by non-thermal electrons of energies >1 MeV with a spectral index of δ ≈ 3 in a CME magnetic field of 4.4 G at a height of 1.3 R⊙, while the X-ray emission is produced from a similar distribution of electrons but with much lower energies on the order of 10 keV. We conclude by comparing the electron distribution characteristics derived from both X-ray and radio and show how such an analysis can be used to define the plasma and bulk properties of a CME.

  10. The Association Between Sleep Duration and Hand Grip Strength in Community-Dwelling Older Adults: The Yilan Study, Taiwan.

    PubMed

    Chen, Hsi-Chung; Hsu, Nai-Wei; Chou, Pesus

    2017-04-01

    Different pathomechanisms may underlie the age-related decline in muscle mass and muscle power in older adults. This study aimed to examine the independent relationship between sleep duration and muscle power. Older adults, aged 65 years and older, were randomly selected to participate in a community-based survey in Yilan city, Taiwan. Data on self-reported sleep duration, sociodemographic information, lifestyle, chronic medical and mental health conditions, sleep-related parameters, and anthropometric measurements were collected. Participants who slept ≤4 hr, 5 hr, 6-7 hr, 8 hr, and ≥9 hr were defined as shortest, short, mid-range, long, and longest sleepers, respectively. Muscle power was estimated using hand grip strength. A total of 1081 individuals participated. Their average age was 76.3 ± 6.1 years, and 59.4% were female. After controlling for covariates, including muscle mass of the upper extremities, both long (estimated mean [95% confidence interval, CI]: 19.2 [18.2-20.2], p = .03) and longest sleepers (estimated mean [95% CI]: 17.8 [16.4-19.2], p = .001) had weaker hand grip strength than mid-range sleepers (estimated mean [95% CI]: 20.9 [20.3-21.4]). When stratified by sex, the association between longest sleep duration and weaker hand grip strength was noted among men only. Older adults with long sleep duration had weaker hand grip strength irrespective of muscle mass. This finding suggests that decreased muscle power may mediate or confound the relationship between long sleep duration and adverse health outcomes. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  11. Shear and Turbulence Estimates for Calculation of Wind Turbine Loads and Responses Under Hurricane Strength Winds

    NASA Astrophysics Data System (ADS)

    Kosovic, B.; Bryan, G. H.; Haupt, S. E.

    2012-12-01

    Schwartz et al. (2010) recently reported that the total gross energy-generating offshore wind resource in the United States in waters less than 30 m deep is approximately 1000 GW. Estimated offshore generating capacity is thus equivalent to the current generating capacity in the United States. Offshore wind power can therefore play an important role in electricity production in the United States. However, most of this resource is located along the East Coast of the United States and in the Gulf of Mexico, areas frequently affected by tropical cyclones including hurricanes. Hurricane strength winds and the associated shear and turbulence can affect the performance and structural integrity of wind turbines. In a recent study, Rose et al. (2012) attempted to estimate the risk to offshore wind turbines from hurricane strength winds over the lifetime of a wind farm (i.e. 20 years). According to Rose et al., turbine tower buckling has been observed in typhoons. They concluded that there is "substantial risk that Category 3 and higher hurricanes can destroy half or more of the turbines at some locations." More robust designs including appropriate controls can mitigate the risk of wind turbine damage. To develop such designs, good estimates of turbine loads under hurricane strength winds are essential. We use output from a large-eddy simulation of a hurricane to estimate shear and turbulence intensity over the first couple of hundred meters above the sea surface. We compute power spectra of the three velocity components at several distances from the eye of the hurricane. Based on these spectra, analytical spectral forms are developed and included in TurbSim, a stochastic inflow turbulence code developed by the National Renewable Energy Laboratory (NREL, http://wind.nrel.gov/designcodes/preprocessors/turbsim/). TurbSim provides a numerical simulation including bursts of coherent turbulence associated with organized turbulent structures. It can generate realistic flow conditions that an operating turbine would encounter under hurricane strength winds. These flow fields can be used to estimate wind turbine loads and responses with the AeroDyn (http://wind.nrel.gov/designcodes/simulators/aerodyn/) and FAST (http://wind.nrel.gov/designcodes/simulators/fast/) codes, also developed by NREL.

  12. The Effect of Special Operations Training on Testosterone, Lean Body Mass, and Strength and the Potential for Therapeutic Testosterone Replacement: A Review of the Literature

    DTIC Science & Technology

    2016-07-01

    Effects of Testosterone or Anabolic Androgenic Steroid on Body Mass, Lean Body Mass, and Strength in Patients with Disease or Muscle Wasting...of Ranger training reportedly decreased body mass, fat mass, and lean body mass (LBM), with reductions in field measures of strength and power of...Table 3. Effects of Testosterone or Anabolic Androgenic Steroid with Resistance Training on Lean Body Mass and Strength Source Subjects Treatment

  13. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    PubMed

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performed poorly at decreasing the bias of the classical estimator of the synchrony strength. The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates.

  14. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performed poorly at decreasing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates. PMID:24489839

  15. Characteristics of Dust Deposition at High Elevation Sites in Caucasus Over the Past 190 years Recorded in Ice Cores.

    NASA Astrophysics Data System (ADS)

    Kutuzov, Stanislav; Ginot, Patrick; Mikhaenko, Vladimir; Krupskaya, Victoria; Legrand, Michel; Preunkert, Suzanne; Polukhov, Alexey; Khairedinova, Alexandra

    2017-04-01

    The nature and extent of both the radiative and geochemical impacts of mineral dust on snow pack and glaciers depend on the physical and chemical properties of dust particles and their deposition rates. Ice cores can provide information about the amount of dust particles in the atmosphere and their characteristics, and also give insights into the strengths of the dust sources and their changes in the past. A series of shallow ice cores have been obtained in the Caucasus mountains, Russia, in 2004-2015. A 182 meter ice core was recovered at the Western Plateau of Mt. Elbrus (5115 m a.s.l.) in 2009. The ice cores have been dated using stable isotopes, NH4+ and succinic acid data with seasonal resolution. Samples were analysed for chemistry, concentrations of dust and black carbon, and particle size distributions. Dust mineralogy was assessed by XRD. Individual dust particles were analysed using SEM. Dust particle number concentration was measured using the Markus Klotz GmbH (Abakus) counter implemented into the CFA system. Abakus data were calibrated with a Coulter Counter Multisizer 4. Back trajectory cluster analysis was used to assess the main dust source areas. It was shown that the Caucasus region experiences an influx of mineral dust from the Sahara and the deserts of the Middle East. The mineralogy of dust particles of desert origin was significantly different from the local debris material and contained a large proportion of calcite and clay minerals (kaolinite, illite, palygorskite) associated with material of desert origin. The annual dust flux in the Caucasus Mountains was estimated as 300 µg/cm2 a-1. Particle size distribution depends on the individual characteristics of each dust deposition event and also on the elevation of the drilling site. The contribution of desert dust deposition was estimated as 35-40 % of the total dust flux. The average annual Ca2+ concentration over the period from 1824 to 2013 was 150 ppb, while some of the strong dust deposition events led to Ca2+ concentrations reaching 4400 ppb. An increase of dust and Ca2+ concentration has been registered since the beginning of the 20th century. The ice core record also depicts a prominent increase of dust concentration in the 1980s, which may be related to an increase in dust source strength in North Africa.

  16. Body Estimation and Physical Performance: Estimation of Lifting and Carrying from Fat-Free Mass.

    DTIC Science & Technology

    1998-10-30

    demanding Navy jobs is associat- ed with greater rates of low back injuries (Vickers, Hervig and White, 1997). Vickers (personal commu- nication) unpublished...adequate strength to reduce the risk of injury on the job to levels of less demanding jobs. The rate of injury on the job might be reduced if strength...of fatness. Individuals for whom body weight is elevated due to the presence of a large muscle mass (e.g. weightlifters ), do not have the same health

  17. Waveform inversion of volcano-seismic signals for an extended source

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

    We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signature of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.

  18. A Theoretical Model for Estimation of Yield Strength of Fiber Metal Laminate

    NASA Astrophysics Data System (ADS)

    Bhat, Sunil; Nagesh, Suresh; Umesh, C. K.; Narayanan, S.

    2017-08-01

    The paper presents a theoretical model for estimation of the yield strength of a fiber metal laminate. Principles of elasticity and the formulation of residual stress are employed to determine the stress state in the metal layer of the laminate, which is found to be higher than the stress applied over the laminate, resulting in a reduced yield strength of the laminate in comparison with that of the metal layer. The model is tested on the 4A-3/2 Glare laminate comprising three thin aerospace 2014-T6 aluminum alloy layers alternately bonded adhesively with two prepregs, each prepreg built up of three uni-directional glass fiber layers laid in longitudinal and transverse directions. Laminates with prepregs of E-Glass and S-Glass fibers are investigated separately under uni-axial tension. Yield strengths of both Glare variants are found to be less than that of the aluminum alloy, with the use of S-Glass fiber resulting in a higher laminate yield strength than the use of E-Glass fiber. Results from finite element analysis and tensile tests conducted on the laminates substantiate the theoretical model.

  19. Reliability estimation of an N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censoring sample. In the system, there are N subsystems consisting of M statistically independent and identically distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all of the subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under the squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using a Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
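
    As a rough numerical cross-check of such stress-strength reliability calculations, the sketch below runs a Monte Carlo simulation of a deliberately simplified cold-standby system; the generalized half-logistic parameterization and the s-out-of-M survival rule are labelled assumptions and do not reproduce the paper's exact model.

      # Hedged Monte Carlo sketch.  The sampler assumes the Type-I generalized
      # half-logistic form F(x) = 1 - (2*exp(-x)/(1+exp(-x)))**k, and the system
      # logic (each standby subsystem needs at least s of its M component
      # strengths to exceed an independently drawn stress) is a simplified
      # stand-in, not a reproduction of the paper's model.
      import numpy as np

      def rvs_ghl(k, size, rng):
          """Inverse-CDF sampling from the assumed generalized half-logistic form."""
          t = (1.0 - rng.random(size)) ** (1.0 / k)
          return np.log((2.0 - t) / t)

      def system_reliability(N, M, s, k_strength, k_stress, n_sim=200_000, seed=1):
          rng = np.random.default_rng(seed)
          strengths = rvs_ghl(k_strength, (n_sim, N, M), rng)
          stresses = rvs_ghl(k_stress, (n_sim, N, 1), rng)
          subsystem_ok = (strengths > stresses).sum(axis=2) >= s   # s-out-of-M survival
          return subsystem_ok.any(axis=1).mean()                   # any standby survives

      print(system_reliability(N=2, M=3, s=2, k_strength=2.0, k_stress=1.0))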

  20. A New Stochastic Approach to Predict Peak and Residual Shear Strength of Natural Rock Discontinuities

    NASA Astrophysics Data System (ADS)

    Casagrande, D.; Buzzi, O.; Giacomini, A.; Lambert, C.; Fenton, G.

    2018-01-01

    Natural discontinuities are known to play a key role in the stability of rock masses. However, it is a non-trivial task to estimate the shear strength of large discontinuities. Because of the inherent difficulty of accessing the full surface of large in situ discontinuities, researchers and engineers tend to work on small-scale specimens. As a consequence, the results are often plagued by the well-known scale effect. A new approach is proposed here to predict the shear strength of discontinuities. This approach has the potential to avoid the scale effect. The rationale of the approach is as follows: a major parameter that governs the shear strength of a discontinuity within a rock mass is roughness, which can be accounted for by surveying the discontinuity surface. However, this is typically not possible for discontinuities contained within the rock mass, where only traces are visible. For natural surfaces, it can be assumed that traces are, to some extent, representative of the surface. It is proposed here to use the available 2D information (from a visible trace, referred to as a seed trace) and a random field model to create a large number of synthetic surfaces (3D data sets). The shear strength of each synthetic surface can then be estimated using a semi-analytical model. By using a large number of synthetic surfaces and a Monte Carlo strategy, a meaningful shear strength distribution can be obtained. This paper presents the validation of the semi-analytical mechanistic model required to support the new approach for prediction of discontinuity shear strength. The model can predict both peak and residual shear strength. The second part of the paper lays the foundation of a random field model to support the creation of synthetic surfaces having statistical properties in line with those of the data of the seed trace. The paper concludes that it is possible to obtain a reasonable estimate of the peak and residual shear strength of the discontinuities tested from the information of a single trace, without having access to the whole surface.

  1. Strength and Deformability of Light-toned Layered Deposits Observed by MER Opportunity: Eagle to Erebus Craters

    NASA Astrophysics Data System (ADS)

    Okubo, C. H.; Schultz, R. A.; Nahm, A. L.

    2007-07-01

    The strength and deformability of light-toned layered deposits are estimated based on measurements of porosity from Microscopic Imager data acquired by MER Opportunity during its traverse from Eagle Crater to Erebus Crater.

  2. Use of surrogate technologies to estimate suspended sediment in the Clearwater River, Idaho, and Snake River, Washington, 2008-10

    USGS Publications Warehouse

    Wood, Molly S.; Teasdale, Gregg N.

    2013-01-01

    Elevated levels of fluvial sediment can reduce the biological productivity of aquatic systems, impair freshwater quality, decrease reservoir storage capacity, and decrease the capacity of hydraulic structures. The need to measure fluvial sediment has led to the development of sediment surrogate technologies, particularly in locations where streamflow alone is not a good estimator of sediment load because of regulated flow, load hysteresis, episodic sediment sources, and non-equilibrium sediment transport. An effective surrogate technology is low maintenance and sturdy over a range of hydrologic conditions, and measured variables can be modeled to estimate suspended-sediment concentration (SSC), load, and duration of elevated levels on a real-time basis. Among the most promising techniques is the measurement of acoustic backscatter strength using acoustic Doppler velocity meters (ADVMs) deployed in rivers. The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, Walla Walla District, evaluated the use of acoustic backscatter, turbidity, laser diffraction, and streamflow as surrogates for estimating real-time SSC and loads in the Clearwater and Snake Rivers, which adjoin in Lewiston, Idaho, and flow into Lower Granite Reservoir. The study was conducted from May 2008 to September 2010 and is part of the U.S. Army Corps of Engineers Lower Snake River Programmatic Sediment Management Plan to identify and manage sediment sources in basins draining into lower Snake River reservoirs. Commercially available acoustic instruments have shown great promise in sediment surrogate studies because they require little maintenance and measure profiles of the surrogate parameter across a sampling volume rather than at a single point. The strength of acoustic backscatter theoretically increases as more particles are suspended in the water to reflect the acoustic pulse emitted by the ADVM. ADVMs of different frequencies (0.5, 1.5, and 3 megahertz) were tested to target various sediment grain sizes. Laser diffraction and turbidity also were tested as surrogate technologies. Models between SSC and surrogate variables were developed using ordinary least-squares regression. Acoustic backscatter using the high frequency ADVM at each site was the best predictor of sediment, explaining 93 and 92 percent of the variability in SSC and matching sediment sample data within ±8.6 and ±10 percent, on average, at the Clearwater River and Snake River study sites, respectively. Additional surrogate models were developed to estimate sand and fines fractions of suspended sediment based on acoustic backscatter. Acoustic backscatter generally appears to be a better estimator of suspended sediment concentration and load over short (storm event and monthly) and long (annual) time scales than transport curves derived solely from the regression of conventional sediment measurements and streamflow. Changing grain sizes, the presence of organic matter, and aggregation of sediments in the river likely introduce some variability in the model between acoustic backscatter and SSC.
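
    The surrogate-model step described above (an ordinary least-squares fit of log-transformed SSC against backscatter, then conversion to load with streamflow) can be sketched as follows; the data points are synthetic placeholders, not measurements from the Clearwater or Snake River sites.

      # Hedged sketch: OLS fit of log10(SSC) against acoustic backscatter, then a
      # real-time SSC and load estimate.  All numbers are synthetic placeholders.
      import numpy as np

      backscatter_db = np.array([55.0, 60.0, 65.0, 70.0, 75.0, 80.0])   # surrogate
      ssc_mg_l = np.array([4.0, 9.0, 22.0, 60.0, 140.0, 330.0])         # sampled SSC

      slope, intercept = np.polyfit(backscatter_db, np.log10(ssc_mg_l), 1)

      def estimate_ssc(db):
          """Real-time SSC estimate (mg/L) from measured backscatter (dB)."""
          return 10.0 ** (intercept + slope * db)

      # Load (tonnes/day) then follows from SSC and streamflow Q (m^3/s):
      # load = SSC * Q * 0.0864.
      print(round(estimate_ssc(68.0), 1), round(estimate_ssc(68.0) * 50.0 * 0.0864, 1))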

  3. Comparison of Source Partitioning Methods for CO2 and H2O Fluxes Based on High Frequency Eddy Covariance Data

    NASA Astrophysics Data System (ADS)

    Klosterhalfen, Anne; Moene, Arnold; Schmidt, Marius; Ney, Patrizia; Graf, Alexander

    2017-04-01

    Source partitioning of eddy covariance (EC) measurements of CO2 into respiration and photosynthesis is routinely used for a better understanding of the exchange of greenhouse gases, especially between terrestrial ecosystems and the atmosphere. The most frequently used methods are usually based either on relations of fluxes to environmental drivers or on chamber measurements. However, they often depend strongly on assumptions or invasive measurements and usually do not offer partitioning estimates for latent heat fluxes into evaporation and transpiration. Scanlon and Sahu (2008) and Scanlon and Kustas (2010) proposed a promising method to estimate the contributions of transpiration and evaporation using measured high frequency time series of CO2 and H2O fluxes - no extra instrumentation necessary. This method (SK10 in the following) is based on the spatial separation and relative strength of sources and sinks of CO2 and water vapor between the sub-canopy and the canopy. Assuming that air from those sources and sinks is not yet perfectly mixed before reaching the EC sensors, partitioning is estimated based on the separate application of flux-variance similarity theory to the stomatal and non-stomatal components of the regarded fluxes, as well as on additional assumptions on stomatal water use efficiency (WUE). The CO2 partitioning method after Thomas et al. (2008) (TH08 in the following) also follows the argument that the dissimilarities of sources and sinks in and below a canopy affect the relation between H2O and CO2 fluctuations. Instead of involving assumptions on WUE, TH08 directly screens their scattergram for signals of joint respiration and evaporation events and applies a conditional sampling methodology. In spite of their different main targets (H2O vs. CO2), both methods can yield partitioning estimates for both fluxes. We therefore compare various sub-methods of SK10 and TH08, including our own modifications (e.g., cluster analysis), to each other, to established source partitioning methods, and to chamber measurements at various agroecosystems. Further, profile measurements and a canopy-resolving Large Eddy Simulation model are used to test the assumptions involved in SK10. Scanlon, T.M., Kustas, W.P., 2010. Partitioning carbon dioxide and water vapor fluxes using correlation analysis. Agricultural and Forest Meteorology 150 (1), 89-99. Scanlon, T.M., Sahu, P., 2008. On the correlation structure of water vapor and carbon dioxide in the atmospheric surface layer: A basis for flux partitioning. Water Resources Research 44 (10), W10418, 15 pp. Thomas, C., Martin, J.G., Goeckede, M., Siqueira, M.B., Foken, T., Law, B.E., Loescher H.W., Katul, G., 2008. Estimating daytime subcanopy respiration from conditional sampling methods applied to multi-scalar high frequency turbulence time series. Agricultural and Forest Meteorology 148 (8-9), 1210-1229.

  4. Sensitivity of Global Methane Bayesian Inversion to Surface Observation Data Sets and Chemical-Transport Model Resolution

    NASA Astrophysics Data System (ADS)

    Lew, E. J.; Butenhoff, C. L.; Karmakar, S.; Rice, A. L.; Khalil, A. K.

    2017-12-01

    Methane is the second most important greenhouse gas after carbon dioxide. In efforts to control emissions, a careful examination of the methane budget and source strengths is required. To determine methane surface fluxes, Bayesian methods are often used to provide top-down constraints. Inverse modeling derives unknown fluxes using observed methane concentrations, a chemical transport model (CTM) and prior information. The Bayesian inversion reduces prior flux uncertainties by exploiting information content in the data. While the Bayesian formalism produces internal error estimates of source fluxes, systematic or external errors that arise from user choices in the inversion scheme are often much larger. Here we examine model sensitivity and uncertainty of our inversion under different observation data sets and CTM grid resolutions. We compare posterior surface fluxes using the data product GLOBALVIEW-CH4 against the event-level molar mixing ratio data available from NOAA. GLOBALVIEW-CH4 is a collection of CH4 concentration estimates from 221 sites, collected by 12 laboratories, that have been interpolated and extracted to provide weekly records from 1984-2008. In contrast, the event-level NOAA data record methane mixing ratio field measurements from 102 sites, with irregular sampling frequencies and gaps in time. Furthermore, the sampling platform types used by the data sets may influence the posterior flux estimates, namely fixed surface, tower, ship and aircraft sites. To explore the sensitivity of the posterior surface fluxes to the observation network geometry, inversions composed of all sites, only aircraft, only ship, only tower, and only fixed surface sites are performed and compared. Also, we investigate the sensitivity of the error reduction to the resolution of the GEOS-Chem simulation (4°×5° vs 2°×2.5°) used to calculate the response matrix. Using a higher resolution grid decreased the model-data error at most sites, thereby increasing the information content at those sites. These different inversions (event-level vs. interpolated data, higher vs. lower resolution) are compared using an ensemble of descriptive and comparative statistics. Analyzing the sensitivity of the inverse model leads to more accurate estimates of the methane source category uncertainty.

  5. Improving volcanic ash predictions with the HYSPLIT dispersion model by assimilating MODIS satellite retrievals

    NASA Astrophysics Data System (ADS)

    Chai, Tianfeng; Crawford, Alice; Stunder, Barbara; Pavolonis, Michael J.; Draxler, Roland; Stein, Ariel

    2017-02-01

    Currently, the National Oceanic and Atmospheric Administration (NOAA) National Weather Service (NWS) runs the HYSPLIT dispersion model with a unit mass release rate to predict the transport and dispersion of volcanic ash. The model predictions provide information for the Volcanic Ash Advisory Centers (VAAC) to issue advisories to meteorological watch offices, area control centers, flight information centers, and others. This research aims to provide quantitative forecasts of ash distributions generated by objectively and optimally estimating the volcanic ash source strengths, vertical distribution, and temporal variations using an observation-modeling inversion technique. In this top-down approach, a cost functional is defined to quantify the differences between the model predictions and the satellite measurements of column-integrated ash concentrations weighted by the model and observation uncertainties. Minimizing this cost functional by adjusting the sources provides the volcanic ash emission estimates. As an example, MODIS (Moderate Resolution Imaging Spectroradiometer) satellite retrievals of the 2008 Kasatochi volcanic ash clouds are used to test the HYSPLIT volcanic ash inverse system. Because the satellite retrievals include the ash cloud top height but not the bottom height, there are different model diagnostic choices for comparing the model results with the observed mass loadings. Three options are presented and tested. Although the emission estimates vary significantly with different options, the subsequent model predictions with the different release estimates all show decent skill when evaluated against the unassimilated satellite observations at later times. Among the three options, integrating over three model layers yields slightly better results than integrating from the surface up to the observed volcanic ash cloud top or using a single model layer. Inverse tests also show that including the ash-free region to constrain the model is not beneficial for the current case. In addition, extra constraints on the source terms can be given by explicitly enforcing no-ash for the atmosphere columns above or below the observed ash cloud top height. However, in this case such extra constraints are not helpful for the inverse modeling. It is also found that simultaneously assimilating observations at different times produces better hindcasts than only assimilating the most recent observations.
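
    The source-term estimation described above minimizes an uncertainty-weighted misfit between modeled and observed mass loadings subject to non-negative emissions; the sketch below illustrates that step with a synthetic transfer matrix (not HYSPLIT output) and non-negative least squares.

      # Hedged sketch of the source-term estimation step: minimise a cost
      # || (H q - y) / sigma ||^2 over non-negative emission rates q, where each
      # column of H is the unit-emission mass loading predicted for one release
      # layer/time.  The transfer matrix and "observations" are synthetic
      # placeholders, not HYSPLIT output or MODIS retrievals.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      n_obs, n_src = 60, 8
      H = rng.uniform(0.0, 1.0, size=(n_obs, n_src))              # unit-source sensitivities
      q_true = np.array([0.0, 0.0, 3.0, 5.0, 2.0, 0.0, 0.0, 0.0]) # true release profile
      sigma = 0.2 * np.ones(n_obs)                                # observation uncertainty
      y = H @ q_true + rng.normal(0.0, sigma)                     # synthetic mass loadings

      # Weight rows by 1/sigma so the cost matches the uncertainty-weighted functional
      q_est, _ = nnls(H / sigma[:, None], y / sigma)
      print(np.round(q_est, 2))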

  6. Estimation of the solar Lyman alpha flux from ground based measurements of the Ca II K line

    NASA Technical Reports Server (NTRS)

    Rottman, G. J.; Livingston, W. C.; White, O. R.

    1990-01-01

    Measurements of the solar Lyman alpha and Ca II K from October 1981 to April 1989 show a strong correlation (r = 0.95) that allows estimation of the Lyman alpha flux at 1 AU from 1975 to December 1989. The estimated Lyman alpha strength of (3.9 ± 0.15) × 10^11 photons s^-1 cm^-2 on December 7, 1989 is at the same maximum levels seen in Cycle 21. Relative to other UV surrogates (sunspot number, 10.7 cm radio flux, and He I 10830 line strength), Lyman alpha estimates computed from the K line track the SME measurements well from solar maximum, through solar minimum, and into Cycle 22.
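
    The proxy reconstruction described above is, at its core, a linear regression of Lyman alpha flux on the Ca II K index; the sketch below illustrates it with synthetic index and flux values, not the actual SME or Kitt Peak data.

      # Hedged sketch of a proxy reconstruction: regress measured Lyman-alpha flux
      # on the Ca II K index, then evaluate the regression for dates with K-line
      # data only.  All values below are synthetic placeholders.
      import numpy as np

      k_index = np.array([0.085, 0.088, 0.092, 0.095, 0.099])        # Ca II K proxy
      lya_flux = np.array([2.6, 3.0, 3.3, 3.6, 3.9]) * 1e11          # photons s^-1 cm^-2

      slope, intercept = np.polyfit(k_index, lya_flux, 1)
      corr = np.corrcoef(k_index, lya_flux)[0, 1]

      def lya_from_k(k):
          """Estimated Lyman-alpha flux at 1 AU from the Ca II K index."""
          return intercept + slope * k

      print(round(corr, 3), f"{lya_from_k(0.097):.2e}")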

  7. Comparison of data transformation procedures to enhance topographical accuracy in time-series analysis of the human EEG.

    PubMed

    Hauk, O; Keil, A; Elbert, T; Müller, M M

    2002-01-30

    We describe a methodology to apply current source density (CSD) and minimum norm (MN) estimation as pre-processing tools for time-series analysis of single trial EEG data. The performance of these methods is compared for the case of wavelet time-frequency analysis of simulated gamma-band activity. A reasonable comparison of CSD and MN on the single trial level requires regularization such that the corresponding transformed data sets have similar signal-to-noise ratios (SNRs). For region-of-interest approaches, it should be possible to optimize the SNR for single estimates rather than for the whole distributed solution. An effective implementation of the MN method is described. Simulated data sets were created by modulating the strengths of a radial and a tangential test dipole with wavelets in the frequency range of the gamma band, superimposed with simulated spatially uncorrelated noise. The MN and CSD transformed data sets as well as the average reference (AR) representation were subjected to wavelet frequency-domain analysis, and power spectra were mapped for relevant frequency bands. For both CSD and MN, the influence of noise can be sufficiently suppressed by regularization to yield meaningful information, but only MN represents both radial and tangential dipole sources appropriately as single peaks. Therefore, when relating wavelet power spectrum topographies to their neuronal generators, MN should be preferred.
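
    For reference, a regularized minimum-norm estimate of the kind used above can be written in a few lines; the leadfield, data and SNR-based choice of the regularization parameter below are illustrative assumptions, not the paper's exact implementation.

      # Hedged sketch of a regularised minimum-norm estimate (MNE) as a spatial
      # pre-processing step: source amplitudes x = L^T (L L^T + lambda*I)^{-1} y.
      import numpy as np

      def minimum_norm(L, y, snr=3.0):
          """MNE inverse operator applied to one data vector y (n_sensors,)."""
          lam = np.trace(L @ L.T) / (L.shape[0] * snr**2)   # regularisation strength
          G = L @ L.T + lam * np.eye(L.shape[0])
          return L.T @ np.linalg.solve(G, y)

      rng = np.random.default_rng(0)
      n_sensors, n_sources = 32, 500
      L = rng.standard_normal((n_sensors, n_sources))        # placeholder leadfield
      x_true = np.zeros(n_sources); x_true[123] = 1.0        # single active source
      y = L @ x_true + 0.1 * rng.standard_normal(n_sensors)  # noisy sensor data
      print(int(np.abs(minimum_norm(L, y)).argmax()))        # typically peaks near 123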

  8. Simulated Martian pressure cycle based on the sublimation and deposition of polar CO2

    NASA Astrophysics Data System (ADS)

    Kemppinen, Osku; Paton, Mark; Savijärvi, Hannu; Harri, Ari-Matti

    2014-05-01

    The Martian atmospheric pressure cycle is driven by sublimation and deposition of CO2 at polar caps. In the thin atmosphere of Mars the surface energy balance and thus the phase changes of CO2 are dominated by radiation. Additionally, because the atmosphere is so thin, the annual polar cap cycle can have a large relative effect on the pressure. In this work we utilize radiative transfer models to calculate the amount of radiation incoming to Martian polar latitudes over each sol of the year, as well as the amount of energy lost from the surface due to thermal radiation. The energy budget calculated in this way allows us to estimate the amount of CO2 sublimating and depositing at each hour of the Martian year. Since virtually all of the sublimated CO2 is believed to enter and stay in the atmosphere until depositing, this estimate allows us to calculate the annual pressure cycle, assuming that the CO2 is distributed approximately evenly over the planet. The model runs with physically plausible parameters and produces encouragingly good fits to in situ data measured by, e.g., the Viking landers. In the next phase we will validate the simulation runs against polar ice cap thickness measurements as well as compare the calculated CO2 source and sink strengths to the sources and sinks of global atmospheric models.
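
    A toy version of the energy-budget bookkeeping this record describes is given below: at each step the CO2 mass change at the cap is the net radiative energy divided by the latent heat of sublimation. The constants, albedo, emissivity, and flux series are assumed values, not the authors' model inputs.

      # Toy CO2 cap energy budget: positive net flux sublimates ice into the atmosphere,
      # negative net flux deposits it.  All parameters are illustrative assumptions.
      import numpy as np

      L_SUB   = 5.9e5          # J kg^-1, latent heat of CO2 sublimation
      SIGMA   = 5.67e-8        # W m^-2 K^-4
      T_FROST = 148.0          # K, approximate CO2 frost temperature
      EMISS   = 0.95           # assumed surface emissivity
      ALBEDO  = 0.6            # assumed cap albedo
      DT      = 3600.0         # s, one-hour step

      # assumed insolation over the daylight hours of one sol (W m^-2)
      solar_flux = np.clip(200.0 * np.sin(np.linspace(0, np.pi, 24)), 0, None)

      ice_column = 100.0       # kg m^-2 of CO2 ice initially on the ground
      for f in solar_flux:
          net = (1.0 - ALBEDO) * f - EMISS * SIGMA * T_FROST**4   # W m^-2
          ice_column -= net * DT / L_SUB                          # mass exchanged with atmosphere
      print(f"CO2 ice column after the daylight hours: {ice_column:.1f} kg m^-2")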

  9. Characterization of Wisconsin mixture low temperature properties for the AASHTO mechanistic-empirical pavement design guide.

    DOT National Transportation Integrated Search

    2011-12-01

    This research evaluated the low temperature creep compliance and tensile strength properties of Wisconsin mixtures. : Creep compliance and tensile strength data were collected for 16 Wisconsin mixtures representing commonly used : aggregate sources a...

  10. Strength conditions for the elastic structures with a stress error

    NASA Astrophysics Data System (ADS)

    Matveev, A. D.

    2017-10-01

    As is known, constraints (strength conditions) on the safety factor are established for elastic structures and design details of a particular class, e.g. aviation structures; that is, the safety factor values of such structures should lie within a given range. It should be noted that these constraints are set for safety factors corresponding to analytical (exact) solutions of the elasticity problems formulated for the structures. Developing analytical solutions for most structures, especially irregularly shaped ones, involves great difficulties. Approximate approaches to solving the elasticity problems, e.g. the technical theories of deformation of homogeneous and composite plates, beams and shells, are widely used for a great number of structures. Technical theories based on simplifying hypotheses give rise to approximate (technical) solutions with an irreducible error whose exact value is difficult to determine. In static strength calculations with a narrow specified range for the safety factor, the application of technical (Strength of Materials) solutions is therefore problematic. However, numerical methods exist for developing approximate solutions of elasticity problems with arbitrarily small errors. In the present paper, adjusted reference (specified) strength conditions are proposed for the structural safety factor corresponding to an approximate solution of the elasticity problem. The proposed strength conditions take the stress error estimate into account. It is shown that, in order to fulfill the specified strength conditions for the safety factor of a given structure corresponding to the exact solution, the adjusted strength conditions for the safety factor corresponding to an approximate solution must be satisfied. The stress error estimate on which the adjusted conditions are based is determined for the specified strength conditions. Adjusted strength conditions expressed in terms of allowable stresses are also suggested. The adjusted strength conditions make it possible to determine the set of approximate solutions for which the specified strength conditions are met. Examples are given of specified strength conditions satisfied using technical (Strength of Materials) solutions and strength conditions, as well as examples of strength conditions satisfied using approximate solutions with a small error.

  11. Quantification of CO2 and CH4 megacity emissions using portable solar absorption spectrometers

    NASA Astrophysics Data System (ADS)

    Frey, Matthias; Hase, Frank; Blumenstock, Thomas; Morino, Isamu; Shiomi, Kei

    2017-04-01

    Urban areas already account for over 50% of the global population, and the percentage of the worldwide population living in metropolitan areas is continuously growing. Thus, precise knowledge of urban greenhouse gas (GHG) emissions is of utmost importance. Whereas GHG emissions on a nationwide to continental scale can be estimated relatively precisely using satellite observations (and fossil fuel consumption statistics), reliable estimates of local to regional scale emissions pose a bigger problem due to the lack of temporally and spatially highly resolved satellite data and to possible biases of passive spectroscopic nadir observations (e.g. enhanced aerosol scattering in a city plume). Furthermore, emission inventories on the city scale might be missing contributions (e.g. methane leakage from gas pipes). Here, newly developed mobile low-resolution Fourier transform spectrometers (Bruker EM27/SUN) are utilized to quantify small-scale emissions. This novel technique was successfully tested previously by KIT and partners during campaigns in Berlin, Paris and Colorado for detecting emissions from various sources. We present results from a campaign carried out in February-April 2016 in the Tokyo Bay area, one of the biggest metropolitan areas worldwide. We positioned two EM27/SUN spectrometers on the outer perimeter of Tokyo along the prevailing wind axis, upwind and downwind of the city source. Before and after the campaign, calibration measurements were performed in Tsukuba with a collocated high-resolution FTIR spectrometer from the Total Carbon Column Observing Network (TCCON). During the campaign the observed XCO2 and XCH4 values vary significantly. Additionally, intraday variations are observed at both sites. Furthermore, an enhancement due to the Tokyo area GHG emissions is clearly visible for both XCO2 and XCH4. The observed signals are significantly higher than in prior campaigns targeting other major cities. We perform a rough estimate of the source strength. Finally, a comparison with an observation from the OCO-2 satellite is shown.
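
    A back-of-the-envelope mass-balance sketch of the "rough estimate of the source strength" step is shown below: the downwind-minus-upwind column enhancement is converted to a column density and multiplied by wind speed and an assumed cross-wind extent of the source area. All numbers are illustrative, not campaign results.

      # Crude mass-balance estimate of a city CO2 source from a column enhancement.
      # Enhancement, wind speed, and plume width are assumed values.
      P_SURF = 101325.0         # Pa
      G      = 9.81             # m s^-2
      M_AIR  = 0.02896          # kg mol^-1 (dry air)
      M_CO2  = 0.044            # kg mol^-1

      air_column = P_SURF / (G * M_AIR)         # mol of dry air per m^2 (~3.6e5)

      dxco2_ppm   = 1.5                         # downwind minus upwind XCO2 enhancement
      wind_speed  = 5.0                         # m s^-1, along the prevailing wind axis
      plume_width = 5.0e4                       # m, assumed cross-wind extent of the city

      flux_mol  = dxco2_ppm * 1e-6 * air_column * wind_speed * plume_width   # mol s^-1
      flux_mt_y = flux_mol * M_CO2 * 3.15e7 / 1e9                            # Mt CO2 per year
      print(f"rough CO2 source strength: {flux_mt_y:.0f} Mt/yr")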

  12. Contribution of flexor pollicis longus to pinch strength: an in vivo study.

    PubMed

    Goetz, Thomas J; Costa, Joseph A; Slobogean, Gerard; Patel, Satyam; Mulpuri, Kishore; Travlos, Andrew

    2012-11-01

    To estimate the contribution of the flexor pollicis longus (FPL) to key pinch strength. Secondary outcomes include tip pinch, 3-point chuck pinch, and grip strength. Eleven healthy volunteers consented to participate in the study. We recorded baseline measures for key, 3-point chuck, and tip pinch and for grip strength. In order to control for instability of the interphalangeal (IP) joint after FPL paralysis, pinch measurements were repeated after immobilizing the thumb IP joint. Measures were repeated after subjects underwent electromyography-guided lidocaine blockade of the FPL muscle. Nerve conduction studies and clinical examinations were used to confirm FPL blockade and to rule out median nerve blockade. Paired t-tests were used to compare pre- and postblock means for both unsplinted and splinted measures. The difference in means was used to estimate the contribution of FPL to pinch strength. All 3 types of pinch strength showed a significant decrease between pre- and postblock measurements. The relative contribution of FPL for each pinch type was 56%, 44%, and 43% for key, chuck, and tip pinch, respectively. Mean grip strength did not decrease significantly. Splinting of the IP joint had no significant effect on pinch measurements. FPL paralysis resulted in a statistically significant decrease in pinch strength. IP joint immobilization to simulate IP joint fusion did not affect results. Reconstruction after acute or chronic loss of FPL function should be considered when restoration of pinch strength is important. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
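
    The analysis in this record reduces to paired pre/post-block comparisons; a minimal sketch is given below, with invented measurements. The FPL contribution is taken here as the mean relative drop, which is one reasonable reading of the abstract, not necessarily the authors' exact formula.

      # Paired t-test on pre/post-block key pinch and a relative-drop estimate of the
      # FPL contribution.  The eleven values are invented for illustration.
      import numpy as np
      from scipy import stats

      pre_block  = np.array([8.2, 7.5, 9.1, 6.8, 7.9, 8.4, 7.2, 9.0, 6.5, 8.8, 7.7])  # kg
      post_block = np.array([3.6, 3.4, 4.1, 3.0, 3.5, 3.8, 3.1, 4.0, 2.9, 3.9, 3.4])  # kg

      t_stat, p_value = stats.ttest_rel(pre_block, post_block)
      contribution = np.mean((pre_block - post_block) / pre_block) * 100.0
      print(f"paired t = {t_stat:.1f}, p = {p_value:.1e}, FPL contribution ~= {contribution:.0f}%")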

  13. In-duct identification of fluid-borne source with high spatial resolution

    NASA Astrophysics Data System (ADS)

    Heo, Yong-Ho; Ih, Jeong-Guon; Bodén, Hans

    2014-11-01

    Source identification of acoustic characteristics of in-duct fluid machinery is required for coping with fluid-borne noise. By knowing the acoustic pressure and particle velocity field at the source plane in detail, the sound generation mechanism of a fluid machine can be understood. The identified spatial distribution of the strength of the major radiators would be useful for low-noise design. Conventional methods for measuring the source in a wide duct have not been very helpful in investigating the source properties in detail because their spatial resolution is inadequate for design purposes. In this work, an inverse method to estimate the source parameters with a high spatial resolution is studied. The theoretical formulation, including the evanescent modes and near-field measurement data, is given for a wide duct. After validating the proposed method on a duct excited by an acoustic driver, an experiment on a duct system driven by an air blower is conducted in the presence of flow. A convergence test for the evanescent modes is performed to find the number of modes necessary to regenerate the measured pressure field precisely. Using the converged modal amplitudes, the near-field pressure very close to the source is reconstructed and compared with the measured pressure; the maximum error was -16.3 dB. The source parameters are restored from the converged modal amplitudes. The distribution of source parameters on the driver and the blower is then clearly revealed with a high spatial resolution for kR < 1.84, the range in which only plane waves can propagate to the far field in a duct. Measurement using a flush-mounted sensor array is discussed, and the removal of pure radial modes in the modeling is suggested.

  14. Mechanical sea-ice strength parameterized as a function of ice temperature

    NASA Astrophysics Data System (ADS)

    Hata, Yukie; Tremblay, Bruno

    2016-04-01

    Mechanical sea-ice strength is key for better simulation of the timing of landfast ice onset and break-up in the Canadian Arctic Archipelago (CAA). We estimate the mechanical strength of sea ice in the CAA by analyzing the position records of several buoys deployed in the CAA between 2008 and 2013, together with wind data from the Canadian Meteorological Centre's Global Deterministic Prediction System (CMC_GDPS) REforecasts (CGRF). First, we calculate the total force acting on the ice using the wind data. Next, we estimate upper (lower) bounds on the sea-ice strength by identifying cases when the sea ice deforms (does not deform) under the action of a given total force. Results from this analysis show that the strength of landfast sea ice in the CAA is approximately 40 kN/m at landfast ice onset (in the ice growth season). It decreases to approximately 10 kN/m at landfast ice break-up (in the melt season). The ice strength decreases as the ice temperature increases, in accord with the results of Johnston [2006]. We also include this new parametrization of sea-ice strength as a function of ice temperature in a coupled slab ocean sea ice model. The results from the model with and without the new parametrization are compared with buoy data from the International Arctic Buoy Program (IABP).
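
    A simplified illustration of the bracketing argument is sketched below: wind stress integrated over an assumed fetch gives a force per unit width; events with no observed buoy motion raise the lower bound on strength, events with motion lower the upper bound. Drag coefficient, fetch, and wind speeds are assumed values, not the study's data.

      # Bracketing ice strength from wind forcing and observed (non-)deformation.
      # All parameters and events are illustrative assumptions.
      RHO_AIR = 1.3       # kg m^-3
      C_D     = 1.5e-3    # assumed air-ice drag coefficient
      FETCH   = 2.0e5     # m, assumed upwind extent over which stress accumulates

      def force_per_width(u10):
          """Wind force per unit width of ice (N/m) for 10-m wind speed u10 (m/s)."""
          return RHO_AIR * C_D * u10**2 * FETCH

      # (wind speed, did the buoys record deformation?) for a few hypothetical events
      events = [(8.0, False), (12.0, False), (15.0, True), (18.0, True)]

      lower = max(force_per_width(u) for u, moved in events if not moved)
      upper = min(force_per_width(u) for u, moved in events if moved)
      print(f"ice strength bracketed between {lower/1e3:.0f} and {upper/1e3:.0f} kN/m")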

  15. Analysis of Chinese emissions trends of major halocarbons in monitoring the impacts of the Montreal Protocol

    NASA Astrophysics Data System (ADS)

    Li, S.; Park, S.; Park, M.; Kim, J.; Muhle, J.; Fang, X.; Stohl, A.; Weiss, R. F.; Kim, K.

    2013-12-01

    In this study we estimate the emission rates of anthropogenic halocarbons, including CFC-11, CFC-12, HCFC-22, HCFC-141b, HCFC-142b, HFC-23, HFC-134a, HFC-32, HFC-125 and HFC-152a, for China during the period 2008-2012 using an interspecies correlation method (Kim et al., 2010; Li et al., 2011), a unique 'top-down' approach using in situ high-precision measurements at Gosan, a remote station on Jeju Island, Korea. Mixing ratios of ambient halocarbons have been measured every two hours using a cryogenic pre-concentration system coupled with a gas chromatograph and mass selective detector (GC-MSD) as part of the Advanced Global Atmospheric Gases Experiment network. We first separated air-mass segments originating from China using a back-trajectory analysis to identify Chinese emissions in the observations, and found that the mixing ratios of most compounds showed significant correlations against those of HCFC-22. Based on these correlations, we analyzed the emission strengths of individual compounds, which correspond to their slopes against HCFC-22, since the slope can serve as a useful proxy for their emission trends under the assumption of relatively constant HCFC-22 emission during the analysis period. The analysis showed an increase of about 14% in the emission strengths of CFCs (mainly due to CFC-12) between 2008 and 2012 in China. Interestingly, HCFC-141b and HCFC-142b, which are commonly used as foam blowing agents, revealed opposite trends in their emission strengths: a ca. 48% increase for HCFC-141b versus a ca. 22% decrease for HCFC-142b, suggesting the possibility of other major sources in the case of China. The emission strengths of HFCs have been increasing due to significant emissions of HFC-32, HFC-125 and HFC-134a during the analysis period. However, HFC-23, a well-known byproduct of HCFC-22 production processes, showed a decrease of about 22% in emission strength. The reduction in HFC-23 emissions is most likely due to the nationwide effort under the Clean Development Mechanism of the Kyoto Protocol. Emission rates of the halocarbons determined from the empirical emission strengths will vary according to the emission trend of our reference species, HCFC-22, in China from 2008 to 2012. Annual and average HCFC-22 emissions from 2008 to 2012 will be calculated with an inverse method based on the FLEXPART transport model. More detailed discussion of the emission rate estimation and its related caveats will be given in the presentation, but overall our analysis highlights the significance of long-term continuous monitoring of CFCs, HCFCs and HFCs in China to investigate the impacts of Montreal Protocol regulations.
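
    The interspecies-correlation idea can be sketched in a few lines: regress the target-compound enhancement against the HCFC-22 enhancement in China-origin air masses, then scale an assumed reference HCFC-22 emission by the slope (converted from molar to mass units). Data and the reference emission below are illustrative assumptions.

      # Interspecies correlation (ISC) sketch for a tracer-ratio emission estimate.
      # Enhancements and the reference emission are invented for illustration.
      import numpy as np

      hcfc22  = np.array([20., 35., 50., 80., 110., 150.])   # ppt enhancement above baseline
      hfc134a = np.array([ 6., 10., 15., 24.,  33.,  45.])   # ppt enhancement, same air masses

      M_HCFC22, M_HFC134A = 86.5, 102.0       # g mol^-1
      E_HCFC22 = 100.0                        # kt yr^-1, assumed reference HCFC-22 emission

      slope = np.polyfit(hcfc22, hfc134a, 1)[0]               # molar enhancement ratio
      E_hfc134a = E_HCFC22 * slope * M_HFC134A / M_HCFC22     # kt yr^-1
      print(f"slope = {slope:.3f}, estimated HFC-134a emission ~ {E_hfc134a:.0f} kt/yr")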

  16. Global Atmospheric Chemistry/Transport Modeling and Data-Analysis

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    1999-01-01

    This grant supported a global atmospheric chemistry/transport modeling and data-analysis project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for trace gases; (b) utilization of these inverse methods, which use either the Model for Atmospheric Chemistry and Transport (MATCH), which is based on analyzed observed winds, or back-trajectories calculated from these same winds, for determining regional and global source and sink strengths for long-lived trace gases important in ozone depletion and the greenhouse effect; (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple "titrating" gases; and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important ultimate goals included determination of regional source strengths of important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements, and hydrohalocarbons now used as alternatives to the above restricted halocarbons.

  17. Interpretation of Trace Gas Data Using Inverse Methods and Global Chemical Transport Models

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    1997-01-01

    This is a theoretical research project aimed at: (1) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (2) utilization of inverse methods to determine these source/sink strengths which use the NCAR/Boulder CCM2-T42 3-D model and a global 3-D Model for Atmospheric Transport and Chemistry (MATCH) which is based on analyzed observed wind fields (developed in collaboration by MIT and NCAR/Boulder), (3) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and, (4) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3-D models. Important goals include determination of regional source strengths of methane, nitrous oxide, and other climatically and chemically important biogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements and hydrohalocarbons used as alternatives to the restricted halocarbons.

  18. Information-geometric measures as robust estimators of connection strengths and external inputs.

    PubMed

    Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi

    2009-08-01

    Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
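
    For the two-neuron log-linear model referred to above, the order-1 and order-2 measures can be computed directly from the joint firing probabilities of binary spike patterns. The sketch below simulates a correlated neuron pair purely for illustration; it is not the authors' analytical derivation.

      # theta_1 and theta_2 (first order) and theta_12 (second order) from the
      # two-neuron log-linear model, estimated from simulated binary spike trains.
      import numpy as np

      rng = np.random.default_rng(2)
      n_bins = 50_000
      x1 = rng.random(n_bins) < 0.20                                          # neuron 1 spikes
      x2 = (rng.random(n_bins) < 0.15) | (x1 & (rng.random(n_bins) < 0.25))   # correlated neuron 2

      p11 = np.mean(x1 & x2); p10 = np.mean(x1 & ~x2)
      p01 = np.mean(~x1 & x2); p00 = np.mean(~x1 & ~x2)

      theta1  = np.log(p10 / p00)                 # first-order measure (external input, neuron 1)
      theta2  = np.log(p01 / p00)                 # first-order measure (external input, neuron 2)
      theta12 = np.log(p11 * p00 / (p10 * p01))   # second-order measure (connection strength)
      print(f"theta1 = {theta1:.2f}, theta2 = {theta2:.2f}, theta12 = {theta12:.2f}")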

  19. Fat Mass Is Positively Associated with Estimated Hip Bone Strength among Chinese Men Aged 50 Years and above with Low Levels of Lean Mass.

    PubMed

    Han, Guiyuan; Chen, Yu-Ming; Huang, Hua; Chen, Zhanyong; Jing, Lipeng; Xiao, Su-Mei

    2017-04-24

    This study investigated the relationships of fat mass (FM) and lean mass (LM) with estimated hip bone strength in Chinese men aged 50-80 years (median value: 62.0 years). A cross-sectional study including 889 men was conducted in Guangzhou, China. Body composition and hip bone parameters were generated by dual-energy X-ray absorptiometry (DXA). The relationships of the LM index (LMI) and the FM index (FMI) with bone phenotypes were examined using generalised additive models and multiple linear regression. The associations between the FMI and the bone variables within LMI tertiles were further analysed. The FMI showed a positive linear relationship with estimated hip bone strength after adjustment for potential confounders (p < 0.05). Linear relationships were also observed for the LMI with most bone phenotypes, except for the cross-sectional area (p < 0.05). The contribution of the LMI (4.0%-12.8%) was greater than that of the FMI (2.0%-5.7%). The associations between the FMI and bone phenotypes became weaker after controlling for LMI. Further analyses showed that estimated bone strength increased with FMI in the lowest LMI tertile (p < 0.05), but not in the subgroups with a higher LMI. This study suggested that LM played a critical role in bone health in middle-aged and elderly Chinese men, and that the maintenance of adequate FM could help to promote bone acquisition in relatively thin men.

  20. Estimation of ground motion for Bhuj (26 January 2001; Mw 7.6) and for future earthquakes in India

    USGS Publications Warehouse

    Singh, S.K.; Bansal, B.K.; Bhattacharya, S.N.; Pacheco, J.F.; Dattatrayam, R.S.; Ordaz, M.; Suresh, G.; ,; Hough, S.E.

    2003-01-01

    Only five moderate and large earthquakes (Mw ≥ 5.7) in India (three in the Indian shield region and two in the Himalayan arc region) have given rise to multiple strong ground-motion recordings. Near-source data are available for only two of these events. The Bhuj earthquake (Mw 7.6), which occurred in the shield region, gave rise to useful recordings at distances exceeding 550 km. Because of the scarcity of the data, we use the stochastic method to estimate ground motions. We assume that (1) S waves dominate at R < 100 km and Lg waves at R ≥ 100 km, (2) Q = 508f^0.48 is valid for the Indian shield as well as the Himalayan arc region, (3) the effective duration is given by fc^-1 + 0.05R, where fc is the corner frequency and R is the hypocentral distance in kilometers, and (4) the acceleration spectra are sharply cut off beyond 35 Hz. We use two finite-source stochastic models. One is an approximate model that reduces to the ω^2-source model at distances greater than about twice the source dimension. This model has the advantage that the ground motion is controlled by the familiar stress parameter, Δσ. In the other finite-source model, which is more reliable for near-source ground-motion estimation, the high-frequency radiation is controlled by the strength factor, sfact, a quantity that is physically related to the maximum slip rate on the fault. We estimate the Δσ needed to fit the observed Amax and Vmax data of each earthquake (which are mostly in the far field). The corresponding sfact is obtained by requiring that the predicted curves from the two models match each other in the far field up to a distance of about 500 km. The results show: (1) The Δσ that explains the Amax data for shield events may be a function of depth, increasing from about 50 bars at 10 km to about 400 bars at 36 km. The corresponding sfact values range from 1.0-2.0. The Δσ values for the two Himalayan arc events are 75 and 150 bars (sfact = 1.0 and 1.4). (2) The Δσ required to explain the Vmax data is, roughly, half the corresponding value for Amax, while the same sfact explains both sets of data. (3) The available far-field Amax and Vmax data for the Bhuj mainshock are well explained by Δσ = 200 and 100 bars, respectively, or, equivalently, by sfact = 1.4. The predicted Amax and Vmax in the epicentral region of this earthquake are 0.80 to 0.95 g and 40 to 55 cm/sec, respectively.
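
    A worked sketch of the point-source (ω^2) spectrum underlying the stochastic method is shown below, together with the fc^-1 + 0.05R duration rule quoted in the abstract. It is the standard engineering form, not the paper's finite-source models, and the medium constants are typical assumed values.

      # Omega-squared acceleration source spectrum with a Brune corner frequency,
      # cgs-based engineering form with typical assumed constants; the shape and
      # corner frequency, not the absolute amplitude, are the point of the sketch.
      import numpy as np

      def brune_acceleration_spectrum(f, m0_dyne_cm, stress_bars, beta_km_s=3.5,
                                      rho_g_cm3=2.8, r_km=100.0):
          """Return (|A(f)|, fc) for the far-field omega-squared point-source model."""
          fc = 4.9e6 * beta_km_s * (stress_bars / m0_dyne_cm) ** (1.0 / 3.0)   # Hz
          c = 0.55 * 2.0 * 0.707 / (4.0 * np.pi * rho_g_cm3
                                    * (beta_km_s * 1e5) ** 3 * (r_km * 1e5))
          return c * m0_dyne_cm * (2 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2), fc

      m0 = 10 ** (1.5 * 7.6 + 16.1)          # dyne-cm, Mw 7.6 (Hanks-Kanamori)
      f = np.logspace(-2, 1.5, 200)
      spec, fc = brune_acceleration_spectrum(f, m0, stress_bars=200.0)

      duration = 1.0 / fc + 0.05 * 100.0     # s, at R = 100 km, per the rule quoted above
      print(f"fc = {fc:.3f} Hz, effective duration at 100 km ~= {duration:.0f} s")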

  1. Contribution of trochanteric soft tissues to fall force estimates, the factor of risk, and prediction of hip fracture risk.

    PubMed

    Bouxsein, Mary L; Szulc, Pawel; Munoz, Fracoise; Thrall, Erica; Sornay-Rendu, Elizabeth; Delmas, Pierre D

    2007-06-01

    We compared trochanteric soft tissue thickness, femoral aBMD, and the ratio of fall force to femoral strength (i.e., factor of risk) in 21 postmenopausal women with incident hip fracture and 42 age-matched controls. Reduced trochanteric soft tissue thickness, low femoral aBMD, and increased ratio of fall force to femoral strength (i.e., factor of risk) were associated with increased risk of hip fracture. The contribution of trochanteric soft tissue thickness to hip fracture risk is incompletely understood. A biomechanical approach to assessing hip fracture risk that compares forces applied to the hip during a sideways fall to femoral strength may be improved by incorporating the force-attenuating effects of trochanteric soft tissues. We determined the relationship between femoral areal BMD (aBMD) and femoral failure load in 49 human cadaveric specimens, 53-99 yr of age. We compared femoral aBMD, trochanteric soft tissue thickness, and the ratio of fall forces to bone strength (i.e., the factor of risk for hip fracture, phi), before and after accounting for the force-attenuating properties of trochanteric soft tissue in 21 postmenopausal women with incident hip fracture and 42 age-matched controls. Femoral aBMD correlated strongly with femoral failure load (r2 = 0.73-0.83). Age, height, and weight did not differ; however, women with hip fracture had lower total femur aBMD (OR = 2.06; 95% CI, 1.19-3.56) and trochanteric soft tissue thickness (OR = 1.82; 95% CI, 1.01, 3.31). Incorporation of trochanteric soft tissue thickness measurements reduced the estimates of fall forces by approximately 50%. After accounting for force-attenuating properties of trochanteric soft tissue, the ratio of fall forces to femoral strength was 50% higher in cases than controls (0.92 +/- 0.44 versus 0.65 +/- 0.50, respectively; p = 0.04). It is possible to compute a biomechanically based estimate of hip fracture risk by combining estimates of femoral strength based on an empirical relationship between femoral aBMD and bone strength in cadaveric femora, along with estimates of loads applied to the hip during a sideways fall that account for thickness of trochanteric soft tissues. Our findings suggest that trochanteric soft tissue thickness may influence hip fracture risk by attenuating forces applied to the femur during a sideways fall and provide rationale for developing improved measurements of trochanteric soft tissue and for studying a larger cohort to determine whether trochanteric soft tissue thickness contributes to hip fracture risk independently of aBMD.
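
    A hedged sketch of the factor-of-risk computation follows: femoral strength is taken from a placeholder linear regression on aBMD (not the cadaveric regression in the paper), the fall force from a simple single-spring impact model, and the applied force is reduced by an assumed attenuation per mm of trochanteric soft tissue before forming phi = applied force / strength. All coefficients are assumptions for illustration.

      # Factor of risk phi with soft-tissue attenuation; all coefficients are
      # placeholder assumptions, not the study's fitted values.
      import numpy as np

      def fall_force(body_mass_kg, height_m, k_eff=7.1e4):
          """Peak impact force (N) of a sideways fall from standing, single-spring model."""
          h_m = 0.51 * height_m                    # assumed drop height of the hip centre
          v = np.sqrt(2.0 * 9.81 * h_m)            # impact velocity
          return v * np.sqrt(k_eff * body_mass_kg) # peak force of an undamped spring impact

      def femoral_strength(abmd_g_cm2, a=-2000.0, b=9000.0):
          """Placeholder aBMD-to-failure-load regression (N)."""
          return a + b * abmd_g_cm2

      def factor_of_risk(body_mass_kg, height_m, abmd_g_cm2, soft_tissue_mm,
                         atten_n_per_mm=70.0):
          applied = max(fall_force(body_mass_kg, height_m)
                        - atten_n_per_mm * soft_tissue_mm, 0.0)
          return applied / femoral_strength(abmd_g_cm2)

      phi = factor_of_risk(body_mass_kg=65.0, height_m=1.60,
                           abmd_g_cm2=0.75, soft_tissue_mm=30.0)
      print(f"factor of risk phi = {phi:.2f}")   # phi > 1: applied force exceeds strength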

  2. PAHFIT: Properties of PAH Emission

    NASA Astrophysics Data System (ADS)

    Smith, J. D.; Draine, Bruce

    2012-10-01

    PAHFIT is an IDL tool for decomposing Spitzer IRS spectra of PAH emission sources, with a special emphasis on the careful recovery of ambiguous silicate absorption, and weak, blended dust emission features. PAHFIT is primarily designed for use with full 5-35 micron Spitzer low-resolution IRS spectra. PAHFIT is a flexible tool for fitting spectra, and you can add or disable features, compute combined flux bands, change fitting limits, etc., without changing the code. PAHFIT uses a simple, physically-motivated model, consisting of starlight, thermal dust continuum in a small number of fixed temperature bins, resolved dust features and feature blends, prominent emission lines (which themselves can be blended with dust features), as well as simple fully-mixed or screen dust extinction, dominated by the silicate absorption bands at 9.7 and 18 microns. Most model components are held fixed or are tightly constrained. PAHFIT uses Drude profiles to recover the full strength of dust emission features and blends, including the significant power in the wings of the broad emission profiles. This means the resulting feature strengths are larger (by factors of 2-4) than are recovered by methods which estimate the underlying continuum using line segments or spline curves fit through fiducial wavelength anchors.
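
    To illustrate why the Drude profiles used by PAHFIT recover more feature power than narrow-profile or spline-anchored estimates, the sketch below compares the integrated power of a Drude and a Gaussian profile with equal peak and FWHM; the feature parameters are arbitrary examples, not PAHFIT defaults.

      # Drude vs Gaussian integrated power at equal peak and FWHM, showing the extra
      # power carried in the broad Drude wings.  Parameters are arbitrary examples.
      import numpy as np

      def drude(lam, lam0, frac_fwhm, peak):
          """Drude profile in wavelength; frac_fwhm is FWHM / lam0."""
          x = lam / lam0 - lam0 / lam
          return peak * frac_fwhm**2 / (x**2 + frac_fwhm**2)

      def gaussian(lam, lam0, frac_fwhm, peak):
          sigma = frac_fwhm * lam0 / 2.3548
          return peak * np.exp(-0.5 * ((lam - lam0) / sigma) ** 2)

      lam = np.linspace(5.0, 20.0, 20000)                   # microns, evenly spaced
      d = drude(lam, lam0=11.3, frac_fwhm=0.032, peak=1.0)
      g = gaussian(lam, lam0=11.3, frac_fwhm=0.032, peak=1.0)
      print(f"Drude/Gaussian integrated power at equal peak and FWHM: {d.sum() / g.sum():.2f}")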

  3. Influence of Screw Length and Bone Thickness on the Stability of Temporary Implants

    PubMed Central

    Fernandes, Daniel Jogaib; Elias, Carlos Nelson; Ruellas, Antônio Carlos de Oliveira

    2015-01-01

    The purpose of this work was to study the influence of screw length and bone thickness on the stability of temporary implants. A total of 96 self-drilling temporary screws with two different lengths were inserted into polyurethane blocks (n = 66), bovine femurs (n = 18) and rabbit tibia (n = 12) with different cortical thicknesses (1 to 8 mm). Screw insertion into polyurethane blocks was assisted by a universal testing machine, torque peaks were collected by a digital torquemeter, and bone thickness was monitored by micro-CT. The results showed that the insertion torque increased significantly with the thickness of cortical bone from polyurethane (p < 0.0001), bovine (p = 0.0035) and rabbit (p < 0.05) sources. Cancellous bone significantly improved the mechanical implant stability. Insertion torque and insertion strength were successfully modeled by equations based on the cortical/cancellous bone behavior. Based on the results, insertion torque and bone strength can be estimated in order to prevent failure of the cortical layer during temporary screw placement. The stability provided by a cortical thickness of 2 or 1 mm coupled to cancellous bone was deemed sufficient for temporary implant stability. PMID:28793582

  4. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
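
    The sensitivity described here can be demonstrated with a small sketch: reliability P(strength > stress) is evaluated in closed form for normal stress and strength, and by Monte Carlo when a Weibull with nearly the same mean and spread is assumed for strength instead. The parameters are illustrative, chosen only so the two strength models look nearly alike.

      # Stress-strength reliability under two nearly indistinguishable strength models.
      # Parameters are illustrative assumptions.
      import numpy as np
      from scipy import stats

      mu_stress, sd_stress = 400.0, 30.0         # MPa
      mu_str,    sd_str    = 600.0, 40.0         # MPa

      # Normal strength, normal stress: closed form.
      r_normal = stats.norm.cdf((mu_str - mu_stress) / np.hypot(sd_str, sd_stress))

      # Weibull strength with (almost) the same mean and spread: Monte Carlo.
      shape = 18.0
      scale = mu_str / stats.weibull_min(shape).mean()
      rng = np.random.default_rng(3)
      strength = stats.weibull_min(shape, scale=scale).rvs(1_000_000, random_state=rng)
      stress = rng.normal(mu_stress, sd_stress, 1_000_000)
      r_weibull = np.mean(strength > stress)

      print(f"reliability: normal strength {r_normal:.6f}, Weibull strength {r_weibull:.6f}")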

  5. An Evaluation of the Measurement Requirements for an In-Situ Wake Vortex Detection System

    NASA Technical Reports Server (NTRS)

    Fuhrmann, Henri D.; Stewart, Eric C.

    1996-01-01

    Results of a numerical simulation are presented to determine the feasibility of estimating the location and strength of a wake vortex from imperfect in-situ measurements. These estimates could be used to provide information to a pilot on how to avoid a hazardous wake vortex encounter. An iterative algorithm based on the method of secants was used to solve the four simultaneous equations describing the two-dimensional flow field around a pair of parallel counter-rotating vortices of equal and constant strength. The flow field information used by the algorithm could be derived from measurements from flow angle sensors mounted on the wing-tip of the detecting aircraft and an inertial navigation system. The study determined the propagated errors in the estimated location and strength of the vortex which resulted from random errors added to theoretically perfect measurements. The results are summarized in a series of charts and a table which make it possible to estimate these propagated errors for many practical situations. The situations include several generator-detector airplane combinations, different distances between the vortex and the detector airplane, as well as different levels of total measurement error.
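
    The forward model behind this estimation problem is the 2-D velocity field induced by a counter-rotating vortex pair of equal, constant strength; the paper's secant iteration adjusts four unknowns (lateral and vertical position, span, and circulation) until this model reproduces the wing-tip flow-angle measurements. Only the forward model is sketched below, with invented geometry and circulation.

      # 2-D induced velocity of a counter-rotating vortex pair and the flow angles two
      # wing-tip sensors would see.  All positions and the circulation are assumptions.
      import numpy as np

      def vortex_pair_velocity(x, z, xc, zc, span, gamma):
          """(u, w) induced at (x, z) by vortices of +/-gamma at (xc -/+ span/2, zc)."""
          u, w = 0.0, 0.0
          for sign, xv in ((+1.0, xc - span / 2.0), (-1.0, xc + span / 2.0)):
              dx, dz = x - xv, z - zc
              r2 = dx * dx + dz * dz
              u += -sign * gamma / (2.0 * np.pi) * dz / r2
              w += +sign * gamma / (2.0 * np.pi) * dx / r2
          return u, w

      airspeed = 70.0                                    # m/s of the detecting aircraft
      for sensor_x in (-15.0, +15.0):                    # sensor lateral positions, m
          _, w = vortex_pair_velocity(sensor_x, 0.0, xc=60.0, zc=-20.0,
                                      span=30.0, gamma=300.0)
          print(f"sensor at x={sensor_x:+.0f} m: flow angle ~ {np.degrees(w / airspeed):.2f} deg")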

  6. Ignition probability of polymer-bonded explosives accounting for multiple sources of material stochasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu

    2014-05-07

    Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.

  7. An Analysis of Open Source Security Software Products Downloads

    ERIC Educational Resources Information Center

    Barta, Brian J.

    2014-01-01

    Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…

  8. Merits of using color and shape differentiation to improve the speed and accuracy of drug strength identification on over-the-counter medicines by laypeople.

    PubMed

    Hellier, Elizabeth; Tucker, Mike; Kenny, Natalie; Rowntree, Anna; Edworthy, Judy

    2010-09-01

    This study aimed to examine the utility of using color and shape to differentiate drug strength information on over-the-counter medicine packages. Medication errors are an important threat to patient safety, and confusions between drug strengths are a significant source of medication error. A visual search paradigm required laypeople to search for medicine packages of a particular strength from among distracter packages of different strengths, and measures of reaction time and error were recorded. Using color to differentiate drug strength information conferred an advantage on search times and accuracy. Shape differentiation did not improve search times and had only a weak effect on search accuracy. Using color to differentiate drug strength information improves drug strength identification performance. Color differentiation of drug strength information may be a useful way of reducing medication errors and improving patient safety.

  9. Estimating apparent maximum muscle stress of trunk extensor muscles in older adults using subject-specific musculoskeletal models.

    PubMed

    Burkhart, Katelyn A; Bruno, Alexander G; Bouxsein, Mary L; Bean, Jonathan F; Anderson, Dennis E

    2018-01-01

    Maximum muscle stress (MMS) is a critical parameter in musculoskeletal modeling, defining the maximum force that a muscle of given size can produce. However, a wide range of MMS values have been reported in literature, and few studies have estimated MMS in trunk muscles. Due to widespread use of musculoskeletal models in studies of the spine and trunk, there is a need to determine reasonable magnitude and range of trunk MMS. We measured trunk extension strength in 49 participants over 65 years of age, surveyed participants about low back pain, and acquired quantitative computed tomography (QCT) scans of their lumbar spines. Trunk muscle morphology was assessed from QCT scans and used to create a subject-specific musculoskeletal model for each participant. Model-predicted extension strength was computed using a trunk muscle MMS of 100 N/cm². The MMS of each subject-specific model was then adjusted until the measured strength matched the model-predicted strength (±20 N). We found that measured trunk extension strength was significantly higher in men. With the initial constant MMS value, the musculoskeletal model generally over-predicted trunk extension strength. By adjusting MMS on a subject-specific basis, we found apparent MMS values ranging from 40 to 130 N/cm², with an average of 75.5 N/cm² for both men and women. Subjects with low back pain had lower apparent MMS than subjects with no back pain. This work incorporates a unique approach to estimate subject-specific trunk MMS values via musculoskeletal modeling and provides a useful insight into MMS variation. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 36:498-505, 2018.
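
    The subject-specific calibration loop described here can be sketched as a simple bisection on MMS until the predicted strength matches the measurement within 20 N. The prediction function below is a stand-in for the actual musculoskeletal model, with assumed area and moment-arm values.

      # Bisection calibration of MMS against a measured extension strength.
      # predicted_strength is a stand-in model; its parameters are assumptions.
      def predicted_strength(mms, muscle_area_cm2=38.0, moment_arm_factor=0.55):
          """Stand-in for the model-predicted trunk extension strength (N) at a given MMS."""
          return mms * muscle_area_cm2 * moment_arm_factor

      def calibrate_mms(measured_n, lo=10.0, hi=300.0, tol_n=20.0):
          """Bisect on MMS (N/cm^2) until predicted strength matches measurement +/- tol."""
          for _ in range(60):                      # more than enough iterations
              mid = 0.5 * (lo + hi)
              err = predicted_strength(mid) - measured_n
              if abs(err) <= tol_n:
                  break
              lo, hi = (lo, mid) if err > 0 else (mid, hi)
          return mid

      mms = calibrate_mms(measured_n=1450.0)
      print(f"apparent maximum muscle stress ~= {mms:.0f} N/cm^2")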

  10. Influence of Composition and Deformation Conditions on the Strength and Brittleness of Shale Rock

    NASA Astrophysics Data System (ADS)

    Rybacki, E.; Reinicke, A.; Meier, T.; Makasi, M.; Dresen, G. H.

    2015-12-01

    Stimulation of shale gas reservoirs by hydraulic fracturing operations aims to increase the production rate by increasing the rock surface connected to the borehole. Prospective shales are often believed to display high strength and brittleness, to decrease the breakdown pressure required to (re-)initiate a fracture, as well as slow healing of natural and hydraulically induced fractures, to increase the lifetime of the fracture network. Laboratory deformation tests were performed on several, mainly European, black shales with different mineralogical composition, porosity and maturity at ambient and elevated pressures and temperatures. Mechanical properties such as compressive strength and elastic moduli strongly depend on shale composition, porosity, water content, structural anisotropy, and on pressure (P) and temperature (T) conditions, but less on strain rate. We observed a transition from brittle to semibrittle deformation at high P-T conditions, in particular for high-porosity shales. At given P-T conditions, the variation of compressive strength and Young's modulus with composition can be roughly estimated from the volumetric proportion of all components, including organic matter and pores. We also determined brittleness index values based on pre-failure deformation behavior, Young's modulus and bulk composition. At low P-T conditions, where samples showed pronounced post-failure weakening, brittleness may be empirically estimated from bulk composition or Young's modulus. Similar to strength, at given P-T conditions, brittleness depends on the fraction of all components and not on the amount of a specific component, e.g. clays, alone. Besides strength and brittleness, knowledge of the long-term creep properties of shales is required to estimate in-situ stress anisotropy and the healing of (propped) hydraulic fractures.

  11. Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping

    PubMed Central

    2011-01-01

    Background: Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. Results: In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Conclusions: Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset. PMID:21978359

  12. Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping.

    PubMed

    Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C

    2011-10-06

    Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset.

  13. Estimating roadside encroachment rates with the combined strengths of accident- and encroachment-based approaches

    DOT National Transportation Integrated Search

    2001-09-01

    In two recent studies by Miaou, he proposed a method to estimate vehicle roadside encroachment rates using accident-based models. He further illustrated the use of this method to estimate roadside encroachment rates for rural two-lane undivided roads...

  14. Impact tensile properties and strength development mechanism of glass for reinforcement fiber

    NASA Astrophysics Data System (ADS)

    Kim, T.; Oshima, K.; Kawada, H.

    2013-07-01

    In this study, the impact tensile properties of E-glass were investigated by fiber bundle testing under a high strain rate. The impact tests were performed with two types of apparatus: a tension-type split Hopkinson pressure bar system and a universal high-speed tensile-testing machine. It was found that not only the tensile strength but also the fracture strain of E-glass fiber improved with strain rate, and the absorbed strain energy of this material increased significantly. It was also found that the degree of strain rate dependency of the tensile strength of E-glass fibers varied with fiber diameter. Regarding the strain rate dependency of the glass fiber under tensile loading, a change in small-crack propagation behaviour was considered in order to explain the development of the fiber strength. The tensile fiber strength was estimated by numerical simulation based on the slow crack-growth (SCG) model. Through a parametric study of the coefficient of the crack propagation rate, numerical estimates were obtained for the various testing conditions. It was concluded that slow crack-growth behaviour in the glass fiber is essential to the increase in the strength of this material.

  15. Character and dealing with laughter: the relation of self- and peer-reported strengths of character with gelotophobia, gelotophilia, and katagelasticism.

    PubMed

    Proyer, René T; Wellenzohn, Sara; Ruch, Willibald

    2014-01-01

    We hypothesized that gelotophobia (the fear of being laughed at), gelotophilia (the joy of being laughed at), and katagelasticism (the joy of laughing at others) relate differently to character strengths. In Study 1 (N = 5,134), self-assessed gelotophobia was primarily negatively related to strengths (especially to lower hope, zest, and love), whereas only modesty yielded positive relations. Gelotophilia demonstrated mainly positive relations with humor, zest, and social intelligence. Katagelasticism was largely unrelated to character strengths, with humor demonstrating the comparatively highest coefficients. Study 2 consisted of N = 249 participants who provided self- and peer-ratings of strengths and self-reports on the three dispositions. The results converged well with those from Study 1. When comparing self- and peer-reports, those higher in gelotophobia under-estimated and those higher in gelotophilia over-estimated their virtuousness, whereas those higher in katagelasticism seemed to have a realistic appraisal of their strengths. Peer-rated (low) hope and modesty contributed to the prediction of gelotophobia beyond self-reports. The same was true for low modesty, creativity, low bravery, and authenticity for gelotophilia, and for low love of learning regarding katagelasticism. Results suggest that there is a stable relation between the way people deal with ridicule and laughter and their virtuousness.

  16. Bone volume fraction and structural parameters for estimation of mechanical stiffness and failure load of human cancellous bone samples; in-vitro comparison of ultrasound transit time spectroscopy and X-ray μCT.

    PubMed

    Alomari, Ali Hamed; Wille, Marie-Luise; Langton, Christian M

    2018-02-01

    Conventional mechanical testing is the 'gold standard' for assessing the stiffness (N mm⁻¹) and strength (MPa) of bone, although it is not applicable in-vivo since it is inherently invasive and destructive. The mechanical integrity of a bone is determined by its quantity and quality; being related primarily to bone density and structure respectively. Several non-destructive, non-invasive, in-vivo techniques have been developed and clinically implemented to estimate bone density, both areal (dual-energy X-ray absorptiometry (DXA)) and volumetric (quantitative computed tomography (QCT)). Quantitative ultrasound (QUS) parameters of velocity and attenuation are dependent upon both bone quantity and bone quality, although it has not been possible to date to transpose one particular QUS parameter into separate estimates of quantity and quality. It has recently been shown that ultrasound transit time spectroscopy (UTTS) may provide an accurate estimate of bone density and hence quantity. We hypothesised that UTTS also has the potential to provide an estimate of bone structure and hence quality. In this in-vitro study, 16 human femoral bone samples were tested utilising three techniques; UTTS, micro computed tomography (μCT), and mechanical testing. UTTS was utilised to estimate bone volume fraction (BV/TV) and two novel structural parameters, inter-quartile range of the derived transit time (UTTS-IQR) and the transit time of maximum proportion of sonic-rays (TTMP). μCT was utilised to derive BV/TV along with several bone structure parameters. A destructive mechanical test was utilised to measure the stiffness and strength (failure load) of the bone samples. BV/TV was calculated from the derived transit time spectrum (TTS); the correlation coefficient (R²) with μCT-BV/TV was 0.885. For predicting mechanical stiffness and strength, BV/TV derived by both μCT and UTTS provided the strongest correlation with mechanical stiffness (R² = 0.567 and 0.618 respectively) and mechanical strength (R² = 0.747 and 0.736 respectively). When respective structural parameters were incorporated to BV/TV, multiple regression analysis indicated that none of the μCT histomorphometric parameters could improve the prediction of mechanical stiffness and strength, while for UTTS, adding TTMP to BV/TV increased the prediction of mechanical stiffness to R² = 0.711 and strength to R² = 0.827. It is therefore envisaged that UTTS may have the ability to estimate BV/TV along with providing an improved prediction of osteoporotic fracture risk, within routine clinical practice in the future. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Neutron monitoring and electrode calorimetry experiments in the HIP-1 Hot Ion Plasma

    NASA Technical Reports Server (NTRS)

    Reinmann, J. J.; Layman, R. W.

    1977-01-01

    Results are presented for two diagnostic procedures on HIP-1: neutron diagnostics to determine where neutrons originated within the plasma discharge chamber and electrode calorimetry to measure the steady-state power absorbed by the two anodes and cathodes. Results are also reported for a hot-ion plasma formed with a continuous-cathode rod, one that spans the full length of the test section, in place of the two hollow cathodes. The outboard neutron source strength increased relative to that at the midplane when (1) the cathode tips were moved farther outboard, (2) the anode diameters were increased, and (3) one of the anodes was removed. The distribution of neutron sources within the plasma discharge chamber was insensitive to the division of current between the two cathodes. For the continuous cathode, increasing the discharge current increased the midplane neutron source strength relative to the outboard source strength. Each cathode absorbed from 12 to 15 percent of the input power regardless of the division of current between the cathodes. The anodes absorbed from 20 to 40 percent of the input power. The division of power absorption between the anodes varied with plasma operating conditions and electrode placement.

  18. Information spreading by a combination of MEG source estimation and multivariate pattern classification.

    PubMed

    Sato, Masashi; Yamashita, Okito; Sato, Masa-Aki; Miyawaki, Yoichi

    2018-01-01

    To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of "information spreading" may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined.

  19. Information spreading by a combination of MEG source estimation and multivariate pattern classification

    PubMed Central

    Sato, Masashi; Yamashita, Okito; Sato, Masa-aki

    2018-01-01

    To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of “information spreading” may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined. PMID:29912968

  20. Large-scale, near-Earth magnetic fields from external sources and the corresponding induced internal field

    NASA Technical Reports Server (NTRS)

    Langel, R. A.; Estes, R. H.

    1983-01-01

    Data from MAGSAT were analyzed as a function of the Dst index to determine the first degree/order spherical harmonic description of the near-Earth external field and its corresponding induced field. The analysis was done separately for data from dawn and dusk. The MAGSAT data were compared with POGO data. A local time variation of the external field persists even during very quiet magnetic conditions; both a diurnal and an 8-hour period are present. A crude estimate of the Sq current in the 45 deg geomagnetic latitude range is obtained for 1966 to 1970. The current strength, located in the ionosphere and induced in the Earth, is typical of earlier determinations from surface data, although its maximum is displaced in local time from previous results.

  1. Large-scale, near-field magnetic fields from external sources and the corresponding induced internal field

    NASA Technical Reports Server (NTRS)

    Langel, R. A.; Estes, R. H.

    1985-01-01

    Data from Magsat were analyzed as a function of the Dst index to determine the first degree/order spherical harmonic description of the near-earth external field and its corresponding induced field. The analysis was done separately for data from dawn and dusk. The Magsat data were compared with POGO data. A local time variation of the external field persists even during very quiet magnetic conditions; both a diurnal and an 8-hour period are present. A crude estimate of the Sq current in the 45 deg geomagnetic latitude range is obtained for 1966 to 1970. The current strength, located in the ionosphere and induced in the earth, is typical of earlier determinations from surface data, although its maximum is displaced in local time from previous results.

  2. An experimental study of search in global social networks.

    PubMed

    Dodds, Peter Sheridan; Muhamad, Roby; Watts, Duncan J

    2003-08-08

    We report on a global social-search experiment in which more than 60,000 e-mail users attempted to reach one of 18 target persons in 13 countries by forwarding messages to acquaintances. We find that successful social search is conducted primarily through intermediate to weak strength ties, does not require highly connected "hubs" to succeed, and, in contrast to unsuccessful social search, disproportionately relies on professional relationships. By accounting for the attrition of message chains, we estimate that social searches can reach their targets in a median of five to seven steps, depending on the separation of source and target, although small variations in chain lengths and participation rates generate large differences in target reachability. We conclude that although global social networks are, in principle, searchable, actual success depends sensitively on individual incentives.

  3. Lifetimes and f-values of the D 2Σ- ← X 2Π system of OH and OD

    NASA Astrophysics Data System (ADS)

    Heays, Alan; de Oliveira, Nelson; Gans, Bérenger; Ito, Kenji; Boyé-Péronne, Séverine; Douin, Stéphane; Hickson, Kevin; Nahon, Laurent; Loison, Jean-Christophe

    2017-10-01

    The OH radical is abundant in the interstellar medium and cometary comae, where it plays a significant role in the photochemical cycle of water. The oxidising potential of the Earth's atmosphere is also influenced by this molecule. The OH lifetime in the presence of ultraviolet radiation is of prime interest in all these locations. The vacuum-ultraviolet absorption of the D 2Σ- ← X 2Π system contributes to a reduction of this lifetime. It also provides an independent way to observe the OH molecule in the interstellar medium, but a reliable oscillator strength (f-value) is needed. Vacuum-ultraviolet absorption of the D 2Σ- ← X 2Π system of OH and OD was recorded with high spectral resolution in a plasma-discharge radical source, using synchrotron radiation coupled to the unique ultraviolet Fourier-transform spectrometer on the DESIRS beamline of synchrotron SOLEIL. Line oscillator strengths are absolutely calibrated with respect to the well-known A 2Σ+ ← X 2Π system. The new oscillator strength decreases the best-estimate lifetime of OH in an interstellar radiation field and reduces its uncertainty. We also measured line broadening of the excited D 2Σ- v=0 and 1 levels for the first time and find a lifetime for these states that is 5 times shorter than theoretically predicted. These new data will aid in the interpretation of astronomical observations and help improve photochemical models in many contexts.

  4. Unequal-Strength Source zROC Slopes Reflect Criteria Placement and Not (Necessarily) Memory Processes

    ERIC Educational Resources Information Center

    Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.

    2013-01-01

    Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose 2 decision mechanisms that can produce the slope…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nitao, J J

    The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input. The methodology is sufficiently general to allow various forms of data; in this report, the authors focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are MCMC (Markov Chain Monte Carlo) and SMC (Sequential Monte Carlo) methods, classical inversion methods, and hybrids of these; the reader is referred to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (locations and strength history). It is anticipated that the largest portion of the CPU time will be spent on this computation, and MCMC and SMC will require it to be done at least tens of thousands of times. An efficient means of computing forward model predictions is therefore important to making the inversion practical. In this report the authors show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they compute plumes from unit impulse sources only; by linear superposition, they can then obtain the response for any strength history. This response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that they require the concentration at a single monitoring point x_m due to a potential (unit impulse) source located at x_s. Instead of computing a plume with source location x_s, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x_m. The reciprocal plume is computed using a reversed-direction wind field, with the wind field and transport coefficients appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are many fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
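
    A minimal numerical sketch of the superposition step described in this record: once the response at a monitor to a unit-impulse release (the forward Green's function) is known, the concentration for any proposed strength history follows by discrete convolution, so no new plume runs are needed. The impulse response and strength history below are hypothetical placeholders; the reciprocity step would analogously reuse a single "reciprocal plume" released at the monitor under a time-reversed wind field.

        # Concentration at one monitor via superposition of unit-impulse responses.
        import numpy as np

        def concentration_at_monitor(green, strength_history):
            """Discrete convolution of a unit-impulse Green's function with a
            proposed source strength history: c(t) = sum_k G(t - k) * q(k)."""
            full = np.convolve(green, strength_history)
            return full[:len(strength_history)]

        # Hypothetical numbers: impulse response decaying over five time steps,
        # a source that emits only during the first three steps.
        green = np.array([0.0, 0.4, 0.25, 0.1, 0.05])
        q = np.array([2.0, 2.0, 2.0, 0.0, 0.0, 0.0])
        print(concentration_at_monitor(green, q))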

  6. Investigation of Greenhouse Gas Emissions by Surface, Airborne, and Satellite on Local to Continental-Scale

    NASA Astrophysics Data System (ADS)

    Leifer, I.; Tratt, D. M.; Egland, E. T.; Gerilowski, K.; Vigil, S. A.; Buchwitz, M.; Krings, T.; Bovensmann, H.; Krautwurst, S.; Burrows, J. P.

    2013-12-01

    In situ meteorological observations, including 10-m winds (U), in conjunction with greenhouse gas (GHG - methane, carbon dioxide, water vapor) measurements by continuous wave Cavity Enhanced Absorption Spectroscopy (CEAS) were conducted onboard two specialized platforms: MACLab (Mobile Atmospheric Composition Laboratory in an RV) and AMOG Surveyor (AutoMObile Greenhouse gas) - a converted commuter automobile. AMOG Surveyor data were collected for numerous southern California sources including megacity, geology, fossil fuel industrial, animal husbandry, and landfill operations. MACLab investigated similar sources along with wetlands on a transcontinental scale from California to Florida to Nebraska covering more than 15,000 km. Custom software allowing real-time, multi-parameter data visualization (GHGs, water vapor, temperature, U, etc.) improved plume characterization and was applied to large urban area and regional-scale sources. The capabilities demonstrated permit calculation of source emission strength, as well as enable documenting microclimate variability. GHG transect data were compared with airborne HyperSpectral Imaging data to understand temporal and spatial variability and to ground-truth emission strength derived from airborne imagery. These data also were used to validate satellite GHG products from SCIAMACHY (2003-2005) and GOSAT (2009-2013) that are currently being analyzed to identify significant decadal-scale changes in North American GHG emission patterns resulting from changes in anthropogenic and natural sources. These studies lay the foundation for the joint ESA/NASA COMEX campaign that will map GHG plumes by remote sensing and in situ measurements for a range of strong sources to derive emission strength through inverse plume modeling. COMEX is in support of the future GHG monitoring satellites, such as CarbonSat and HyspIRI.

  7. Effect of solute interactions in columbium /Nb/ on creep strength

    NASA Technical Reports Server (NTRS)

    Klein, M. J.; Metcalfe, A. G.

    1973-01-01

    The creep strength of 17 ternary columbium (Nb)-base alloys was determined using an abbreviated measuring technique, and the results were analyzed to identify the contributions of solute interactions to creep strength. Isostrength creep diagrams and an interaction strengthening parameter, ST, were used to present and analyze data. It was shown that the isostrength creep diagram can be used to estimate the creep strength of untested alloys and to identify compositions with the most economical use of alloy elements. Positive values of ST were found for most alloys, showing that interaction strengthening makes an important contribution to the creep strength of these ternary alloys.

  8. Scale dependency of fracture energy and estimates thereof via dynamic rupture solutions with strong thermal weakening

    NASA Astrophysics Data System (ADS)

    Viesca, R. C.; Garagash, D.

    2013-12-01

    Seismological estimates of fracture energy show a scaling with the total slip of an earthquake [e.g., Abercrombie and Rice, GJI 2005]. Potential sources for this scale dependency are coseismic fault strength reductions that continue with increasing slip or an increasing amount of off-fault inelastic deformation with dynamic rupture propagation [e.g., Andrews, JGR 2005; Rice, JGR 2006]. Here, we investigate the former mechanism by solving for the slip dependence of fracture energy at the crack tip of a dynamically propagating rupture in which weakening takes place by strong reductions of friction via flash heating of asperity contacts and thermal pressurization of pore fluid leading to reductions in effective normal stress. Laboratory measurements of small characteristic slip evolution distances for friction (~10 μm at low slip rates of μm-mm/s, possibly up to 1 mm for slip rates near 0.1 m/s) [e.g., Marone and Kilgore, Nature 1993; Kohli et al., JGR 2011] imply that flash weakening of friction occurs at small slips before any significant thermal pressurization and may thus have a negligible contribution to the total fracture energy [Brantut and Rice, GRL 2011; Garagash, AGU 2011]. The subsequent manner of weakening under thermal pressurization (the dominant contributor to fracture energy) spans a range of behavior from the deformation of a finite-thickness shear zone in which diffusion is negligible (i.e., undrained-adiabatic) to that in which large-scale diffusion obscures the existence of a thin shear zone and thermal pressurization effectively occurs by the heating of slip on a plane. Separating the contribution of flash heating, the dynamic rupture solutions reduce to a problem with a single parameter, which is the ratio of the undrained-adiabatic slip-weakening distance (δc) to the characteristic slip-on-a-plane slip-weakening distance (L*). However, for any value of the parameter, there are two end-member scalings of the fracture energy: for small slip, the undrained-adiabatic behavior expectedly results in fracture energy scaling as G ~ δ^2, and for large slip (where TP approaches slip on a plane) we find that G ~ δ^(2/3). This last result is a slight correction to estimates made assuming a constant, kinematically imposed slip rate and slip-on-a-plane TP resulting in G ~ δ^(1/2) [Rice, JGR 2006]. We compile fracture energy estimates of both continental and subduction zone earthquakes. In doing so, we incorporate independent estimates of fault prestress to distinguish fracture energy G from the parameter G' defined by Abercrombie and Rice [2005], which represents the energetic quantity that is most directly inferred following seismological estimates of radiated energy, seismic moment and source radius. We find that the dynamic rupture solutions (which account for the variable manner of thermal pressurization and result in a self-consistent slip rate history) allow for a close match of the estimated fracture energy over several orders of total event slip, further supporting the proposed explanation that fracture energy scaling may largely be attributed to a fault strength that weakens gradually with slip, and additionally, the potential prevalence of thermal pressurization.
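
    Restated compactly, the end-member scalings of fracture energy G with slip δ quoted in this record are

        \[
          G \sim \delta^{2} \quad \text{(small slip: undrained--adiabatic weakening)},
          \qquad
          G \sim \delta^{2/3} \quad \text{(large slip: slip-on-a-plane limit)},
        \]

    compared with the earlier constant-slip-rate estimate \(G \sim \delta^{1/2}\) [Rice, JGR 2006].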

  9. Absolute measurement of LDR brachytherapy source emitted power: Instrument design and initial measurements.

    PubMed

    Malin, Martha J; Palmer, Benjamin R; DeWerd, Larry A

    2016-02-01

    Energy-based source strength metrics may find use with model-based dose calculation algorithms, but no instruments exist that can measure the energy emitted from low-dose rate (LDR) sources. This work developed a calorimetric technique for measuring the power emitted from encapsulated low-dose rate, photon-emitting brachytherapy sources. This quantity is called emitted power (EP). The measurement methodology, instrument design and performance, and EP measurements made with the calorimeter are presented in this work. A calorimeter operating with a liquid helium thermal sink was developed to measure EP from LDR brachytherapy sources. The calorimeter employed an electrical substitution technique to determine the power emitted from the source. The calorimeter's performance and thermal system were characterized. EP measurements were made using four (125)I sources with air-kerma strengths ranging from 2.3 to 5.6 U and corresponding EPs of 0.39-0.79 μW, respectively. Three Best Medical 2301 sources and one Oncura 6711 source were measured. EP was also computed by converting measured air-kerma strengths to EPs through Monte Carlo-derived conversion factors. The measured EP and derived EPs were compared to determine the accuracy of the calorimeter measurement technique. The calorimeter had a noise floor of 1-3 nW and a repeatability of 30-60 nW. The calorimeter was stable to within 5 nW over a 12 h measurement window. All measured values agreed with derived EPs to within 10%, with three of the four sources agreeing to within 4%. Calorimeter measurements had uncertainties ranging from 2.6% to 4.5% at the k = 1 level. The values of the derived EPs had uncertainties ranging from 2.9% to 3.6% at the k = 1 level. A calorimeter capable of measuring the EP from LDR sources has been developed and validated for (125)I sources with EPs between 0.43 and 0.79 μW.

  10. Life estimation and analysis of dielectric strength, hydrocarbon backbone and oxidation of high voltage multi stressed EPDM composites

    NASA Astrophysics Data System (ADS)

    Khattak, Abraiz; Amin, Muhammad; Iqbal, Muhammad; Abbas, Naveed

    2018-02-01

    Micro- and nanocomposites of ethylene propylene diene monomer (EPDM) have recently been studied for different characteristics. Life estimation and the effects of multiple stresses on dielectric strength, backbone scission, and oxidation are also vital for endorsing these composites for high-voltage insulation and other outdoor applications. To achieve these goals, unfilled EPDM and its micro- and nanocomposites were prepared at 23 phr microsilica and 6 phr nanosilica loadings, respectively. The prepared samples were energized at 2.5 kV AC and subjected for a long time to heat, ultraviolet radiation, acid rain, humidity, and salt fog in an accelerated manner in the laboratory. Dielectric strength, leakage current, and the intensities of the saturated backbone and carbonyl groups were periodically measured. Loss in dielectric strength, increase in leakage current, and backbone degradation and oxidation were observed in all samples. These effects were least in the EPDM nanocomposite, which also demonstrated the longest shelf life.

  11. The 'Arm Force Field' method to predict manual arm strength based on only hand location and force direction.

    PubMed

    La Delfa, Nicholas J; Potvin, Jim R

    2017-03-01

    This paper describes the development of a novel method (termed the 'Arm Force Field' or 'AFF') to predict manual arm strength (MAS) for a wide range of body orientations, hand locations and any force direction. This method used an artificial neural network (ANN) to predict the effects of hand location and force direction on MAS, and included a method to estimate the contribution of the arm's weight to the predicted strength. The AFF method predicted the MAS values very well (r² = 0.97, RMSD = 5.2 N, n = 456) and maintained good generalizability with external test data (r² = 0.842, RMSD = 13.1 N, n = 80). The AFF can be readily integrated within any DHM ergonomics software, and appears to be a more robust, reliable and valid method of estimating the strength capabilities of the arm, when compared to current approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
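
    A minimal sketch of the general approach this record describes, an ANN regression from hand location and force direction to a strength estimate with an added arm-weight term; the synthetic training data, network size, and weight term are illustrative assumptions, not the published AFF model:

        # Illustrative ANN regression of manual arm strength (MAS) from hand location
        # (x, y, z) and unit force direction (ux, uy, uz), plus a crude arm-weight term.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        hand = rng.uniform(-1, 1, (500, 3))                 # synthetic hand locations (m)
        direction = rng.standard_normal((500, 3))
        direction /= np.linalg.norm(direction, axis=1, keepdims=True)
        X = np.hstack([hand, direction])
        # Synthetic "measured" strengths (N) with some dependence on location and direction.
        y = 60 + 20 * hand[:, 0] - 15 * direction[:, 2] + 5 * rng.standard_normal(500)

        ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)

        def predict_mas(hand_xyz, force_dir, arm_weight_n=35.0):
            """ANN prediction plus an assumed arm-weight assist along gravity (-z)."""
            u = np.asarray(force_dir, float)
            u = u / np.linalg.norm(u)
            base = ann.predict(np.hstack([hand_xyz, u]).reshape(1, -1))[0]
            return base + arm_weight_n * (-u[2])            # downward efforts gain, upward lose

        print(round(predict_mas([0.3, 0.1, 0.2], [0.0, 0.0, -1.0]), 1), "N  (downward push)")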

  12. Method and apparatus for imparting strength to a material using sliding loads

    DOEpatents

    Hughes, Darcy Anne; Dawson, Daniel B.; Korellis, John S.

    1999-01-01

    A method of enhancing the strength of metals by affecting subsurface zones developed during the application of large sliding loads. Stresses which develop locally within the near surface zone can be many times larger than those predicted from the applied load and the friction coefficient. These stress concentrations arise from two sources: 1) asperity interactions and 2) local and momentary bonding between the two surfaces. By controlling these parameters more desirable strength characteristics can be developed in weaker metals to provide much greater strength to rival that of steel, for example.

  13. Method And Apparatus For Imparting Strength To Materials Using Sliding Loads

    DOEpatents

    Hughes, Darcy Anne; Dawson, Daniel B.; Korellis, John S.

    1999-03-16

    A method of enhancing the strength of metals by affecting subsurface zones developed during the application of large sliding loads. Stresses which develop locally within the near surface zone can be many times larger than those predicted from the applied load and the friction coefficient. These stress concentrations arise from two sources: 1) asperity interactions and 2) local and momentary bonding between the two surfaces. By controlling these parameters more desirable strength characteristics can be developed in weaker metals to provide much greater strength to rival that of steel, for example.

  14. Grinding damage assessment for CAD-CAM restorative materials.

    PubMed

    Curran, Philippe; Cattani-Lorente, Maria; Anselm Wiskott, H W; Durual, Stéphane; Scherrer, Susanne S

    2017-03-01

    To assess surface/subsurface damage after grinding with diamond discs on five CAD-CAM restorative materials and to estimate potential losses in strength based on crack size measurements of the generated damage. The materials tested were: lithium disilicate (LIT) glass-ceramic (e.max CAD), leucite glass-ceramic (LEU) (Empress CAD), feldspar ceramic (VM2) (Vita Mark II), resin-infiltrated feldspar ceramic (EN) (Enamic) and a composite reinforced with nanoceramics (LU) (Lava Ultimate). Specimens were cut from CAD-CAM blocks and pair-wise mirror polished for the bonded interface technique. Top surfaces were ground with diamond discs of 75, 54 and 18 μm, respectively. Chip damage was measured on the bonded interface using SEM. Fracture mechanics relationships were used to estimate fracture stresses based on average and maximum chip depths, assuming these to represent strength-limiting flaws subjected to tension, and to calculate potential losses in strength compared to manufacturer's data. Grinding with a 75 μm diamond disc induced critical chips on the bonded interface averaging 100 μm, with a potential strength loss estimated between 33% and 54% for all three glass-ceramics (LIT, LEU, VM2). The softer materials EN and LU were little susceptible to damage, with chips averaging 26 μm and 17 μm respectively and no loss in strength. Grinding with 18 μm diamond discs was still quite detrimental for LIT, with average chip sizes of 43 μm and a potential strength loss of 42%. It is essential to understand that grinding glass-ceramics or feldspar ceramics with diamond discs induces surface and subsurface damage that can lower the strength of the ceramic. Careful polishing steps should be carried out after grinding, especially when dealing with glass-ceramics. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
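
    The fracture-mechanics step described in this record can be illustrated with the standard flaw-size relation sigma_f = K_Ic / (Y sqrt(pi c)); the toughness and geometry factor below are assumed placeholder values for a glass-ceramic, not data from the study:

        # Strength implied by a surface flaw of given depth (chip depth as the critical flaw).
        from math import pi, sqrt

        def fracture_stress(k_ic_mpa_sqrt_m, chip_depth_um, y_geom=1.12):
            """Fracture stress (MPa) for an edge flaw of depth chip_depth_um (micrometres)."""
            c = chip_depth_um * 1e-6                 # flaw depth in metres
            return k_ic_mpa_sqrt_m / (y_geom * sqrt(pi * c))

        k_ic = 1.5                                   # assumed toughness, MPa*sqrt(m)
        for depth in (17, 43, 100):                  # chip depths reported in the abstract, um
            print(f"{depth:>4} um chip -> ~{fracture_stress(k_ic, depth):.0f} MPa")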

  15. Relationships between self-reported lifetime physical activity, estimates of current physical fitness, and aBMD in adult premenopausal women.

    PubMed

    Greenway, Kathleen G; Walkley, Jeff W; Rich, Peter A

    2015-01-01

    Osteoporosis is common, and physical activity is important in its prevention and treatment. Of the categories of historical physical activity (PA) examined, we found that weight-bearing and very hard physical activity had the strongest relationships with areal bone mineral density (aBMD) throughout growth and into adulthood, while for measures of strength, only grip strength proved to be an independent predictor of aBMD. To examine relationships between aBMD (total body, lumbar spine, proximal femur, tibial shaft, distal radius) and estimates of historical PA, current strength, and cardiovascular fitness in adult premenopausal women. One hundred fifty-two adult premenopausal women (40 ± 9.6 years) underwent aBMD measurement (dual-energy X-ray absorptiometry, DXA) and completed surveys to estimate historical physical activity representative of three decades (Kriska et al. [1]), while subsets underwent functional tests of isokinetic strength (hamstrings and quadriceps), grip strength (hand dynamometer), and maximum oxygen uptake (MaxVO2; cycle ergometer). Historical PA was characterized by demand (metabolic equivalents, PA > 3 METS; PA > 7 METS) and type (weight-bearing; high impact). Significant positive independent predictors varied by decade and site, with weight-bearing exercise and PA > 3 METS significant for the tibial shaft (10-19 decade) and only PA > 7 METS significant for the final two decades (20-29 and 30-39 years; total body and total hip). A significant negative correlation between high impact activity and tibial shaft aBMD appeared for the final decade. For strength measures, only grip strength was an independent predictor (total body, total hip), while MaxVO2 provided a significant independent prediction for the tibial shaft. Past PA > 7 METS was positively associated with aBMD, and such activity should probably constitute a relatively high proportion of all weekly PA to positively affect aBMD. The findings warrant more detailed investigations in a prospective study, specifically also investigating the potentially negative effects of high impact PA on tibial aBMD.

  16. The Strength of the Metal - Aluminum Oxide Interface

    NASA Technical Reports Server (NTRS)

    Pepper, S. V.

    1984-01-01

    The strength of the interface between metals and aluminum oxide is an important factor in the successful operation of devices found throughout modern technology. One finds the interface in machine tools, jet engines, and microelectronic integrated circuits. Depending on the application, however, the interface may need to be strong or weak. The diverse technological demands have led to some general ideas concerning the origin of the interfacial strength, and have stimulated fundamental research on the problem. The present status of our understanding of the source of the strength of the metal - aluminum oxide interface in terms of interatomic bonds is reviewed. Some future directions for research are suggested.

  17. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
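
    As a minimal illustration of the two-parameter Weibull strength description mentioned in this record (uniaxial, single-stress form; the modulus and characteristic strength below are assumed values, and CARES/LIFE's volume integration, multiaxial models, and subcritical crack growth are not reproduced here):

        # Two-parameter Weibull failure probability for a uniaxial applied stress.
        import math

        def failure_probability(sigma_mpa, m=10.0, sigma0_mpa=400.0):
            """P_f = 1 - exp(-(sigma / sigma_0)^m); m and sigma_0 are assumed values."""
            return 1.0 - math.exp(-((sigma_mpa / sigma0_mpa) ** m))

        for s in (200, 300, 400):
            print(f"sigma = {s} MPa -> P_f = {failure_probability(s):.3f}")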

  18. Electromagnetic exposure compliance estimation using narrowband directional measurements.

    PubMed

    Stratakis, D; Miaoudakis, A; Xenos, T; Zacharopoulos, V

    2008-01-01

    The increased number of everyday applications that rely on wireless communication has drawn attention to several concerns about the adverse health effects that prolonged or even short-time exposure might have on humans. International organisations and countries have adopted guidelines and legislation for public safety. These include reference levels (RLs) for field-strength electromagnetic quantities. To check for RL compliance in an environment with multiple transmitters of various types, analytical simulation models may be implemented provided that all the necessary information is available. Since this is generally not the case in most practical situations, on-site measurements have to be performed. The necessary equipment for measurements of this type usually includes broadband field meters suitable for measuring the field strength over the whole bandwidth of the field sensor used. These types of measurements have several drawbacks: to begin with, given that RLs are frequency dependent, compliance evaluation can be misleading since no information is available regarding the measured spectrum distribution. Furthermore, in a multi-transmitter environment there is no way of distinguishing the contribution of a specific source to the overall field measured. This problem can be resolved using narrowband directional receiver antennas, yet there is always the need for a priori knowledge of the polarisation of the incident electromagnetic wave. In this work, the use of measurement schemes of this type is addressed. A method independent of the polarisation of the incident wave is proposed, and a way to evaluate a single source's contribution to the total field in a multi-transmitter environment, along with the polarisation of the measured incident wave, is presented.
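
    One common way to express a multi-source, frequency-dependent compliance check of the kind discussed in this record is an exposure quotient that sums each narrowband contribution against its reference level; the field values and reference levels below are illustrative, and this summation form is a standard guideline convention rather than necessarily the specific method proposed in the paper:

        # Illustrative multi-source compliance check: sum the squared ratios of the
        # measured field strength to the frequency-dependent reference level (<= 1 passes).
        measurements = [
            # (label, E_measured in V/m, reference level in V/m at that frequency)
            ("FM broadcast", 1.2, 28.0),
            ("GSM 900",      3.5, 41.0),
            ("UMTS 2100",    2.0, 61.0),
        ]

        quotient = sum((e / rl) ** 2 for _, e, rl in measurements)
        print(f"exposure quotient = {quotient:.3f}",
              "(compliant)" if quotient <= 1 else "(exceeds reference levels)")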

  19. Shear bond strength to enamel after power bleaching activated by different sources.

    PubMed

    Can-Karabulut, Deniz C; Karabulut, Baris

    2010-01-01

    The purpose of the present study was to evaluate the enamel bond strength of a composite resin material after hydrogen peroxide bleaching activated by a diode laser (LaserSmile), an ozone device (HealOzone), a light-emitting diode (BT Cool whitening system), and a quartz-Plus. Fifty extracted caries-free permanent incisors were used in this study. Thirty-eight percent hydrogen peroxide gel was applied to sound, flattened labial enamel surfaces and activated by the different sources. Enamel surfaces that had received no treatment were used as control samples. Bonding agent was applied according to the manufacturer's instructions and the adhesion test was performed according to ISO/TS 11405. Statistical analysis showed a significant influence of the different activation techniques of hydrogen peroxide on shear bond strength to enamel (ANOVA, LSD, P < 0.05). The data in this in vitro explorative study suggest that the activation of hydrogen peroxide by different sources may further affect the shear bond strength of subsequent composite resin restorations to enamel. Within the limitations of this in vitro study, further studies examining the structural changes of activated hydrogen peroxide-treated enamel are needed. Due to the different activation methods and the effects of light irradiation duration, longer time periods may be needed before application of adhesive restorations to enamel, compared with non-activated bleaching.

  20. Origin of acoustic emission produced during single point machining

    NASA Astrophysics Data System (ADS)

    Heiple, C. R.; Carpenter, S. H.; Armentrout, D. L.

    1991-05-01

    Acoustic emission was monitored during single point, continuous machining of 4340 steel and Ti-6Al-4V as a function of heat treatment. Acoustic emission produced during tensile and compressive deformation of these alloys has been previously characterized as a function of heat treatment. Heat treatments which increase the strength of 4340 steel increase the amount of acoustic emission produced during deformation, while heat treatments which increase the strength of Ti-6Al-4V decrease the amount of acoustic emission produced during deformation. If chip deformation were the primary source of acoustic emission during single point machining, then opposite trends in the level of acoustic emission produced during machining as a function of material strength would be expected for these two alloys. Trends in rms acoustic emission level with increasing strength were similar for both alloys, demonstrating that chip deformation is not a major source of acoustic emission in single point machining. Acoustic emission has also been monitored as a function of machining parameters on 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, lead, and teflon. The data suggest that sliding friction between the nose and/or flank of the tool and the newly machined surface is the primary source of acoustic emission. Changes in acoustic emission with tool wear were strongly material dependent.
