Sample records for empirical correction factor

  1. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector-correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a geant4 Monte Carlo model of the detectors and the Linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by a factor of 3.7% for a 1.1 cm diameter field and higher for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
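
    For reference, the semiempirical factors discussed above follow the standard small-field formalism in which the correction factor is the ratio of the dose-to-water output factor (e.g., from Monte Carlo) to the measured detector output factor. The sketch below is a minimal illustration of that ratio; the numerical values are placeholders, not results from the study.

    ```python
    # Minimal sketch of k_Qclin,Qmsr(f_clin,f_msr); illustrative numbers only.
    def detector_correction_factor(dw_clin, dw_msr, m_clin, m_msr):
        """Ratio of the dose-to-water output factor to the detector-reading output factor."""
        return (dw_clin / dw_msr) / (m_clin / m_msr)

    # Example: a diode that over-responds by a few percent in a ~1 cm field
    # relative to the machine-specific reference field gives k < 1.
    k = detector_correction_factor(dw_clin=0.68, dw_msr=1.00, m_clin=0.71, m_msr=1.00)
    print(f"k = {k:.3f}")
    ```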

  2. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    Profile count method for estimating cell number in sectioned tissue applies a correction factor for double count (resulting from transection during sectioning) of count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units that are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The Neurotology and House Histological Temporal Bone Laboratory at University of California at Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that double-count correction factors for nucleus count unit (0.91) and nucleolus count unit (0.92) matched the published factors. We discovered that nuclei and, therefore, spiral ganglion cells were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
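
    For context, the classical double-count correction referred to above is Abercrombie's factor T/(T + d), with T the section thickness and d the count-unit diameter. The sketch below shows one plausible way to fold the reported 6.3% undercount into a combined nucleolus factor; the combination rule is an assumption for illustration, not the authors' exact derivation.

    ```python
    # Hedged sketch: double-count factor plus an undercount element.
    # The classical profile-count correction has the form f = T / (T + d) (Abercrombie).
    f_double_nucleolus = 0.92   # empirical double-count factor from the study
    undercount = 0.063          # fraction of cells missed when counting nucleoli

    # One plausible combination: inflate the double-count factor to restore missed cells,
    # 0.92 / (1 - 0.063) ≈ 0.98, the factor recommended for 20-µm sections.
    f_total = f_double_nucleolus / (1.0 - undercount)
    print(f"combined nucleolus correction factor ≈ {f_total:.2f}")
    ```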

  3. SU-E-T-123: Anomalous Altitude Effect in Permanent Implant Brachytherapy Seeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watt, E; Spencer, DP; Meyer, T

    Purpose: Permanent seed implant brachytherapy procedures require the measurement of the air kerma strength of seeds prior to implant. This is typically accomplished using a well-type ionization chamber. Previous measurements (Griffin et al., 2005; Bohm et al., 2005) of several low-energy seeds using the air-communicating HDR 1000 Plus chamber have demonstrated that the standard temperature-pressure correction factor, P_TP, may overcompensate for air density changes induced by altitude variations by up to 18%. The purpose of this work is to present empirical correction factors for two clinically-used seeds (IsoAid ADVANTAGE™ ¹⁰³Pd and Nucletron selectSeed ¹²⁵I) for which empirical altitude correction factors do not yet exist in the literature when measured with the HDR 1000 Plus chamber. Methods: An in-house constructed pressure vessel containing the HDR 1000 Plus well chamber and a digital barometer/thermometer was pumped or evacuated, as appropriate, to a variety of pressures from 725 to 1075 mbar. Current measurements, corrected with P_TP, were acquired for each seed at these pressures and normalized to the reading at ‘standard’ pressure (1013.25 mbar). Results: Measurements in this study have shown that utilization of P_TP can overcompensate in the corrected current reading by up to 20% and 17% for the IsoAid Pd-103 and the Nucletron I-125 seeds, respectively. Compared to literature correction factors for other seed models, the correction factors in this study diverge by up to 2.6% and 3.0% for iodine (with silver) and palladium, respectively, indicating the need for seed-specific factors. Conclusion: The use of seed-specific altitude correction factors can reduce uncertainty in the determination of air kerma strength. The empirical correction factors determined in this work can be applied in clinical quality assurance measurements of air kerma strength for two previously unpublished seed designs (IsoAid ADVANTAGE™ ¹⁰³Pd and Nucletron selectSeed ¹²⁵I) with the HDR 1000 Plus well chamber.
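
    For reference, the standard air-density correction P_TP mentioned above has the usual form below; the empirical seed-specific altitude factor is shown only as a placeholder to indicate where it would enter the chain of corrections (its value here is assumed, not taken from this work).

    ```python
    # Standard temperature-pressure correction for a vented well chamber,
    # with a placeholder seed-specific altitude factor applied afterwards.
    def p_tp(temp_c, pressure_kpa, ref_temp_c=22.0, ref_pressure_kpa=101.325):
        return ((273.2 + temp_c) / (273.2 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)

    raw_current = 1.0e-9                               # A, illustrative reading
    reading_ptp = raw_current * p_tp(20.0, 85.0)       # ~850 mbar, a high-altitude clinic
    k_altitude = 0.87                                  # hypothetical empirical seed-specific factor
    print(reading_ptp, reading_ptp * k_altitude)
    ```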

  4. High-resolution gamma ray attenuation density measurements on mining exploration drill cores, including cut cores

    NASA Astrophysics Data System (ADS)

    Ross, P.-S.; Bourke, A.

    2017-01-01

    Physical property measurements are increasingly important in mining exploration. For density determinations on rocks, one method applicable to exploration drill cores relies on gamma ray attenuation. This non-destructive method is ideal because each measurement takes only 10 s, making it suitable for high-resolution logging. However, calibration has been problematic. In this paper we present new empirical, site-specific correction equations for whole NQ and BQ cores. The corrections force the gamma densities back to the "true" values established by the immersion method. For the NQ core caliber, the density range extends to high values (massive pyrite, 5 g/cm³) and the correction is thought to be very robust. We also present additional empirical correction factors for cut cores which take into account the missing material. These "cut core correction factors", which are not site-specific, were established by making gamma density measurements on truncated aluminum cylinders of various residual thicknesses. Finally we show two examples of application from the Abitibi Greenstone Belt in Canada. The gamma ray attenuation measurement system is part of a multi-sensor core logger which also determines magnetic susceptibility, geochemistry and mineralogy on rock cores, and performs line-scan imaging.
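
    A minimal sketch of the underlying measurement and a site-specific linear correction of the kind described above is given below; the mass attenuation coefficient and the correction coefficients are illustrative assumptions, not the published calibration.

    ```python
    import math

    # Beer-Lambert density estimate from gamma transmission through a core of diameter d,
    # followed by a site-specific linear correction toward immersion ("true") densities.
    def gamma_density(i_transmitted, i_incident, diameter_cm, mu_mass_cm2_per_g=0.077):
        return -math.log(i_transmitted / i_incident) / (mu_mass_cm2_per_g * diameter_cm)

    def site_corrected_density(rho_gamma, a=1.02, b=-0.05):
        # a and b would be fitted per site and per core caliber (NQ, BQ, cut core).
        return a * rho_gamma + b

    rho = gamma_density(i_transmitted=2.1e4, i_incident=8.0e4, diameter_cm=4.76)  # NQ core
    print(rho, site_corrected_density(rho))
    ```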

  5. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    NASA Astrophysics Data System (ADS)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of a slope surface from high-quality satellite imagery such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, the accurate sensor calibration parameters and atmospheric conditions required by physics-based topographic correction models cannot always be obtained. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data; the model was tested and verified with image data from the Chinese satellites HJ and GF. The results show that, for HJ, the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% after correction. The reflectance difference between slopes facing toward and away from the sun is also reduced after correction.
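
    To make the family of semi-empirical corrections concrete, a generic C-correction of the kind this model builds on is sketched below; this is a standard illustration, not the authors' specific formulation.

    ```python
    import numpy as np

    def c_correction(reflectance, cos_i, cos_solar_zenith):
        """Generic semi-empirical C-correction: fit reflectance against cos(i) from a DEM,
        then rescale so sunlit and shaded slopes are brought to a common reference."""
        slope, intercept = np.polyfit(cos_i, reflectance, 1)
        c = intercept / slope
        return reflectance * (cos_solar_zenith + c) / (cos_i + c)

    # Illustrative pixels: shaded slopes (low cos_i) appear darker before correction.
    cos_i = np.array([0.2, 0.5, 0.8, 0.95])
    refl = np.array([0.10, 0.18, 0.26, 0.30])
    print(c_correction(refl, cos_i, cos_solar_zenith=0.85))
    ```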

  6. Design of exchange-correlation functionals through the correlation factor approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlíková Přecechtělová, Jana; Bahmann, Hilke

    The correlation factor model is developed in which the spherically averaged exchange-correlation hole of Kohn-Sham theory is factorized into an exchange hole model and a correlation factor. The exchange hole model reproduces the exact exchange energy per particle. The correlation factor is constructed in such a manner that the exchange-correlation energy correctly reduces to exact exchange in the high density and rapidly varying limits. Four different correlation factor models are presented which satisfy varying sets of physical constraints. Three models are free from empirical adjustments to experimental data, while one correlation factor model draws on one empirical parameter. The correlation factor models are derived in detail and the resulting exchange-correlation holes are analyzed. Furthermore, the exchange-correlation energies obtained from the correlation factor models are employed to calculate total energies, atomization energies, and barrier heights. It is shown that accurate, non-empirical functionals can be constructed building on exact exchange. Avenues for further improvements are outlined as well.

  7. Empirical Correction for Differences in Chemical Exchange Rates in Hydrogen Exchange-Mass Spectrometry Measurements.

    PubMed

    Toth, Ronald T; Mills, Brittney J; Joshi, Sangeeta B; Esfandiary, Reza; Bishop, Steven M; Middaugh, C Russell; Volkin, David B; Weis, David D

    2017-09-05

    A barrier to the use of hydrogen exchange-mass spectrometry (HX-MS) in many contexts, especially analytical characterization of various protein therapeutic candidates, is that differences in temperature, pH, ionic strength, buffering agent, or other additives can alter chemical exchange rates, making HX data gathered under differing solution conditions difficult to compare. Here, we present data demonstrating that HX chemical exchange rates can be substantially altered not only by the well-established variables of temperature and pH but also by additives including arginine, guanidine, methionine, and thiocyanate. To compensate for these additive effects, we have developed an empirical method to correct the hydrogen-exchange data for these differences. First, differences in chemical exchange rates are measured by use of an unstructured reporter peptide, YPI. An empirical chemical exchange correction factor, determined by use of the HX data from the reporter peptide, is then applied to the HX measurements obtained from a protein of interest under different solution conditions. We demonstrate that the correction is experimentally sound through simulation and in a proof-of-concept experiment using unstructured peptides under slow-exchange conditions (pD 4.5 at ambient temperature). To illustrate its utility, we applied the correction to HX-MS excipient screening data collected for a pharmaceutically relevant IgG4 mAb being characterized to determine the effects of different formulations on backbone dynamics.

  8. Method to determine the position-dependent metal correction factor for dose-rate equivalent laser testing of semiconductor devices

    DOEpatents

    Horn, Kevin M.

    2013-07-09

    A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.

  9. Dispersive approach to two-photon exchange in elastic electron-proton scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blunden, P. G.; Melnitchouk, W.

    We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, results are compared with recent measurements of e⁺p to e⁻p cross section ratios from the CLAS, VEPP-3 and OLYMPUS experiments.

  10. The Empirical Verification of an Assignment of Items to Subtests: The Oblique Multiple Group Method versus the Confirmatory Common Factor Method

    ERIC Educational Resources Information Center

    Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.; ten Berge, Jos M. F.

    2008-01-01

    This study compares two confirmatory factor analysis methods on their ability to verify whether correct assignments of items to subtests are supported by the data. The confirmatory common factor (CCF) method is used most often and defines nonzero loadings so that they correspond to the assignment of items to subtests. Another method is the oblique…

  11. Monte Carlo modeling of fluorescence in semi-infinite turbid media

    NASA Astrophysics Data System (ADS)

    Ong, Yi Hong; Finlay, Jarod C.; Zhu, Timothy C.

    2018-02-01

    The incident field size and the interplay of absorption and scattering can influence the in-vivo light fluence rate distribution and complicate the absolute quantification of fluorophore concentration in-vivo. In this study, we use Monte Carlo simulations to evaluate the effect of the incident beam radius and the optical properties on the fluorescence signal collected by an isotropic detector placed on the tissue surface. The optical properties at the excitation and emission wavelengths are assumed to be identical. We compute correction factors to correct the fluorescence intensity for variations due to incident field size and optical properties. The correction factors are fitted to a four-parameter empirical correction function and the changes in each parameter are compared for various beam radii over a range of physiologically relevant tissue optical properties (μa = 0.1-1 cm⁻¹, μs' = 5-40 cm⁻¹).

  12. Simulation and Correction of Triana-Viewed Earth Radiation Budget with ERBE/ISCCP Data

    NASA Technical Reports Server (NTRS)

    Huang, Jian-Ping; Minnis, Patrick; Doelling, David R.; Valero, Francisco P. J.

    2002-01-01

    This paper describes the simulation of the earth radiation budget (ERB) as viewed by Triana and the development of correction models for converting Triana-viewed radiances into a complete ERB. A full range of Triana views and global radiation fields are simulated using a combination of datasets from ERBE (Earth Radiation Budget Experiment) and ISCCP (International Satellite Cloud Climatology Project) and analyzed with a set of empirical correction factors specific to the Triana views. The results show that the accuracy of global correction factors to estimate ERB from Triana radiances is a function of the Triana position relative to the Lagrange-1 (L1) point or the Sun location. Spectral analysis of the global correction factor indicates that both shortwave (SW; 0.2-5.0 microns) and longwave (LW; 5-50 microns) parameters undergo seasonal and diurnal cycles that dominate the periodic fluctuations. The diurnal cycle, especially its amplitude, is also strongly dependent on the seasonal cycle. Based on these results, models are developed to correct the radiances for unviewed areas and anisotropic emission and reflection. A preliminary assessment indicates that these correction models can be applied to Triana radiances to produce the most accurate global ERB to date.

  13. Modeling bias and variation in the stochastic processes of small RNA sequencing

    PubMed Central

    Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-01-01

    The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495

  14. Air-braked cycle ergometers: validity of the correction factor for barometric pressure.

    PubMed

    Finn, J P; Maxwell, B F; Withers, R T

    2000-10-01

    Barometric pressure exerts by far the greatest influence of the three environmental factors (barometric pressure, temperature and humidity) on power outputs from air-braked ergometers. The barometric pressure correction factor for power outputs from air-braked ergometers is in widespread use but apparently has never been empirically validated. Our experiment validated this correction factor by calibrating two air-braked cycle ergometers in a hypobaric chamber using a dynamic calibration rig. The results showed that if the power output correction for changes in air resistance at barometric pressures corresponding to altitudes of 38, 600, 1,200 and 1,800 m above mean sea level was applied, then the coefficients of variation were 0.8-1.9% over the range of 160-1,597 W. The overall mean error was 3.0%, but this included up to 0.73% of propagated error associated with errors in the measurement of: (a) temperature, (b) relative humidity, (c) barometric pressure, and (d) force, distance and angular velocity by the dynamic calibration rig. The overall mean error therefore approximated the +/- 2.0% of true load specified by the Laboratory Standards Assistance Scheme of the Australian Sports Commission. The validity of the correction factor for barometric pressure on power output was therefore demonstrated over the altitude range of 38-1,800 m.
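
    A minimal sketch of the density-based correction being validated is shown below, using the dry-air ideal-gas form and ignoring humidity; whether the factor multiplies or divides the displayed power depends on the convention assumed by the ergometer electronics, so the direction shown here is an assumption for illustration.

    ```python
    def air_density_ratio(pressure_mbar, temp_c, std_pressure_mbar=1013.25, std_temp_c=20.0):
        """rho_ambient / rho_standard from the dry-air ideal-gas law (humidity ignored)."""
        return (pressure_mbar / std_pressure_mbar) * ((273.15 + std_temp_c) / (273.15 + temp_c))

    # Air-braked power scales with air density; if the flywheel electronics assume
    # standard density, the true mechanical power at altitude is the displayed power
    # scaled by the ambient-to-standard density ratio.
    displayed_power = 300.0                                   # W
    print(displayed_power * air_density_ratio(815.0, 20.0))   # ~1,800 m altitude
    ```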

  15. Molecular Volumes and the Stokes-Einstein Equation

    ERIC Educational Resources Information Center

    Edward, John T.

    1970-01-01

    Examines the limitations of the Stokes-Einstein equation as it applies to small solute molecules. Discusses molecular volume determinations by atomic increments, molecular models, molar volumes of solids and liquids, and molal volumes. Presents an empirical correction factor for the equation which applies to molecular radii as small as 2 angstrom…
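
    For reference, the relation being corrected is the classical Stokes-Einstein diffusivity D = k_BT/(6πηr); the correction argument below is only a placeholder to show where an empirical factor for very small solutes would enter, not the factor derived in the article.

    ```python
    import math

    K_B = 1.380649e-23  # J/K

    def stokes_einstein(temp_k, viscosity_pa_s, radius_m, correction=1.0):
        """Classical Stokes-Einstein diffusivity with an optional empirical correction factor."""
        return correction * K_B * temp_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

    # Water at 25 °C and a ~2 Å solute; a correction > 1 reflects the breakdown of the
    # stick-boundary assumption when solute and solvent molecules are similar in size.
    print(stokes_einstein(298.15, 0.89e-3, 2.0e-10, correction=1.6))
    ```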

  16. An empirical model for polarized and cross-polarized scattering from a vegetation layer

    NASA Technical Reports Server (NTRS)

    Liu, H. L.; Fung, A. K.

    1988-01-01

    An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated by using a correction factor based on the matrix doubling method. The major feature of this model is that all parameters in the model are physical parameters of the vegetation medium. There are no regression parameters. Comparisons of this empirical model with the theoretical matrix-doubling method and radar measurements indicate good agreement in polarization, angular trends, and ka up to 4, where k is the wave number and a is the disk radius. The computational time is shortened by a factor of 8, relative to the theoretical model calculation.

  17. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    PubMed

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
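
    The Bartlett-style idea described above amounts to rescaling T_ML so that its empirical mean matches the nominal degrees of freedom; the sketch below illustrates that generic rescaling with simulated replicates and assumed numbers, not the paper's fitted correction formula.

    ```python
    import numpy as np

    def empirically_corrected_statistic(t_ml, t_ml_replicates, df):
        """Rescale T_ML by df / mean(T_ML), the mean estimated from replicate statistics
        obtained under the correct model (e.g., by simulation or resampling)."""
        return t_ml * df / np.mean(t_ml_replicates)

    rng = np.random.default_rng(0)
    df = 275                                          # e.g., a model with 275 degrees of freedom
    t_reps = rng.chisquare(df, size=1000) * 1.15      # inflated statistics, as seen when p is large
    print(empirically_corrected_statistic(t_ml=330.0, t_ml_replicates=t_reps, df=df))
    ```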

  18. Comparison of silver and molybdenum microfocus X-ray sources for single-crystal structure determination.

    PubMed

    Krause, Lennard; Herbst-Irmer, Regine; Sheldrick, George M; Stalke, Dietmar

    2015-02-01

    The quality of diffraction data obtained using silver and molybdenum microsources has been compared for six model compounds with a wide range of absorption factors. The experiments were performed on two 30 W air-cooled Incoatec IµS microfocus sources with multilayer optics mounted on a Bruker D8 goniometer with a SMART APEX II CCD detector. All data were analysed, processed and refined using standard Bruker software. The results show that Ag Kα radiation can be beneficial when heavy elements are involved. A numerical absorption correction based on the positions and indices of the crystal faces is shown to be of limited use for the highly focused microsource beams, presumably because the assumption that the crystal is completely bathed in a (top-hat profile) beam of uniform intensity is no longer valid. Fortunately the empirical corrections implemented in SADABS, although originally intended as a correction for absorption, also correct rather well for the variations in the effective volume of the crystal irradiated. In three of the cases studied (two Ag and one Mo) the final SHELXL R1 against all data after application of empirical corrections implemented in SADABS was below 1%. Since such corrections are designed to optimize the agreement of the intensities of equivalent reflections with different paths through the crystal but the same Bragg 2θ angles, a further correction is required for the 2θ dependence of the absorption. For this, SADABS uses the transmission factor of a spherical crystal with a user-defined value of μr (where μ is the linear absorption coefficient and r is the effective radius of the crystal); the best results are obtained when r is biased towards the smallest crystal dimension. The results presented here suggest that the IUCr publication requirement that a numerical absorption correction must be applied for strongly absorbing crystals is in need of revision.

  19. Correcting intensity loss errors in the absence of texture-free reference samples during pole figure measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, Ahmed A., E-mail: asaleh@uow.edu.au

    Even with the use of X-ray polycapillary lenses, sample tilting during pole figure measurement results in a decrease in the recorded X-ray intensity. The magnitude of this error is affected by the sample size and/or the finite detector size. These errors can be typically corrected by measuring the intensity loss as a function of the tilt angle using a texture-free reference sample (ideally made of the same alloy as the investigated material). Since texture-free reference samples are not readily available for all alloys, the present study employs an empirical procedure to estimate the correction curve for a particular experimental configuration. It involves the use of real texture-free reference samples that pre-exist in any X-ray diffraction laboratory to first establish the empirical correlations between X-ray intensity, sample tilt and their Bragg angles and thereafter generate correction curves for any Bragg angle. It will be shown that the empirically corrected textures are in very good agreement with the experimentally corrected ones. - Highlights: • Sample tilting during X-ray pole figure measurement leads to intensity loss errors. • Texture-free reference samples are typically used to correct the pole figures. • An empirical correction procedure is proposed in the absence of reference samples. • The procedure relies on reference samples that pre-exist in any texture laboratory. • Experimentally and empirically corrected textures are in very good agreement.

  20. Dispersive approach to two-photon exchange in elastic electron-proton scattering

    DOE PAGES

    Blunden, P. G.; Melnitchouk, W.

    2017-06-14

    We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, results are compared with recent measurements of e⁺p to e⁻p cross section ratios from the CLAS, VEPP-3 and OLYMPUS experiments.

  1. Comment on “Empirical determination of depth-distance corrections for mb and MW from Global Seismograph Network Stations” By Guust Nolet, Steve Krueger and Robert M. Clouser

    NASA Astrophysics Data System (ADS)

    Murphy, J. R.; McLaughlin, K. L.

    1998-12-01

    In their recent article, Nolet et al. (1998) presented an analysis which led them to conclude that the epicentral distance and focal depth correction factors for mb which were previously published by Veith and Clawson (1972) are not accurate for events with focal depths greater than 100 km. In this brief commentary, we present some independent evidence which has led us to conclude that the Veith/Clawson (V/C) corrections are in fact quite accurate, at least for seismic events having focal depths of less than about 400 km.

  2. Digital particle image velocimetry measurements of the downwash distribution of a desert locust Schistocerca gregaria

    PubMed Central

    Bomphrey, Richard J; Taylor, Graham K; Lawson, Nicholas J; Thomas, Adrian L.R

    2005-01-01

    Actuator disc models of insect flight are concerned solely with the rate of momentum transfer to the air that passes through the disc. These simple models assume that an even pressure is applied across the disc, resulting in a uniform downwash distribution. However, a correction factor, k, is often included to correct for the difference in efficiency between the assumed even downwash distribution, and the real downwash distribution. In the absence of any empirical measurements of the downwash distribution behind a real insect, the values of k used in the literature have been necessarily speculative. Direct measurement of this efficiency factor is now possible, and could be used to compare the relative efficiencies of insect flight across the Class. Here, we use Digital Particle Image Velocimetry to measure the instantaneous downwash distribution, mid-downstroke, of a tethered desert locust (Schistocerca gregaria). By integrating the downwash distribution, we are thereby able to provide the first direct empirical measurement of k for an insect. The measured value of k=1.12 corresponds reasonably well with that predicted by previous theoretical studies. PMID:16849240

  3. Semi-empirical estimation of organic compound fugacity ratios at environmentally relevant system temperatures.

    PubMed

    van Noort, Paul C M

    2009-06-01

    Fugacity ratios of organic compounds are used to calculate (subcooled) liquid properties, such as solubility or vapour pressure, from solid properties and vice versa. They can be calculated from the entropy of fusion, the melting temperature, and heat capacity data for the solid and the liquid. For many organic compounds, values for the fusion entropy are lacking. Heat capacity data are even scarcer. In the present study, semi-empirical compound class specific equations were derived to estimate fugacity ratios from molecular weight and melting temperature for polycyclic aromatic hydrocarbons and polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans. These equations estimate fugacity ratios with an average standard error of about 0.05 log units. In addition, for compounds with known fusion entropy values, a general semi-empirical correction equation based on molecular weight and melting temperature was derived for estimation of the contribution of heat capacity differences to the fugacity ratio. This equation estimates the heat capacity contribution correction factor with an average standard error of 0.02 log units for polycyclic aromatic hydrocarbons, polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans.
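
    For reference, the quantity being estimated reduces, when heat-capacity terms are neglected, to the textbook expression ln F = -ΔS_fus(T_m - T)/(RT); the sketch below evaluates it with a Walden's-rule entropy of fusion as an assumed input, which is the kind of term the article's heat-capacity correction then adjusts.

    ```python
    import math

    R = 8.314  # J/(mol*K)

    def fugacity_ratio(delta_s_fus, t_melt_k, t_system_k):
        """Textbook fugacity ratio: ln F = -dS_fus * (T_m - T) / (R * T), heat capacity neglected."""
        return math.exp(-delta_s_fus * (t_melt_k - t_system_k) / (R * t_system_k))

    # Illustrative PAH-like solid at 25 °C, using a Walden's-rule entropy of fusion (~56.5 J/mol/K).
    print(fugacity_ratio(56.5, t_melt_k=489.0, t_system_k=298.15))
    ```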

  4. SU-F-BRE-14: Uncertainty Analysis for Dose Measurements Using OSLD NanoDots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kry, S; Alvarez, P; Stingo, F

    2014-06-15

    Purpose: Optically stimulated luminescent dosimeters (OSLD) are an increasingly popular dosimeter for research and clinical applications. They are also used by the Radiological Physics Center for remote auditing of machine output. In this work we robustly calculated the reproducibility and uncertainty of the OSLD nanoDot. Methods: For the RPC dose calculation, raw readings are corrected for depletion, element sensitivity, fading, linearity, and energy. System calibration is determined for the experimental OSLD irradiated at different institutions by using OSLD irradiated by the RPC under reference conditions (i.e., standards): 1 Gy in a Cobalt beam. The intra-dot and inter-dot reproducibilities (coefficient of variation) were determined from the history of RPC readings of these standards. The standard deviation of the corrected OSLD signal was then calculated analytically using a recursive formalism that did not rely on the normality assumption of the underlying uncertainties, or on any type of mathematical approximation. This analytical uncertainty was compared to that empirically estimated from >45,000 RPC beam audits. Results: The intra-dot variability was found to be 0.59%, with only a small variation between readers. Inter-dot variability was found to be 0.85%. The uncertainty in each of the individual correction factors was empirically determined. When the raw counts from each OSLD were adjusted for the appropriate correction factors, the analytically determined coefficient of variation was 1.8% over a range of institutional irradiation conditions that are seen at the RPC. This is reasonably consistent with the empirical observations of the RPC, where the coefficient of variation of the measured beam outputs is 1.6% (photons) and 1.9% (electrons). Conclusion: OSLD nanoDots provide sufficiently good precision for a wide range of applications, including the RPC remote monitoring program for megavoltage beams. This work was supported by PHS grant CA10953 awarded by the NIH (DHHS).
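
    The way independent correction-factor uncertainties combine for a corrected reading can be sketched as a simple quadrature sum; the component values below, other than the two reproducibilities quoted in the abstract, are assumptions for illustration.

    ```python
    import math

    def combined_cv(*component_cvs):
        """Coefficient of variation of a product of independent factors (small-CV approximation)."""
        return math.sqrt(sum(cv ** 2 for cv in component_cvs))

    cv_intra_dot = 0.0059                             # from the abstract
    cv_inter_dot = 0.0085                             # from the abstract
    cv_corrections = [0.006, 0.008, 0.005, 0.007]     # fading, linearity, energy, depletion (assumed)
    print(combined_cv(cv_intra_dot, cv_inter_dot, *cv_corrections))   # ≈ 0.017, same order as 1.8%
    ```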

  5. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.

  6. Application of Statistical Thermodynamics To Predict the Adsorption Properties of Polypeptides in Reversed-Phase HPLC.

    PubMed

    Tarasova, Irina A; Goloborodko, Anton A; Perlova, Tatyana Y; Pridatchenko, Marina L; Gorshkov, Alexander V; Evreinov, Victor V; Ivanov, Alexander R; Gorshkov, Mikhail V

    2015-07-07

    The theory of critical chromatography for biomacromolecules (BioLCCC) describes polypeptide retention in reversed-phase HPLC using the basic principles of statistical thermodynamics. However, whether this theory correctly depicts a variety of empirical observations and laws introduced for peptide chromatography over the last decades remains to be determined. In this study, by comparing theoretical results with experimental data, we demonstrate that the BioLCCC: (1) fits the empirical dependence of the polypeptide retention on the amino acid sequence length with R(2) > 0.99 and allows in silico determination of the linear regression coefficients of the log-length correction in the additive model for arbitrary sequences and lengths and (2) predicts the distribution coefficients of polypeptides with an accuracy from 0.98 to 0.99 R(2). The latter enables direct calculation of the retention factors for given solvent compositions and modeling of the migration dynamics of polypeptides separated under isocratic or gradient conditions. The obtained results demonstrate that the suggested theory correctly relates the main aspects of polypeptide separation in reversed-phase HPLC.

  7. Power considerations for λ inflation factor in meta-analyses of genome-wide association studies.

    PubMed

    Georgiopoulos, Georgios; Evangelou, Evangelos

    2016-05-19

    The genomic control (GC) approach is extensively used to effectively control false positive signals due to population stratification in genome-wide association studies (GWAS). However, GC affects the statistical power of GWAS. The loss of power depends on the magnitude of the inflation factor (λ) that is used for GC. We simulated meta-analyses of different GWAS. Minor allele frequency (MAF) ranged from 0·001 to 0·5 and λ was sampled from two scenarios: (i) random scenario (empirically-derived distribution of real λ values) and (ii) selected scenario from simulation parameter modification. Adjustment for λ was considered under single correction (within study corrected standard errors) and double correction (additional λ corrected summary estimate). MAF was a pivotal determinant of observed power. In random λ scenario, double correction induced a symmetric power reduction in comparison to single correction. For MAF 1·2 and MAF >5%. Our results provide a quick but detailed index for power considerations of future meta-analyses of GWAS that enables a more flexible design from early steps based on the number of studies accumulated in different groups and the λ values observed in the single studies.
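
    To make the two scenarios concrete, the generic genomic-control recipe is sketched below: single correction inflates each study's standard error by √λ before meta-analysis, and double correction additionally inflates the pooled standard error by a meta-level λ. The numbers are illustrative, and this is the standard GC recipe rather than the simulation design of the study.

    ```python
    import math

    def gc_single(betas, ses, lambdas):
        """Within-study genomic control: inflate each standard error by sqrt(lambda)."""
        return [(b, s * math.sqrt(max(l, 1.0))) for b, s, l in zip(betas, ses, lambdas)]

    def inverse_variance_meta(pairs):
        weights = [1.0 / s ** 2 for _, s in pairs]
        beta = sum(w * b for w, (b, _) in zip(weights, pairs)) / sum(weights)
        return beta, math.sqrt(1.0 / sum(weights))

    betas, ses, lambdas = [0.10, 0.08, 0.12], [0.03, 0.04, 0.05], [1.05, 1.10, 1.02]
    beta_meta, se_meta = inverse_variance_meta(gc_single(betas, ses, lambdas))
    lambda_meta = 1.04                                     # assumed meta-level inflation
    print(beta_meta / se_meta)                             # single correction z-score
    print(beta_meta / (se_meta * math.sqrt(lambda_meta)))  # double correction z-score (lower power)
    ```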

  8. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.

  9. Empirical STORM-E Model. I. Theoretical and Observational Basis

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 µm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 µm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 µm VER are fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 µm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.

  10. Decision-support models for empiric antibiotic selection in Gram-negative bloodstream infections.

    PubMed

    MacFadden, D R; Coburn, B; Shah, N; Robicsek, A; Savage, R; Elligsen, M; Daneman, N

    2018-04-25

    Early empiric antibiotic therapy in patients can improve clinical outcomes in Gram-negative bacteraemia. However, the widespread prevalence of antibiotic-resistant pathogens compromises our ability to provide adequate therapy while minimizing use of broad antibiotics. We sought to determine whether readily available electronic medical record data could be used to develop predictive models for decision support in Gram-negative bacteraemia. We performed a multi-centre cohort study, in Canada and the USA, of hospitalized patients with Gram-negative bloodstream infection from April 2010 to March 2015. We analysed multivariable models for prediction of antibiotic susceptibility at two empiric windows: Gram-stain-guided and pathogen-guided treatment. Decision-support models for empiric antibiotic selection were developed based on three clinical decision thresholds of acceptable adequate coverage (80%, 90% and 95%). A total of 1832 patients with Gram-negative bacteraemia were evaluated. Multivariable models showed good discrimination across countries and at both Gram-stain-guided (12 models, areas under the curve (AUCs) 0.68-0.89, optimism-corrected AUCs 0.63-0.85) and pathogen-guided (12 models, AUCs 0.75-0.98, optimism-corrected AUCs 0.64-0.95) windows. Compared to antibiogram-guided therapy, decision-support models of antibiotic selection incorporating individual patient characteristics and prior culture results have the potential to increase use of narrower-spectrum antibiotics (in up to 78% of patients) while reducing inadequate therapy. Multivariable models using readily available epidemiologic factors can be used to predict antimicrobial susceptibility in infecting pathogens with reasonable discriminatory ability. Implementation of sequential predictive models for real-time individualized empiric antibiotic decision-making has the potential to both optimize adequate coverage for patients while minimizing overuse of broad-spectrum antibiotics, and therefore requires further prospective evaluation. Readily available epidemiologic risk factors can be used to predict susceptibility of Gram-negative organisms among patients with bacteraemia, using automated decision-making models. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  11. Practical estimate of gradient nonlinearity for implementation of apparent diffusion coefficient bias correction.

    PubMed

    Malyarenko, Dariya I; Chenevert, Thomas L

    2014-12-01

    To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-average trace image correction was adequate only for isotropic medium. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.

  12. A semi-empirical approach to analyze the activities of cylindrical radioactive samples using gamma energies from 185 to 1764 keV.

    PubMed

    Huy, Ngo Quang; Binh, Do Quang

    2014-12-01

    This work suggests a method for determining the activities of cylindrical radioactive samples. The self-attenuation factor was applied to provide the self-absorption correction of gamma rays in the sample material. The experimental measurement of a (238)U reference sample and calculations using the MCNP5 code allow obtaining semi-empirical formulae for the detection efficiencies at gamma energies ranging from 185 to 1764 keV. These formulae were used to determine the activities of the (238)U, (226)Ra, (232)Th, (137)Cs and (40)K nuclides in the IAEA RGU-1, IAEA-434, IAEA RGTh-1, IAEA-152 and IAEA RGK-1 radioactive standards. Coincidence summing corrections for gamma rays in the (238)U and (232)Th series were applied. The activities obtained in this work were in good agreement with the reference values. Copyright © 2014 Elsevier Ltd. All rights reserved.
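
    A minimal sketch of a self-absorption correction of the kind applied above is given below, using the simple slab approximation (1 - e^(-μt))/(μt) rather than the full cylindrical treatment; the attenuation coefficient and efficiency are illustrative assumptions.

    ```python
    import math

    def self_attenuation_factor(mu_per_cm, thickness_cm):
        """Slab approximation to the self-absorption factor for a bulk sample."""
        x = mu_per_cm * thickness_cm
        return (1.0 - math.exp(-x)) / x

    def corrected_efficiency(eff_thin_sample, mu_per_cm, thickness_cm):
        return eff_thin_sample * self_attenuation_factor(mu_per_cm, thickness_cm)

    # Example: 186 keV line, soil-like matrix (mu assumed), 5 cm thick cylindrical sample.
    print(corrected_efficiency(0.032, mu_per_cm=0.20, thickness_cm=5.0))
    ```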

  13. 1/f noise from the laws of thermodynamics for finite-size fluctuations.

    PubMed

    Chamberlin, Ralph V; Nasir, Derek M

    2014-07-01

    Computer simulations of the Ising model exhibit white noise if thermal fluctuations are governed by Boltzmann's factor alone; whereas we find that the same model exhibits 1/f noise if Boltzmann's factor is extended to include local alignment entropy to all orders. We show that this nonlinear correction maintains maximum entropy during equilibrium fluctuations. Indeed, as with the usual way to resolve Gibbs' paradox that avoids entropy reduction during reversible processes, the correction yields the statistics of indistinguishable particles. The correction also ensures conservation of energy if an instantaneous contribution from local entropy is included. Thus, a common mechanism for 1/f noise comes from assuming that finite-size fluctuations strictly obey the laws of thermodynamics, even in small parts of a large system. Empirical evidence for the model comes from its ability to match the measured temperature dependence of the spectral-density exponents in several metals and to show non-Gaussian fluctuations characteristic of nanoscale systems.

  14. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    NASA Astrophysics Data System (ADS)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations are depending on the site, sun elevation and azimuth and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.

  15. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.

  16. Determination of the quenching correction factors for plastic scintillation detectors in therapeutic high-energy proton beams

    PubMed Central

    Wang, L L W; Perles, L A; Archambault, L; Sahoo, N; Mirkovic, D; Beddar, S

    2013-01-01

    Plastic scintillation detectors (PSDs) have many advantages over other detectors in small field dosimetry due to their high spatial resolution, excellent water equivalence and instantaneous readout. However, in proton beams, PSDs undergo a quenching effect which reduces the signal level significantly when the detector is close to the Bragg peak, where the linear energy transfer (LET) for protons is very high. This study measures the quenching correction factor (QCF) for a PSD in clinical passive-scattering proton beams and investigates the feasibility of using PSDs in depth-dose measurements in proton beams. A polystyrene-based PSD (BCF-12, ϕ0.5 mm × 4 mm) was used to measure the depth-dose curves in a water phantom for monoenergetic unmodulated proton beams of nominal energies 100, 180 and 250 MeV. A Markus plane-parallel ion chamber was also used to get the dose distributions for the same proton beams. From these results, the QCF as a function of depth was derived for these proton beams. Next, the LET depth distributions for these proton beams were calculated by using the MCNPX Monte Carlo code, based on the experimentally validated nozzle models for these passive-scattering proton beams. Then the relationship between the QCF and the proton LET could be derived as an empirical formula. Finally, the obtained empirical formula was applied to the PSD measurements to get the corrected depth-dose curves and they were compared to the ion chamber measurements. A linear relationship between QCF and LET, i.e. Birks' formula, was obtained for the proton beams studied. The result is in agreement with the literature. The PSD measurements after the quenching corrections agree with ion chamber measurements within 5%. PSDs are good dosimeters for proton beam measurement if the quenching effect is corrected appropriately. PMID:23128412

  17. Determination of the quenching correction factors for plastic scintillation detectors in therapeutic high-energy proton beams

    NASA Astrophysics Data System (ADS)

    Wang, L. L. W.; Perles, L. A.; Archambault, L.; Sahoo, N.; Mirkovic, D.; Beddar, S.

    2012-12-01

    Plastic scintillation detectors (PSDs) have many advantages over other detectors in small field dosimetry due to their high spatial resolution, excellent water equivalence and instantaneous readout. However, in proton beams, the PSDs undergo a quenching effect which makes the signal level reduced significantly when the detector is close to the Bragg peak where the linear energy transfer (LET) for protons is very high. This study measures the quenching correction factor (QCF) for a PSD in clinical passive-scattering proton beams and investigates the feasibility of using PSDs in depth-dose measurements in proton beams. A polystyrene-based PSD (BCF-12, ϕ0.5 mm × 4 mm) was used to measure the depth-dose curves in a water phantom for monoenergetic unmodulated proton beams of nominal energies 100, 180 and 250 MeV. A Markus plane-parallel ion chamber was also used to get the dose distributions for the same proton beams. From these results, the QCF as a function of depth was derived for these proton beams. Next, the LET depth distributions for these proton beams were calculated by using the MCNPX Monte Carlo code, based on the experimentally validated nozzle models for these passive-scattering proton beams. Then the relationship between the QCF and the proton LET could be derived as an empirical formula. Finally, the obtained empirical formula was applied to the PSD measurements to get the corrected depth-dose curves and they were compared to the ion chamber measurements. A linear relationship between the QCF and LET, i.e. Birks' formula, was obtained for the proton beams studied. The result is in agreement with the literature. The PSD measurements after the quenching corrections agree with ion chamber measurements within 5%. PSDs are good dosimeters for proton beam measurement if the quenching effect is corrected appropriately.
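
    A minimal sketch of the Birks-type correction described in these two records is given below: the measured scintillator signal at each depth is rescaled by a QCF that grows linearly with the Monte Carlo LET at that depth. The Birks constant and the profiles are illustrative placeholders, not the fitted values.

    ```python
    def quenching_correction_factor(let_kev_per_um, k_birks=0.02):
        """Linear Birks-type QCF: corrected dose = measured signal * (1 + kB * LET)."""
        return 1.0 + k_birks * let_kev_per_um

    def corrected_depth_dose(signal_vs_depth, let_vs_depth, k_birks=0.02):
        return [s * quenching_correction_factor(l, k_birks)
                for s, l in zip(signal_vs_depth, let_vs_depth)]

    # Near the Bragg peak the LET, and hence the correction, is largest.
    signal = [1.00, 1.10, 1.60, 2.20]   # relative PSD readings versus depth (illustrative)
    let = [0.6, 0.9, 2.5, 8.0]          # proton LET in keV/µm (illustrative)
    print(corrected_depth_dose(signal, let))
    ```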

  18. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: Permeability and diffusivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; Zhang, Lei; Kang, Qinjun

    Here, porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from the Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in an apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. We find that, for the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport in the reconstructed shales. Specifically, most of the values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed.

  19. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: Permeability and diffusivity

    DOE PAGES

    Chen, Li; Zhang, Lei; Kang, Qinjun; ...

    2015-01-28

    Here, porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from the Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in an apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. We find that, for the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport in the reconstructed shales. Specifically, most of the values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed.

  20. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: permeability and diffusivity

    PubMed Central

    Chen, Li; Zhang, Lei; Kang, Qinjun; Viswanathan, Hari S.; Yao, Jun; Tao, Wenquan

    2015-01-01

    Porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from the Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in an apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. For the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport in the reconstructed shales. Specifically, most of the values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed. PMID:25627247
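    A generic sketch of the kind of Knudsen-number-based permeability correction factor discussed above is given below. It uses a Beskok-Karniadakis-style slip expression with assumed constants, not the LBM/DGM-derived factor reported in the paper, and the methane molecular diameter is likewise an assumed value.

```python
import numpy as np

def knudsen_number(pressure_pa, pore_radius_m, temperature_k=350.0):
    """Kn = (gas mean free path) / (characteristic pore size), using the
    kinetic-theory mean free path; the CH4 molecular diameter is an assumption."""
    k_b = 1.380649e-23      # Boltzmann constant, J/K
    d = 0.38e-9             # effective molecular diameter of methane, m (assumed)
    mean_free_path = k_b * temperature_k / (np.sqrt(2.0) * np.pi * d**2 * pressure_pa)
    return mean_free_path / pore_radius_m

def apparent_permeability_factor(kn, alpha=1.2, slip_b=-1.0):
    """Correction factor f(Kn) >= 1 such that k_apparent = f(Kn) * k_intrinsic.
    Functional form follows the widely used Beskok-Karniadakis expression,
    (1 + alpha*Kn) * (1 + 4*Kn / (1 - b*Kn)); alpha and b are illustrative."""
    return (1.0 + alpha * kn) * (1.0 + 4.0 * kn / (1.0 - slip_b * kn))

for p_pa in (1e6, 5e6, 2e7):
    kn = knudsen_number(p_pa, pore_radius_m=10e-9)
    print(f"p = {p_pa:.0e} Pa  Kn = {kn:.3f}  f(Kn) = {apparent_permeability_factor(kn):.2f}")
```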

  1. Non performing loans (NPLs) in a crisis economy: Long-run equilibrium analysis with a real time VEC model for Greece (2001-2015)

    NASA Astrophysics Data System (ADS)

    Konstantakis, Konstantinos N.; Michaelides, Panayotis G.; Vouldis, Angelos T.

    2016-06-01

    As a result of domestic and international factors, the Greek economy faced a severe crisis which is directly comparable only to the Great Recession. A prominent victim of this situation was the country's banking system. This paper attempts to shed light on the determining factors of non-performing loans in the Greek banking sector. The analysis presents empirical evidence from the Greek economy, using aggregate data on a quarterly basis over the period 2001-2015, fully capturing the recent recession. In this work, we use a relevant econometric framework based on a real-time Vector Autoregressive (VAR)-Vector Error Correction (VEC) model, which captures the dynamic interdependencies among the variables used. Consistent with international evidence, the empirical findings show that both macroeconomic and financial factors have a significant impact on non-performing loans in the country. Meanwhile, the deteriorating credit quality feeds back into the economy, leading to a self-reinforcing negative loop.
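    As a rough illustration of fitting a VAR-VEC model of the kind described above, the following Python sketch uses statsmodels on synthetic quarterly series. The variable names, lag order and deterministic terms are assumptions for illustration, not the authors' specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Synthetic quarterly series standing in for the macro-financial data set;
# column names and lag order are illustrative choices only.
rng = np.random.default_rng(0)
data = pd.DataFrame(
    rng.normal(size=(60, 4)).cumsum(axis=0),
    columns=["npl_ratio", "gdp_growth", "unemployment", "lending_rate"],
)

# Johansen-type rank selection, then a VECM with an intercept in the cointegration relation
rank = select_coint_rank(data, det_order=0, k_ar_diff=2, method="trace", signif=0.05)
model = VECM(data, k_ar_diff=2, coint_rank=max(rank.rank, 1), deterministic="ci")
results = model.fit()
print(results.summary())
```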

  2. Analysis of the eight-year trend in ozone depletion from empirical models of solar backscattered ultraviolet instrument degradation

    NASA Technical Reports Server (NTRS)

    Herman, J. R.; Hudson, R. D.; Serafino, G.

    1990-01-01

    Arguments are presented showing that the basic empirical model of the solar backscatter UV (SBUV) instrument degradation used by Cebula et al. (1988) in their analysis of the SBUV data is likely to lead to an incorrect estimate of the ozone trend. A correction factor is given as a function of time and altitude that brings the SBUV data into approximate agreement with the SAGE, SME, and Dobson network ozone trends. It is suggested that the currently archived SBUV ozone data should be used with caution for periods of analysis exceeding 1 yr, since it is likely that the yearly decreases contained in the archived data are too large.

  3. Prediction of light aircraft interior noise

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.; Morales, D. A.

    1976-01-01

    At the present time, predictions of aircraft interior noise depend heavily on empirical correction factors derived from previous flight measurements. However, to design for acceptable interior noise levels and to optimize acoustic treatments, analytical techniques which do not depend on empirical data are needed. This paper describes a computerized interior noise prediction method for light aircraft. An existing analytical program (developed for commercial jets by Cockburn and Jolly in 1968) forms the basis of some modal analysis work which is described. The accuracy of this modal analysis technique for predicting low-frequency coupled acoustic-structural natural frequencies is discussed along with trends indicating the effects of varying parameters such as fuselage length and diameter, structural stiffness, and interior acoustic absorption.

  4. Rapid correction of electron microprobe data for multicomponent metallic systems

    NASA Technical Reports Server (NTRS)

    Gupta, K. P.; Sivakumar, R.

    1973-01-01

    This paper describes an empirical relation for the correction of electron microprobe data for multicomponent metallic systems. It evaluates the empirical correction parameter, a, for each element in a binary alloy system using a modification of Colby's MAGIC III computer program and outlines a simple and quick way of correcting the probe data. This technique has been tested on a number of multicomponent metallic systems, and the agreement with results obtained using theoretical expressions is found to be excellent. Limitations and suitability of this relation are discussed, and a model calculation is presented in the Appendix.

  5. Empirical effective temperatures and bolometric corrections for early-type stars

    NASA Technical Reports Server (NTRS)

    Code, A. D.; Bless, R. C.; Davis, J.; Brown, R. H.

    1976-01-01

    An empirical effective temperature for a star can be found by measuring its apparent angular diameter and absolute flux distribution. The angular diameters of 32 bright stars in the spectral range O5f to F8 have recently been measured with the stellar interferometer at Narrabri Observatory, and their absolute flux distributions have been found by combining observations of ultraviolet flux from the Orbiting Astronomical Observatory (OAO-2) with ground-based photometry. In this paper, these data have been combined to derive empirical effective temperatures and bolometric corrections for these 32 stars.

  6. CALIOP Version 3 Data Products: A Comparison to Version 2

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark; Omar, Ali; Hunt, Bill; Getzewich, Brian; Tackett, Jason; Powell, Kathy; Avery, Melody; Kuehn, Ralph; Young, Stuart; Hu, Yong

    2010-01-01

    After launch we discovered that the CALIOP daytime measurements were subject to thermally induced beam drift, and this caused the calibration to vary by as much as 30% during the course of a single daytime orbit segment. Using an algorithm developed by Powell et al. (2010), empirically derived correction factors are now computed in near real time as a function of orbit elapsed time, and these are used to compensate for the beam wandering effects.
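    A minimal sketch of such a time-dependent empirical correction is given below, assuming the drift can be represented by a smooth fit of calibration ratio versus orbit elapsed time. All sample values are hypothetical and the polynomial fit is an illustrative stand-in for the actual Powell et al. (2010) algorithm.

```python
import numpy as np

# Hypothetical calibration-ratio samples along one daytime orbit segment
elapsed_s = np.array([0, 600, 1200, 1800, 2400, 3000])      # s since segment start
cal_ratio = np.array([1.00, 1.07, 1.18, 1.26, 1.22, 1.10])  # drift relative to nominal (made up)

# Smooth empirical correction factor as a function of orbit elapsed time
correction = np.poly1d(np.polyfit(elapsed_s, cal_ratio, deg=3))

def correct_signal(signal, t_elapsed_s):
    """Remove the time-dependent calibration drift from a measured profile."""
    return signal / correction(t_elapsed_s)

print(correct_signal(np.array([0.8, 0.9]), np.array([900.0, 2100.0])))
```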

  7. Subject-Specific Correctness of Students' Conceptions and Factors of Influence: Empirical Findings from a Quantitative Study with Grade 7 Students in Germany Regarding the Formation and Location of Deserts

    ERIC Educational Resources Information Center

    Schubert, Jan Christoph; Wrenger, Katja

    2016-01-01

    Students' conceptions are a central learning condition. Until now there have only been qualitative results regarding the important geographical area of the desert, especially its location and formation. Therefore this study surveys students' conceptions (N = 585; n = 448 without pre-instruction on deserts and n = 137 with pre-instruction on…

  8. Characterization of HPGe gamma spectrometric detectors systems for Instrumental Neutron Activation Analysis (INAA) at the Colombian Geological Survey

    NASA Astrophysics Data System (ADS)

    Sierra, O.; Parrado, G.; Cañón, Y.; Porras, A.; Alonso, D.; Herrera, D. C.; Peña, M.; Orozco, J.

    2016-07-01

    This paper presents the progress made by the Neutron Activation Analysis (NAA) laboratory at the Colombian Geological Survey (SGC in its Spanish acronym) towards the characterization of its gamma spectrometric systems for Instrumental Neutron Activation Analysis (INAA), with the aim of introducing corrections to the measurements for variations in sample geometry. Characterization includes the empirical determination of the interaction point of gamma radiation inside the germanium crystal, through the application of a linear model, and the use of fast Monte Carlo N-Particle (MCNP) software to estimate correction factors for differences in counting efficiency that arise from variations in sample density between samples and standards.

  9. More sound of church bells: Authors' correction

    NASA Astrophysics Data System (ADS)

    Vogt, Patrik; Kasper, Lutz; Burde, Jan-Philipp

    2016-01-01

    In the recently published article "The Sound of Church Bells: Tracking Down the Secret of a Traditional Arts and Crafts Trade," the bell frequencies have been erroneously oversimplified. The problem affects Eqs. (2) and (3), which were derived from the elementary "coffee mug model" and in which we used the speed of sound in air. However, this does not make sense from a physical point of view, since air only acts as a sound carrier, not as a sound source in the case of bells. Due to the excellent fit of the theoretical model with the empirical data, we unfortunately failed to notice this error before publication. However, all other equations, e.g., the introduction of the correction factor in Eq. (4) and the estimation of the mass in Eqs. (5) and (6) are not affected by this error, since they represent empirical models. However, it is unfortunate to introduce the speed of sound in air as a constant in Eqs. (4) and (6). Instead, we suggest the following simple rule of thumb for relating the radius of a church bell R to its humming frequency fhum:

  10. An empirical determination of the effects of sea state bias on Seasat altimetry

    NASA Technical Reports Server (NTRS)

    Born, G. H.; Richards, M. A.; Rosborough, G. W.

    1982-01-01

    A linear empirical model has been developed for the correction of sea state bias effects in Seasat altimetry altitude measurements that are due to (1) electromagnetic bias, caused by the fact that ocean wave troughs reflect the altimeter signal more strongly than the crests, shifting the apparent mean sea level toward the wave troughs, and (2) an independent instrument-related bias resulting from the inability of height corrections applied in the ground processor to compensate for simplifying assumptions made for the processor aboard Seasat. After applying appropriate corrections to the altimetry data, an empirical model for the sea state bias is obtained by differencing significant wave height and height measurements from coincident ground tracks. Height differences are minimized by solving for the coefficient of a linear relationship between height differences and wave height differences. In more than 50% of the 36 cases examined, 7% of the value of significant wave height should be subtracted as the sea state bias correction.

  11. Quantification of γ-aminobutyric acid (GABA) in 1H MRS volumes composed heterogeneously of grey and white matter.

    PubMed

    Mikkelsen, Mark; Singh, Krish D; Brealy, Jennifer A; Linden, David E J; Evans, C John

    2016-11-01

    The quantification of γ-aminobutyric acid (GABA) concentration using localised MRS suffers from partial volume effects related to differences in the intrinsic concentration of GABA in grey (GM) and white (WM) matter. These differences can be represented as a ratio between intrinsic GABA in GM and WM: rM. Individual differences in GM tissue volume can therefore potentially drive apparent concentration differences. Here, a quantification method that corrects for these effects is formulated and empirically validated. Quantification using tissue water as an internal concentration reference has been described previously. Partial volume effects attributed to rM can be accounted for by incorporating into this established method an additional multiplicative correction factor based on measured or literature values of rM weighted by the proportion of GM and WM within tissue-segmented MRS volumes. Simulations were performed to test the sensitivity of this correction using different assumptions of rM taken from previous studies. The tissue correction method was then validated by applying it to an independent dataset of in vivo GABA measurements using an empirically measured value of rM. It was shown that incorrect assumptions of rM can lead to overcorrection and inflation of GABA concentration measurements quantified in volumes composed predominantly of WM. For the independent dataset, GABA concentration was linearly related to GM tissue volume when only the water signal was corrected for partial volume effects. Performing a full correction that additionally accounts for partial volume effects ascribed to rM successfully removed this dependence. With an appropriate assumption of the ratio of intrinsic GABA concentration in GM and WM, GABA measurements can be corrected for partial volume effects, potentially leading to a reduction in between-participant variance, increased power in statistical tests and better discriminability of true effects. Copyright © 2016 John Wiley & Sons, Ltd.
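    One plausible implementation of the rM-weighted multiplicative correction described above is sketched below. The exact weighting used by the authors may differ; the rM value and the GM-referenced convention are assumptions for illustration.

```python
def gaba_tissue_correction(c_measured, f_gm, f_wm, r_m=2.0):
    """Multiplicative partial-volume correction of the kind described in the abstract.

    c_measured : water-referenced GABA estimate for the voxel
    f_gm, f_wm : grey- and white-matter fractions of the tissue in the voxel
    r_m        : assumed GM/WM ratio of intrinsic GABA (literature-style value)

    Returns a GM-referenced concentration; this is only one plausible way to
    apply an r_M-based factor, not necessarily the authors' exact formula."""
    return c_measured * (f_gm + f_wm) / (f_gm + f_wm / r_m)

# Two voxels with the same intrinsic GM GABA (1.25 a.u.) but different GM content
print(gaba_tissue_correction(1.0625, f_gm=0.7, f_wm=0.3))  # -> 1.25
print(gaba_tissue_correction(0.8750, f_gm=0.4, f_wm=0.6))  # -> 1.25
```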

  12. Radiological responses of different types of Egyptian Mediterranean coastal sediments

    NASA Astrophysics Data System (ADS)

    El-Gamal, A.; Rashad, M.; Ghatass, Z.

    2010-08-01

    The aim of this study was to identify gamma self-absorption correction factors for different types of Egyptian Mediterranean coastal sediments. Self-absorption corrections based on direct transmission through different thicknesses of the most dominant sediment species have been tested against point sources with gamma-ray energies of 241Am, 137Cs and 60Co, with 2% uncertainties. Black sand samples from the Rashid branch of the Nile River quantitatively absorbed the low-energy gamma rays of 241Am through a thickness of 5 cm. In decreasing order of gamma-energy self-absorption for 241Am, the samples under investigation ranked: black sand, Matrouh sand, Sidi Gaber sand, shells, Salloum sand, and clay. Empirical self-absorption correction formulas were also deduced. Chemical analyses such as pH, CaCO3, total dissolved solids, Ca2+, Mg2+, CO3(2-), HCO3(-) and total Fe2+ have been carried out for the sediments. The relationships between the self-absorption corrections and the other chemical parameters of the sediments were also examined.
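    A minimal sketch of a transmission-based self-absorption correction of the general type described above is given below; it assumes a uniform slab geometry and a far detector, which is a simplification of the actual measurement setup, and the transmission value is hypothetical.

```python
import numpy as np

def linear_attenuation_from_transmission(transmission, thickness_cm):
    """mu (1/cm) from a point-source transmission measurement T = I/I0 = exp(-mu*t)."""
    return -np.log(transmission) / thickness_cm

def self_absorption_correction(mu, thickness_cm):
    """Correction factor for a uniform slab sample counted face-on:
    ratio of unattenuated to attenuated count rate, C = mu*t / (1 - exp(-mu*t)).
    Geometry details (solid angle, layered samples) are deliberately simplified."""
    x = mu * thickness_cm
    return x / (1.0 - np.exp(-x))

mu = linear_attenuation_from_transmission(transmission=0.62, thickness_cm=3.0)
print(self_absorption_correction(mu, thickness_cm=3.0))
```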

  13. PTW-diamond detector: dose rate and particle type dependence.

    PubMed

    Fidanzio, A; Azario, L; Miceli, R; Russo, A; Piermattei, A

    2000-11-01

    In this paper the suitability of a PTW natural diamond detector (DD) for relative and reference dosimetry of photon and electron beams, with dose per pulse between 0.068 mGy and 0.472 mGy, was studied and the results were compared with those obtained by a stereotactic silicon detector (SFD). The results show that, over the range of dose per pulse examined, the DD sensitivity changes by up to 1.8% while the SFD sensitivity changes by up to 4.5%. The fitting parameter, delta, used to correct the dose per pulse dependence of solid state detectors, was delta = 0.993 +/- 0.002 for the diamond detector and delta = 1.025 +/- 0.002 for the silicon diode. The delta values were found to be independent of particle type for two conventional beams (a 10 MV x-ray beam and a 21 MeV electron beam). So if delta is determined for one radiotherapy beam, it can be used to correct relative dosimetry for other conventional radiotherapy beams. Moreover, the diamond detector shows a calibration factor which is independent of beam quality and particle type, so an empirical dosimetric formalism is proposed here to obtain the reference dosimetry. This formalism is based on a dose-to-water calibration factor and on an empirical coefficient that takes into account the dependence of the reading on the dose per pulse.

  14. [Determination of ventricular volumes by a non-geometric method using gamma-cineangiography].

    PubMed

    Faivre, R; Cardot, J C; Baud, M; Verdenet, J; Berthout, P; Bidet, A C; Bassand, J P; Maurat, J P

    1985-08-01

    The authors suggest a new way of determining ventricular volume by a non-geometric method using gamma-cineangiography. The results obtained by this method were compared with those obtained by a geometric method and by contrast ventriculography in 94 patients. The new non-geometric method assumes that the radioactive tracer is evenly distributed in the cardiovascular system so that blood radioactivity levels can be measured. The ventricular volume is then equal to the ratio of the radioactivity in the LV zone to that of 1 ml of blood. Comparison of the radionuclide and angiographic data in the first 60 patients showed a systematic underestimation of values, despite a satisfactory statistical correlation (r = 0.87, y = 0.30 X + 6.3). This underestimation is due to the phenomenon of attenuation related to the depth of the heart in the thoracic cage and to autoabsorption at the source, the degree of which depends on the ventricular volume. An empirical method of calculation allows correction for these factors by taking into account absorption in the tissues, by relating to body surface area, and autoabsorption at the source, by correcting for the surface of the isotopic ventricular projection expressed in pixels. Using the data of this empirical method, the correction formula for radionuclide ventricular volume is obtained by a multiple linear regression: corrected radionuclide volume = K X measured radionuclide volume (Formula: see text). This formula was applied in the following 34 patients. The correlation between the radionuclide and angiographic volumes improved after correction (r = 0.94 vs r = 0.65 uncorrected) and the values were more accurate (y = 0.96 X + 1.5 vs y = 0.18 X + 26 uncorrected).(ABSTRACT TRUNCATED AT 250 WORDS)

  15. The "Residential" Effect Fallacy in Neighborhood and Health Studies: Formal Definition, Empirical Identification, and Correction.

    PubMed

    Chaix, Basile; Duncan, Dustin; Vallée, Julie; Vernez-Moudon, Anne; Benmarhnia, Tarik; Kestens, Yan

    2017-11-01

    Because of confounding from the urban/rural and socioeconomic organizations of territories, and the resulting correlation between residential and nonresidential exposures, classically estimated residential neighborhood-outcome associations capture nonresidential environment effects and thus overestimate residential intervention effects. Our study diagnosed and corrected this "residential" effect fallacy bias, applicable to a large fraction of neighborhood and health studies. Our empirical application investigated the effect that hypothetical interventions raising the residential number of services would have on the probability that a trip is walked. Using global positioning systems tracking and mobility surveys over 7 days (227 participants and 7440 trips), we employed a multilevel linear probability model to estimate the trip-level association between the residential number of services and walking, yielding a naïve intervention effect estimate, and a corrected model accounting for the numbers of services at the residence, trip origin, and trip destination, yielding a corrected intervention effect estimate (the true effect conditional on assumptions). There was a strong correlation in service densities between the residential neighborhood and nonresidential places. From the naïve model, hypothetical interventions raising the residential number of services to 200, 500, and 1000 were associated with increases of 0.020, 0.055, and 0.109 in the probability of walking in the intervention groups. Corrected estimates were 0.007, 0.019, and 0.039. Thus, naïve estimates were overestimated by multiplicative factors of 3.0, 2.9, and 2.8. Commonly estimated residential intervention-outcome associations substantially overestimate true effects. Our somewhat paradoxical conclusion is that to estimate residential effects, investigators critically need information on nonresidential places visited.
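    A minimal sketch of the naïve-versus-corrected comparison described above, using a multilevel linear probability model, is shown below. The synthetic data, variable names and random-intercept specification are illustrative assumptions, not the authors' data or exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic trip-level data standing in for the GPS/mobility survey (illustrative only).
rng = np.random.default_rng(1)
n_people, n_trips = 50, 30
pid = np.repeat(np.arange(n_people), n_trips)
services_res = np.repeat(rng.uniform(0, 1000, n_people), n_trips)
services_dest = 0.6 * services_res + rng.uniform(0, 400, pid.size)   # correlated exposures
walked = ((0.0002 * services_dest + rng.normal(0, 0.3, pid.size)) > 0.25).astype(float)
trips = pd.DataFrame({"pid": pid, "walked": walked,
                      "services_res": services_res, "services_dest": services_dest})

# Naive model: residential exposure only (absorbs nonresidential effects)
naive = smf.mixedlm("walked ~ services_res", trips, groups=trips["pid"]).fit()
# Corrected model: additionally conditions on exposure at the trip destination
corrected = smf.mixedlm("walked ~ services_res + services_dest", trips, groups=trips["pid"]).fit()
print(naive.params["services_res"], corrected.params["services_res"])
```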

  16. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    NASA Astrophysics Data System (ADS)

    Howard, J. E.

    2014-12-01

    This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground-truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind, as developed by Mutschlecner and Whitaker (1999), and uses the average wind speed between 45-55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested here with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where these wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.
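    A small sketch of a stratospheric-wind amplitude correction in the spirit of the first method above is given below; the coefficient k is an assumed, literature-style constant rather than one derived from this data set.

```python
def wind_corrected_amplitude(amplitude, v_stratospheric_mps, k=0.018):
    """Empirical stratospheric-wind normalization in the spirit of
    Mutschlecner and Whitaker (1999): amplitudes are scaled by 10**(-k * Vs),
    where Vs is the mean 45-55 km wind component (m/s) along the propagation
    direction.  The value of k here is an illustrative assumption."""
    return amplitude * 10.0 ** (-k * v_stratospheric_mps)

# Downwind propagation (positive Vs) is ducted and boosted, so the correction
# pulls the amplitude back toward a no-wind reference; upwind does the opposite.
print(wind_corrected_amplitude(0.5, v_stratospheric_mps=25.0))
print(wind_corrected_amplitude(0.5, v_stratospheric_mps=-10.0))
```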

  17. Development of a Geomagnetic Storm Correction to the International Reference Ionosphere E-Region Electron Densities Using TIMED/SABER Observations

    NASA Technical Reports Server (NTRS)

    Mertens, C. J.; Xu, X.; Fernandez, J. R.; Bilitza, D.; Russell, J. M., III; Mlynczak, M. G.

    2009-01-01

    Auroral infrared emission observed from the TIMED/SABER broadband 4.3 micron channel is used to develop an empirical geomagnetic storm correction to the International Reference Ionosphere (IRI) E-region electron densities. The observation-based proxy used to develop the storm model is SABER-derived NO+(v) 4.3 micron volume emission rates (VER). A correction factor is defined as the ratio of storm-time NO+(v) 4.3 micron VER to a quiet-time climatological averaged NO+(v) 4.3 micron VER, which is linearly fit to available geomagnetic activity indices. The initial version of the E-region storm model, called STORM-E, is most applicable within the auroral oval region. The STORM-E predictions of E-region electron densities are compared to incoherent scatter radar electron density measurements during the Halloween 2003 storm events. Future STORM-E updates will extend the model outside the auroral oval.
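    The correction factor described above is a VER ratio fitted linearly to geomagnetic activity. A minimal sketch of that idea is given below; the choice of index, the sample values and the direct scaling of electron density by the factor are all simplifying assumptions relative to the actual STORM-E formulation.

```python
import numpy as np

def fit_storm_correction(ver_storm, ver_quiet, ap_index):
    """Least-squares linear fit of the VER ratio (storm / quiet-time climatology)
    against a geomagnetic activity index, in the spirit of STORM-E."""
    ratio = np.asarray(ver_storm) / np.asarray(ver_quiet)
    slope, intercept = np.polyfit(np.asarray(ap_index, dtype=float), ratio, deg=1)
    return slope, intercept

def storm_corrected_density(ne_quiet, ap_index, slope, intercept):
    """Scale quiet-time E-region electron densities by the fitted factor.
    Applying the VER-based factor directly to Ne is itself a simplification."""
    return ne_quiet * (intercept + slope * ap_index)

slope, intercept = fit_storm_correction([1.0, 1.4, 2.1, 3.0], [1.0] * 4, [10, 50, 120, 200])
print(storm_corrected_density(1.2e5, 150, slope, intercept))   # hypothetical Ne in cm^-3
```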

  18. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  19. Characterization of HPGe gamma spectrometric detectors systems for Instrumental Neutron Activation Analysis (INAA) at the Colombian Geological Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra, O., E-mail: osierra@sgc.gov.co; Parrado, G., E-mail: gparrado@sgc.gov.co; Cañón, Y.

    This paper presents the progress made by the Neutron Activation Analysis (NAA) laboratory at the Colombian Geological Survey (SGC in its Spanish acronym) towards the characterization of its gamma spectrometric systems for Instrumental Neutron Activation Analysis (INAA), with the aim of introducing corrections to the measurements for variations in sample geometry. Characterization includes the empirical determination of the interaction point of gamma radiation inside the germanium crystal, through the application of a linear model, and the use of fast Monte Carlo N-Particle (MCNP) software to estimate correction factors for differences in counting efficiency that arise from variations in sample density between samples and standards.

  20. The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate

    ERIC Educational Resources Information Center

    Polio, Charlene

    2012-01-01

    The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected "given the nature of the correction process and "the nature of language learning" (p. 328, emphasis…

  1. Characterization of Artifacts Introduced by the Empirical Volcano-Scan Atmospheric Correction Commonly Applied to CRISM and OMEGA Near-Infrared Spectra

    NASA Technical Reports Server (NTRS)

    Wiseman, S.M.; Arvidson, R.E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.

    2014-01-01

    The empirical volcano-scan atmospheric correction is widely applied to Martian near infrared CRISM and OMEGA spectra between 1000 and 2600 nanometers to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the Martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nanometers, is caused by the inaccurate assumption that absorption coefficients of CO2 in the Martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.

  2. Characterization of artifacts introduced by the empirical volcano-scan atmospheric correction commonly applied to CRISM and OMEGA near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Wiseman, S. M.; Arvidson, R. E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.; McGuire, P. C.

    2016-05-01

    The empirical 'volcano-scan' atmospheric correction is widely applied to martian near infrared CRISM and OMEGA spectra between ∼1000 and ∼2600 nm to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano-scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nm, is caused by the inaccurate assumption that absorption coefficients of CO2 in the martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.

  3. Intelligence in Bali--A Case Study on Estimating Mean IQ for a Population Using Various Corrections Based on Theory and Empirical Findings

    ERIC Educational Resources Information Center

    Rindermann, Heiner; te Nijenhuis, Jan

    2012-01-01

    A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…

  4. Using Mason number to predict MR damper performance from limited test data

    NASA Astrophysics Data System (ADS)

    Becnel, Andrew C.; Wereley, Norman M.

    2017-05-01

    The Mason number can be used to produce a single master curve which relates MR fluid stress versus strain rate behavior across a wide range of shear rates, temperatures, and applied magnetic fields. As applications of MR fluid energy absorbers expand to a variety of industries and operating environments, Mason number analysis offers a path to designing devices with desired performance from a minimal set of preliminary test data. Temperature strongly affects the off-state viscosity of the fluid, as the passive viscous force drops considerably at higher temperatures. Yield stress is not similarly affected, and stays relatively constant with changing temperature. In this study, a small model-scale MR fluid rotary energy absorber is used to measure the temperature correction factor of a commercially-available MR fluid from LORD Corporation. This temperature correction factor is identified from shear stress vs. shear rate data collected at four different temperatures. Measurements of the MR fluid yield stress are also obtained and related to a standard empirical formula. From these two MR fluid properties - temperature-dependent viscosity and yield stress - the temperature-corrected Mason number is shown to predict the force vs. velocity performance of a full-scale rotary MR fluid energy absorber. This analysis technique expands the design space of MR devices to high shear rates and allows for comprehensive predictions of overall performance across a wide range of operating conditions from knowledge only of the yield stress vs. applied magnetic field and a temperature-dependent viscosity correction factor.
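    A heavily simplified sketch of the two quantities discussed above, a temperature-corrected viscosity and a Mason number, is given below. Prefactor conventions for the Mason number vary between authors, and the temperature coefficient is an assumed placeholder, so this is only a representative form rather than the paper's measured correction factor.

```python
import math

MU_0 = 4.0e-7 * math.pi   # vacuum permeability, T*m/A

def temperature_corrected_viscosity(eta_ref_pa_s, temp_c, temp_ref_c=25.0, k_per_c=0.02):
    """Exponential temperature correction of the off-state (carrier fluid) viscosity.
    The coefficient k_per_c stands in for the empirically measured factor."""
    return eta_ref_pa_s * math.exp(-k_per_c * (temp_c - temp_ref_c))

def mason_number(shear_rate_per_s, viscosity_pa_s, h_field_a_per_m, chi=1.0):
    """Ratio of viscous to field-induced (magnetic) stresses, in one of several
    conventions found in the literature: Mn ~ eta * gamma_dot / (mu0 * chi**2 * H**2)."""
    return viscosity_pa_s * shear_rate_per_s / (MU_0 * chi**2 * h_field_a_per_m**2)

# Two operating points that differ in temperature but share the same Mason number,
# illustrating how data collapse onto a single master curve.
eta_hot = temperature_corrected_viscosity(0.3, temp_c=70.0)
print(mason_number(1.0e3, 0.3, 2.0e5))
print(mason_number(1.0e3 * 0.3 / eta_hot, eta_hot, 2.0e5))
```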

  5. Nonlinear Local Bending Response and Bulging Factors for Longitudinal and Circumferential Cracks in Pressurized Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Young, Richard D.; Rose, Cheryl A.; Starnes, James H., Jr.

    2000-01-01

    Results of a geometrically nonlinear finite element parametric study to determine curvature correction factors or bulging factors that account for increased stresses due to curvature for longitudinal and circumferential cracks in unstiffened pressurized cylindrical shells are presented. Geometric parameters varied in the study include the shell radius, the shell wall thickness, and the crack length. The major results are presented in the form of contour plots of the bulging factor as a function of two nondimensional parameters: the shell curvature parameter, lambda, which is a function of the shell geometry, Poisson's ratio, and the crack length; and a loading parameter, eta, which is a function of the shell geometry, material properties, and the applied internal pressure. These plots identify the ranges of the shell curvature and loading parameters for which the effects of geometric nonlinearity are significant. Simple empirical expressions for the bulging factor are then derived from the numerical results and shown to predict accurately the nonlinear response of shells with longitudinal and circumferential cracks. The numerical results are also compared with analytical solutions based on linear shallow shell theory for thin shells, and with some other semi-empirical solutions from the literature, and limitations on the use of these other expressions are suggested.

  6. Statistical analysis of geomagnetic field intensity differences between ASM and VFM instruments onboard Swarm constellation

    NASA Astrophysics Data System (ADS)

    De Michelis, Paola; Tozzi, Roberta; Consolini, Giuseppe

    2017-02-01

    From the very first measurements made by the magnetometers onboard the Swarm satellites, launched by the European Space Agency (ESA) in late 2013, a discrepancy emerged between scalar and vector measurements. An accurate analysis of this phenomenon led to an empirical model of the disturbance, which is highly correlated with the Sun incidence angle, and to a corresponding correction of the vector data. The empirical model adopted by ESA results in a significant decrease in the amplitude of the disturbance affecting VFM measurements, thus greatly improving the vector magnetic data quality. This study is focused on the characterization of the difference between the magnetic field intensity measured by the absolute scalar magnetometer (ASM) and that reconstructed using the vector field magnetometer (VFM) installed on the Swarm constellation. Applying the empirical mode decomposition method, we find the intrinsic mode functions (IMFs) associated with ASM-VFM total intensity differences obtained with data both uncorrected and corrected for the disturbance correlated with the Sun incidence angle. Surprisingly, no differences are found in the nature of the IMFs embedded in the analyzed signals, these IMFs being characterized by the same dominant periodicities before and after correction. The effect of the correction manifests as a decrease in the energy associated with some of the IMFs contributing to the corrected data. Some IMFs identified by analyzing the ASM-VFM intensity discrepancy are characterized by the same dominant periodicities as those obtained by analyzing the temperature fluctuations of the VFM electronic unit. Thus, the disturbance correlated with the Sun incidence angle could still be present in the corrected magnetic data. Furthermore, the ASM-VFM total intensity difference and the VFM electronic unit temperature display maximal shared information with a time delay that depends on local time. Taken together, these findings may help to relate the features of the observed VFM-ASM total intensity difference to the physical characteristics of the real disturbance, thus contributing to improving the empirical model proposed for the correction of the data.

  7. Application of a net-based baseline correction scheme to strong-motion records of the 2011 Mw 9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Wang, Rongjiang; Zhang, Yong; Walter, Thomas R.

    2014-06-01

    The description of static displacements associated with earthquakes is traditionally achieved using GPS, EDM or InSAR data. In addition, displacement histories can be derived from strong-motion records, allowing an improvement of geodetic networks at a high sampling rate and a better physical understanding of earthquake processes. Strong-motion records require a correction procedure appropriate for baseline shifts that may be caused by rotational motion, tilting and other instrumental effects. Common methods use an empirical bilinear correction on the velocity seismograms integrated from the strong-motion records. In this study, we overcome the weaknesses of an empirically based bilinear baseline correction scheme by using a net-based criterion to select the timing parameters. This idea is based on the physical principle that low-frequency seismic waveforms at neighbouring stations are coherent if the interstation distance is much smaller than the distance to the seismic source. For a dense strong-motion network, it is plausible to select the timing parameters so that the correlation coefficient between the velocity seismograms of two neighbouring stations is maximized after the baseline correction. We applied this new concept to the KiK-Net and K-Net strong-motion data available for the 2011 Mw 9.0 Tohoku earthquake. We compared the derived coseismic static displacement with high-quality GPS data, and with the results obtained using empirical methods. The results show that the proposed net-based approach is feasible and more robust than the individual empirical approaches. The outliers caused by unknown problems in the measurement system can be easily detected and quantified.

  8. Spectroscopic properties of nuclear skyrme energy density functionals.

    PubMed

    Tarpanov, D; Dobaczewski, J; Toivanen, J; Carlsson, B G

    2014-12-19

    We address the question of how to improve the agreement between theoretical nuclear single-particle energies (SPEs) and observations. Empirically, in doubly magic nuclei, the SPEs can be deduced from spectroscopic properties of odd nuclei that have one more or one less neutron or proton. Theoretically, bare SPEs, before being confronted with observations, must be corrected for the effects of the particle vibration coupling (PVC). In the present work, we determine the PVC corrections in a fully self-consistent way. Then, we adjust the SPEs, with PVC corrections included, to empirical data. In this way, the agreement with observations, on average, improves; nevertheless, large discrepancies still remain. We conclude that the main source of disagreement is still in the underlying mean fields, and not in including or neglecting the PVC corrections.

  9. Comparing State SAT Scores: Problems, Biases, and Corrections.

    ERIC Educational Resources Information Center

    Gohmann, Stephen F.

    1988-01-01

    One method to correct for selection bias in comparing Scholastic Aptitude Test (SAT) scores among states is presented, which is a modification of J. J. Heckman's Selection Bias Correction (1976, 1979). Empirical results suggest that sample selection bias is present in SAT score regressions. (SLD)

  10. Is Directivity Still Effective in a PSHA Framework?

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Herrero, A.; Cultrera, G.

    2008-12-01

    Source rupture parameters, like directivity, modulate the energy release, causing variations in the radiated signal amplitude. They therefore affect the empirical predictive equations and, as a consequence, the seismic hazard assessment. Classical probabilistic hazard evaluations, e.g. Cornell (1968), use very simple predictive equations based only on magnitude and distance, which do not account for variables describing the rupture process. Nowadays, however, a few predictive equations (e.g. Somerville 1997, Spudich and Chiou 2008) take rupture directivity into account, and a few implementations have been made in a PSHA framework (e.g. Convertito et al. 2006, Rowshandel 2006). In practice, these new empirical predictive models incorporate the rupture propagation effects quantitatively through the introduction of variables like rake, azimuth, rupture velocity and laterality. The contribution of all these variables is summarized in corrective factors derived from measuring differences between the real data and the predicted ones. Therefore, it is possible to keep the older computation, making use of a simple predictive model, and to incorporate the directivity effect through the corrective factors. Each supplementary variable implies a new integral over the parametric space; the difficulty lies in constraining the parameter distribution functions. We present preliminary results for ad hoc distributions (Gaussian and uniform) in order to test the impact of incorporating directivity into PSHA models. We demonstrate that incorporating directivity in PSHA by means of the new predictive equations may lead to strong percentage variations in the hazard assessment.

  11. Correcting coils in end magnets of accelerators

    NASA Astrophysics Data System (ADS)

    Kassab, L. R.; Gouffon, P.

    1998-05-01

    We present an empirical investigation of the behavior of the correcting coils used to homogenize the field distribution of the race-track microtron accelerator end magnets. These end magnets belong to the second stage of the 30.0 MeV cw electron accelerator under construction at IFUSP, the race-track microtron booster, in which the beam energy is raised from 1.97 to 5.1 MeV. The correcting coils are attached to the pole faces and are designed from the measured inhomogeneities of the magnetic field. The performance of these coils, when the end magnets are operated with currents that differ by +/-10% from the one used in the mappings that originated the coils' copper leads, is presented. For one of the magnets, conveniently adjusting the current of the correcting coils makes it possible to homogenize field distributions of different intensities, since their shapes are practically identical to those that originated the coils. For the other magnet, the shapes change and the coils are less efficient. This is related to intrinsic factors that determine the inhomogeneities. However, we obtained a uniformity of 0.001% in both cases.

  12. High accuracy measurements of dry mole fractions of carbon dioxide and methane in humid air

    NASA Astrophysics Data System (ADS)

    Rella, C. W.; Chen, H.; Andrews, A. E.; Filges, A.; Gerbig, C.; Hatakka, J.; Karion, A.; Miles, N. L.; Richardson, S. J.; Steinbacher, M.; Sweeney, C.; Wastine, B.; Zellweger, C.

    2012-08-01

    Traditional techniques for measuring the mole fractions of greenhouse gas in the well-mixed atmosphere have required extremely dry sample gas streams (dew point < -25 °C) to achieve the inter-laboratory compatibility goals set forth by the Global Atmospheric Watch program of the World Meteorological Organization (WMO/GAW) for carbon dioxide (±0.1 ppm) and methane (±2 ppb). Drying the sample gas to low levels of water vapor can be expensive, time-consuming, and/or problematic, especially at remote sites where access is difficult. Recent advances in optical measurement techniques, in particular Cavity Ring Down Spectroscopy (CRDS), have led to the development of highly stable and precise greenhouse gas analyzers capable of highly accurate measurements of carbon dioxide, methane, and water vapor. Unlike many older technologies, which can suffer from significant uncorrected interference from water vapor, these instruments permit for the first time accurate and precise greenhouse gas measurements that can meet the WMO/GAW inter-laboratory compatibility goals without drying the sample gas. In this paper, we present laboratory methodology for empirically deriving the water vapor correction factors, and we summarize a series of in-situ validation experiments comparing the measurements in humid gas streams to well-characterized dry-gas measurements. By using the manufacturer-supplied correction factors, the dry-mole fraction measurements have been demonstrated to be well within the GAW compatibility goals up to at least 1% water vapor. By determining the correction factors for individual instruments once at the start of life, this range can be extended to at least 2% over the life of the instrument, and if the correction factors are determined periodically over time, the evidence suggests that this range can be extended above 4%.
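    As a rough illustration of the empirically derived water vapor correction described above, the sketch below converts a wet CO2 reading to a dry mole fraction using a quadratic function of the reported water vapor. The functional form follows the general approach in the abstract, but the coefficient values are only representative of published numbers and must be determined per instrument in practice.

```python
def dry_mole_fraction(x_wet_ppm, h2o_percent, a=-0.0120, b=-2.67e-4):
    """Dry-air mole fraction from a wet CRDS reading via an empirical quadratic
    water correction:

        x_wet / x_dry = 1 + a*H + b*H**2,   H = reported H2O in percent.

    The coefficients a and b here are illustrative, literature-style values;
    instrument-specific values are derived in the laboratory (e.g. by a water
    droplet or humidifier test against a dried reference gas)."""
    return x_wet_ppm / (1.0 + a * h2o_percent + b * h2o_percent**2)

# Example: a 400 ppm wet CO2 reading at 1.5% water vapor
print(dry_mole_fraction(400.0, h2o_percent=1.5))
```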

  13. Is the PTW 60019 microDiamond a suitable candidate for small field reference dosimetry?

    NASA Astrophysics Data System (ADS)

    De Coste, Vanessa; Francescon, Paolo; Marinelli, Marco; Masi, Laura; Paganini, Lucia; Pimpinella, Maria; Prestopino, Giuseppe; Russo, Serenella; Stravato, Antonella; Verona, Claudio; Verona-Rinati, Gianluca

    2017-09-01

    A systematic study of the PTW microDiamond (MD) output factors (OF) is reported, aimed at clarifying its response in small fields and investigating its suitability for small field reference dosimetry. Ten MDs were calibrated under 60Co irradiation. OF measurements were performed in 6 MV photon beams by a CyberKnife M6, a Varian DHX and an Elekta Synergy linac. Two PTW silicon diodes E (Si-D) were used for comparison. The results obtained by the MDs were evaluated in terms of absorbed dose to water determination in reference conditions and OF measurements, and compared to the results reported in the recent literature. To this purpose, the Monte Carlo (MC) beam-quality correction factor, kQMD, was calculated for the MD, and the small field output correction factors, kQclin,Qmsr(fclin,fmsr), were calculated for both the MD and the Si-D by two different research groups. An empirical function was also derived, providing output correction factors within 0.5% of the MC values calculated for all three linacs. A high reproducibility of the dosimetric properties was observed among the ten MDs. The experimental kQMD values are in agreement within 1% with the MC calculated ones. Output correction factors between +0.7% and -1.4% were obtained down to field sizes as narrow as 5 mm. The resulting MD and Si-D field factors are in agreement within 0.2% in the case of CyberKnife measurements and within 1.6% in the other cases. This latter, larger spread of the data was demonstrated to be due to the lower reproducibility of small beam sizes defined by jaws or multi-leaf collimators. The results of the present study demonstrate the reproducibility of the MD response and provide a validation of the MC modelling of this device. In principle, accurate reference dosimetry is thus feasible using the microDiamond dosimeter for field sizes down to 5 mm.
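    A minimal sketch of how an output correction factor of this kind is typically assembled from Monte Carlo dose ratios and applied to measured reading ratios is given below. The numerical values are hypothetical, and the variable names are illustrative; the expression follows the standard small-field definition rather than this paper's specific implementation.

```python
def output_correction_factor(dw_clin, dw_msr, ddet_clin, ddet_msr):
    """Small-field output correction factor in the usual sense:
        k = (D_w,clin / D_w,msr) / (D_det,clin / D_det,msr),
    with all four dose values taken from Monte Carlo calculations."""
    return (dw_clin / dw_msr) / (ddet_clin / ddet_msr)

def field_output_factor(reading_clin, reading_msr, k_corr):
    """Field output factor = measured detector reading ratio times the correction."""
    return (reading_clin / reading_msr) * k_corr

# Hypothetical 5 mm field: detector over-responds by ~2%, so k < 1
k = output_correction_factor(dw_clin=0.60, dw_msr=1.00, ddet_clin=0.612, ddet_msr=1.00)
print(k, field_output_factor(reading_clin=0.612, reading_msr=1.00, k_corr=k))
```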

  14. Study to assess the importance of errors introduced by applying NOAA 6 and NOAA 7 AVHRR data as an estimator of vegetative vigor: Feasibility study of data normalization

    NASA Technical Reports Server (NTRS)

    Duggin, M. J. (Principal Investigator); Piwinski, D.

    1982-01-01

    The use of NOAA AVHRR data to map and monitor vegetation types and conditions in near real-time can be enhanced by using a portion of each GAC image that is larger than the central 25% now considered. Enlargement of the cloud free image data set can permit development of a series of algorithms for correcting imagery for ground reflectance and for atmospheric scattering anisotropy within certain accuracy limits. Empirical correction algorithms used to normalize digital radiance or VIN data must contain factors for growth stage and for instrument spectral response. While it is not possible to correct for random fluctuations in target radiance, it is possible to estimate the necessary radiance difference between targets in order to provide target discrimination and quantification within predetermined limits of accuracy. A major difficulty lies in the lack of documentation of preprocessing algorithms used on AVHRR digital data.

  15. Fast Solar Wind from Slowly Expanding Magnetic Flux Tubes (P54)

    NASA Astrophysics Data System (ADS)

    Srivastava, A. K.; Dwivedi, B. N.

    2006-11-01

    We present an empirical model of the fast solar wind, emanating from radially oriented, slowly expanding magnetic flux tubes. We consider a single-fluid, steady state model in which the flow is driven by thermal and non-thermal pressure gradients. We apply a non-Alfvénic energy correction at the coronal base and find that specific relations correlate solar wind speed and non-thermal energy flux with the areal expansion factor. The results are compared with the previously reported ones.

  16. Prediction of the production of nitrogen oxide (NOx) in turbojet engines

    NASA Astrophysics Data System (ADS)

    Tsague, Louis; Tsogo, Joseph; Tatietse, Thomas Tamo

    Gaseous nitrogen oxides (NO + NO2 = NOx) are well-known atmospheric trace constituents. These gases remain a major concern despite advances in low-NOx emission technology because they play a critical role in regulating the oxidizing capacity of the atmosphere, according to Crutzen [1995. My life with O3, NOx and other YZOxs; Nobel Lecture, Chemistry 1995; pp. 195; December 8, 1995]. Aircraft emissions of nitrogen oxides (NOx) are regulated by the International Civil Aviation Organization. Predicting NOx emissions in turbojet engines from combustor operating data produced results showing good correlation between the analytical and empirical values: there is close similarity between the calculated emission index and the experimental data. The correlation shows improved accuracy, when evaluated against the 2124 experimental data points from 11 gas turbine engines, compared with a previous semi-empirical correlation proposed by Pearce et al. [1993. The prediction of thermal NOx in gas turbine exhausts. Eleventh International Symposium on Air Breathing Engines, Tokyo, 1993, pp. 6-9]. The new method we propose predicts the production of NOx with considerably improved accuracy over previous methods. Since a turbojet engine operates in an atmosphere where temperature, pressure and humidity change frequently, a correction factor is developed from standard atmospheric laws and correlations taken from the scientific literature [Swartwelder, M., 2000. Aerospace engineering 410 Term Project performance analysis, November 17, 2000, pp. 2-5; Reed, J.A. Java Gas Turbine Simulator Documentation. pp. 4-5]. The new correction factor is validated against experimental observations from 19 turbojet engines cruising at altitudes of 9 and 13 km, given in the ICAO repertory [Middleton, D., 1992. Appendix K (FAA/SETA). Section 1: Boeing Method Two Indices, 1992, pp. 2-3]. This correction factor enables the prediction of NOx emissions of turbojet engines at cruise. The ICAO database [Goehlich, R.A., 2000. Investigation into the applicability of pollutant emission models for computer aided preliminary aircraft design, Book number 175654, 4.2.2000, pp. 57-79] can now be completed, using the approach we propose, to cover whole-mission flight NOx emissions.
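    As a rough illustration of an ambient correction factor for a sea-level NOx emission index, the sketch below uses the exponents and humidity term commonly quoted for the Boeing Fuel Flow Method 2. These are literature-style values, not the correction factor derived in this paper, and the example inputs are hypothetical.

```python
import math

def cruise_einox(einox_sl_g_per_kg, p_amb_pa, t_amb_k, specific_humidity=0.00634):
    """Ambient correction of a sea-level NOx emission index in the style of the
    widely used Boeing Fuel Flow Method 2 (coefficients as commonly quoted):

        EINOx_alt = EINOx_SL * sqrt(delta**1.02 / theta**3.3) * exp(H)
        H = -19 * (omega - 0.00634)

    delta and theta are the ambient-to-sea-level pressure and temperature ratios;
    omega is the specific humidity (kg water per kg dry air)."""
    delta = p_amb_pa / 101325.0
    theta = t_amb_k / 288.15
    h_term = -19.0 * (specific_humidity - 0.00634)
    return einox_sl_g_per_kg * math.sqrt(delta**1.02 / theta**3.3) * math.exp(h_term)

# Example: cruise near 11 km (p ~ 22.6 kPa, T ~ 216.7 K), dry air
print(cruise_einox(25.0, p_amb_pa=22632.0, t_amb_k=216.65, specific_humidity=0.0))
```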

  17. Semiempirical evaluation of post-Hartree-Fock diagonal-Born-Oppenheimer corrections for organic molecules.

    PubMed

    Mohallem, José R

    2008-04-14

    Recent post-Hartree-Fock calculations of the diagonal-Born-Oppenheimer correction empirically show that it behaves quite similarly to atomic nuclear mass corrections. An almost constant contribution per electron is identified, which converges with system size for specific series of organic molecules. This feature permits pocket-calculator evaluation of the corrections within thermochemical accuracy (10^-1 mhartree, or kcal/mol).

  18. Global validation of empirically corrected EP-Total Ozone Mapping Spectrometer (TOMS) total ozone columns using Brewer and Dobson ground-based measurements

    NASA Astrophysics Data System (ADS)

    Antón, M.; Koukouli, M. E.; Kroon, M.; McPeters, R. D.; Labow, G. J.; Balis, D.; Serrano, A.

    2010-10-01

    This article focuses on the global-scale validation of the empirically corrected Version 8 total ozone column data set acquired by the NASA Total Ozone Mapping Spectrometer (TOMS) during the period 1996-2004, when this instrument was flying aboard the Earth Probe (EP) satellite platform. This analysis is based on the use of spatially co-located, ground-based measurements from Dobson and Brewer spectrophotometers. The original EP-TOMS V8 total ozone column data set was also validated with these ground-based measurements to quantify the improvements made by the empirical correction, which was necessary as a result of instrumental degradation issues occurring from the year 2000 onward that were uncorrectable by normal calibration techniques. EP-TOMS V8-corrected total ozone data present a remarkable improvement with respect to the significant negative bias of ~3% detected in the original EP-TOMS V8 observations after the year 2000. Neither the original nor the corrected EP-TOMS satellite total ozone data sets show a significant dependence on latitude. In addition, both EP-TOMS satellite data sets overestimate the Brewer measurements for small solar zenith angles (SZA) and underestimate them for large SZA, which explains a significant seasonality (~1.5%) for cloud-free and cloudy conditions. Conversely, relative differences between EP-TOMS and Dobson present almost no dependence on SZA for cloud-free conditions and a strong dependence for cloudy conditions (from +2% for small SZA to -1% for high SZA). The dependence of the satellite minus ground-based relative differences on total ozone shows good agreement for column values above 250 Dobson units. Our main conclusion is that the upgrade to TOMS V8-corrected total ozone data presents a remarkable improvement. Nevertheless, despite its quality, the EP-TOMS data for the period 2000-2004 should not be used as a source for trend analysis, since EP-TOMS ozone trends are empirically corrected using NOAA-16 and NOAA-17 solar backscatter ultraviolet/2 data as external references and are therefore no longer independent observations.

  19. Qualifying the benefit of Advanced Traveler Information Systems (ATIS)

    DOT National Transportation Integrated Search

    2000-11-21

    ATIS Yields Time Management Benefits: No conflict between survey and empirical research: ATIS users correctly perceive that they save time. Field studies correctly measured only small changes in in-vehicle travel times. When travel behavior f...

  20. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for the strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, the crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. Polymer matrix discontinuous quartz fiber-reinforced composites, chosen to accentuate toughness differences, were prepared for flexural mechanical testing, comprising 3 mm fibers at different volume percentages from 0-54.0 vol% and, at 28.2 vol%, different fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical-integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms are discussed, especially for fiber-reinforced composites. PMID:25620817
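
    For readers who want the bookkeeping behind the abstract, a short sketch of the classical Griffith-type relations is given below: the stress intensity at fracture K_Ic = Y·σ·√(π·a) and the plane-stress strain energy release rate G = K²/E. The numbers are hypothetical, and this is only the textbook form, not the extended derivation used in the paper.

    ```python
    import math

    def k_ic(sigma_fracture_mpa, crack_length_m, y_factor=1.0):
        """Classical stress intensity at fracture, K_Ic = Y * sigma * sqrt(pi * a).
        With stress in MPa and crack length in m, the result is in MPa*sqrt(m)."""
        return y_factor * sigma_fracture_mpa * math.sqrt(math.pi * crack_length_m)

    def g_ic_kj_per_m2(k_mpa_sqrt_m, modulus_mpa):
        """Plane-stress strain energy release rate, G = K^2 / E, returned in kJ/m^2."""
        return (k_mpa_sqrt_m ** 2) / modulus_mpa * 1000.0

    # Hypothetical composite beam: fracture stress 120 MPa, 3 mm crack, Y = 1.12, E = 15 GPa
    k = k_ic(120.0, 0.003, y_factor=1.12)
    print(k, g_ic_kj_per_m2(k, modulus_mpa=15000.0))
    ```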

  1. A research review of quality assessment for software

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored and methods of evaluating those factors are discussed. Quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documentation of many factors about a software component, such as its efficiency, portability, and development history, constitutes a class of factors important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures which have been shown to indicate software quality. However, it is believed that there exist many factors that indicate quality but have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.

  2. Peak ground motion predictions with empirical site factors using Taiwan Strong Motion Network recordings

    NASA Astrophysics Data System (ADS)

    Chung, Jen-Kuang

    2013-09-01

    A stochastic method called the random vibration theory (Boore, 1983) has been used to estimate the peak ground motions caused by shallow moderate-to-large earthquakes in the Taiwan area. Adopting Brune's ω-square source spectrum, attenuation models for PGA and PGV were derived from path-dependent parameters that were empirically modeled from about one thousand accelerograms recorded at reference sites, mostly located in mountain areas, which have been recognized as rock sites without soil amplification. Consequently, the predicted horizontal peak ground motions at the reference sites are generally comparable to those observed. A total of 11,915 accelerograms recorded from 735 free-field stations of the Taiwan Strong Motion Network (TSMN) were used to estimate the site factors by taking the motions from the predictive models as references. Results from soil sites reveal site amplification factors of approximately 2.0 ~ 3.5 for PGA and about 1.3 ~ 2.6 for PGV. Finally, as a result of amplitude corrections with those empirical site factors, about 75% of the analyzed earthquakes are well constrained in ground motion predictions, having average misfits ranging from 0.30 to 0.50. In addition, two simple indices, R0.57 and R0.38, are proposed in this study to evaluate the validity of intensity map prediction for public information reports. The average percentages of qualified stations with peak acceleration residuals less than R0.57 and R0.38 can reach 75% and 54%, respectively, for most earthquakes. Such a performance would be good enough to produce a faithful intensity map for a moderate scenario event in the Taiwan region.
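
    To illustrate how empirical site factors enter the prediction and how a residual-based qualification threshold might be counted, here is a minimal sketch; interpreting the thresholds as absolute natural-log residual bounds, and all of the numbers, are assumptions for illustration only.

    ```python
    import numpy as np

    def corrected_prediction(pga_rock_pred, site_factor):
        """Apply an empirical site amplification factor to a rock-site prediction."""
        return site_factor * pga_rock_pred

    def fraction_within(observed, predicted, threshold=0.57):
        """Fraction of stations whose absolute natural-log residual is below a
        threshold (reading R0.57/R0.38 this way is an assumption)."""
        residual = np.abs(np.log(observed) - np.log(predicted))
        return float(np.mean(residual < threshold))

    observed_pga = np.array([120.0, 80.0, 250.0])   # gal, hypothetical observations
    rock_pred = np.array([50.0, 30.0, 90.0])        # gal, predictive-model rock values
    site_factors = np.array([2.5, 2.8, 2.2])        # hypothetical empirical factors
    print(fraction_within(observed_pga, corrected_prediction(rock_pred, site_factors)))
    ```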

  3. Correcting for population structure and kinship using the linear mixed model: theory and extensions.

    PubMed

    Hoffman, Gabriel E

    2013-01-01

    Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.

  4. Response functions for neutron skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gui, A.A.; Shultis, J.K.; Faw, R.E.

    1997-02-01

    Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated at 13 neutron source energies in the same energy range and at 13 polar angles of source collimation from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for a source-to-detector distance up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.
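
    The report approximates each response function by a three-parameter function of source-to-detector distance. The sketch below fits one plausible such form (inverse-square geometry times a power-law/exponential attenuation); this particular functional form and the synthetic data are assumptions for illustration, not the form tabulated in the report.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def response(d, a, b, c):
        """One plausible three-parameter line-beam response approximation:
        inverse-square geometry times a power-law/exponential air attenuation."""
        return a * d**b * np.exp(-c * d) / d**2

    distances = np.linspace(50.0, 2500.0, 50)                   # m
    synthetic = response(distances, 3.0, 0.8, 1.0 / 600.0)      # stand-in "Monte Carlo" data (arbitrary units)
    params, _ = curve_fit(response, distances, synthetic, p0=(1.0, 1.0, 1.0e-3))
    print(params)   # fitted (a, b, c)
    ```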

  5. Caring potentials in the shadows of power, correction, and discipline—Forensic psychiatric care in the light of the work of Michel Foucault

    PubMed Central

    Hörberg, Ulrica; Dahlberg, Karin

    2015-01-01

    The aim of this article is to shed light on contemporary forensic psychiatric care through a philosophical examination of the empirical results from two lifeworld phenomenological studies from the perspective of patients and carers, by using the French philosopher Michel Foucault's historical–philosophical work. Both empirical studies were conducted in a forensic psychiatric setting. The essential results of the two empirical studies were reexamined in a phenomenological meaning analysis to form a new general structure in accordance with the methodological principles of Reflective Lifeworld Research. This general structure shows how the caring on the forensic psychiatric wards appears to be contradictory, in that it is characterized by an unreflective (non-)caring attitude and contributes to an inconsistent and insecure existence. The caring appears to have a corrective approach and thus lacks a clear caring structure, a basic caring approach that patients in forensic psychiatric services have a great need of. To gain a greater understanding of forensic psychiatric caring, the new empirical results were further examined in the light of Foucault's historical–philosophical work. The philosophical examination is presented in terms of the three meaning constituents: Caring as correction and discipline, The existence of power, and Structures and culture in care. The philosophical examination illustrates new meaning nuances of the corrective and disciplinary nature of forensic psychiatric care, its power, and how this is materialized in caring, and what this does to the patients. The examination reveals embedded difficulties in forensic psychiatric care and highlights a need to revisit the aim of such care. PMID:26319100

  6. Caring potentials in the shadows of power, correction, and discipline - Forensic psychiatric care in the light of the work of Michel Foucault.

    PubMed

    Hörberg, Ulrica; Dahlberg, Karin

    2015-01-01

    The aim of this article is to shed light on contemporary forensic psychiatric care through a philosophical examination of the empirical results from two lifeworld phenomenological studies from the perspective of patients and carers, by using the French philosopher Michel Foucault's historical-philosophical work. Both empirical studies were conducted in a forensic psychiatric setting. The essential results of the two empirical studies were reexamined in a phenomenological meaning analysis to form a new general structure in accordance with the methodological principles of Reflective Lifeworld Research. This general structure shows how the caring on the forensic psychiatric wards appears to be contradictory, in that it is characterized by an unreflective (non-)caring attitude and contributes to an inconsistent and insecure existence. The caring appears to have a corrective approach and thus lacks a clear caring structure, a basic caring approach that patients in forensic psychiatric services have a great need of. To gain a greater understanding of forensic psychiatric caring, the new empirical results were further examined in the light of Foucault's historical-philosophical work. The philosophical examination is presented in terms of the three meaning constituents: Caring as correction and discipline, The existence of power, and Structures and culture in care. The philosophical examination illustrates new meaning nuances of the corrective and disciplinary nature of forensic psychiatric care, its power, and how this is materialized in caring, and what this does to the patients. The examination reveals embedded difficulties in forensic psychiatric care and highlights a need to revisit the aim of such care.

  7. Polygenic signal for symptom dimensions and cognitive performance in patients with chronic schizophrenia.

    PubMed

    Xavier, Rose Mary; Dungan, Jennifer R; Keefe, Richard S E; Vorderstrasse, Allison

    2018-06-01

    Genetic etiology of psychopathology symptoms and cognitive performance in schizophrenia is supported by candidate gene and polygenic risk score (PRS) association studies. Such associations are reported to be dependent on several factors - sample characteristics, illness phase, illness severity, etc. We aimed to examine whether schizophrenia PRS predicted psychopathology symptoms and cognitive performance in patients with chronic schizophrenia. We also examined whether schizophrenia-associated autosomal loci were associated with specific symptoms or cognitive domains. Case-only analysis using data from the Clinical Antipsychotic Trials of Intervention Effectiveness-Schizophrenia trials (n = 730). PRS was constructed using the Psychiatric Genomics Consortium (PGC) leave-one-out genome-wide association analysis as the discovery data set. For candidate region analysis, we selected 105 schizophrenia-associated autosomal loci from the PGC study. We found a significant effect of PRS on positive symptoms at a p-threshold (PT) of 0.5 (R2 = 0.007, p = 0.029, empirical p = 0.029) and on negative symptoms at a PT of 1e-07 (R2 = 0.005, p = 0.047, empirical p = 0.048). For models that additionally controlled for neurocognition, the best-fit PRS predicted positive (p-threshold 0.01, R2 = 0.007, p = 0.013, empirical p = 0.167) and negative symptoms (p-threshold 0.1, R2 = 0.012, p = 0.004, empirical p = 0.329). No associations were seen for overall neurocognitive and social cognitive performance tests. Post-hoc analyses revealed that PRS predicted working memory and vigilance performance, but these did not survive correction. No candidate regions that survived multiple testing corrections were associated with either symptoms or cognitive performance. Our findings point to potentially distinct pathogenic mechanisms for schizophrenia symptoms.

  8. Stress-Strain Behavior of Cementitious Materials with Different Sizes

    PubMed Central

    Zhou, Jikai; Qian, Pingping; Chen, Xudong

    2014-01-01

    The size dependence of flexural properties of cement mortar and concrete beams is investigated. Bazant's size effect law and the modified size effect law by Kim and Eo give a very good fit to the flexural strength of both cement mortar and concrete. As observed in the test results, a stronger size effect in flexural strength is found in cement mortar than in concrete. A modification has been suggested to Li's equation for describing the stress-strain curve of cement mortar and concrete by incorporating two different correction factors, the factors contained in the modified equation being established empirically as a function of specimen size. A comparison of the predictions of this equation with test data generated in this study shows good agreement. PMID:24744688
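
    Bazant's size effect law has the simple closed form σ_N = B·f_t / √(1 + d/d0); a small sketch follows, with B·f_t and d0 treated as empirical constants whose values here are hypothetical rather than taken from the paper.

    ```python
    import numpy as np

    def bazant_nominal_strength(d_mm, b_ft_mpa, d0_mm):
        """Bazant's size effect law: nominal strength versus characteristic
        specimen size d, sigma_N = B*f_t / sqrt(1 + d/d0). B*f_t and d0 are
        empirical constants fitted to test data (placeholder values below)."""
        return b_ft_mpa / np.sqrt(1.0 + d_mm / d0_mm)

    beam_depths_mm = np.array([40.0, 70.0, 100.0, 150.0])
    print(bazant_nominal_strength(beam_depths_mm, b_ft_mpa=8.5, d0_mm=60.0))
    ```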

  9. Automation bias: empirical results assessing influencing factors.

    PubMed

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2014-05-01

    To investigate the rate of automation bias - the propensity of people to over-rely on automated advice - and the factors associated with it. Tested factors were attitudinal - trust and confidence; non-attitudinal - decision support experience and clinical experience; and environmental - task difficulty. The paradigm of simulated decision support advice within a prescribing context was used. The study employed a within-participant before-after design, whereby 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers - advice was incorrect in 6 scenarios. They were asked to prescribe for each case and were then shown simulated advice. Participants were then asked whether they wished to change their prescription, and the post-advice prescription was recorded. The rate of overall decision switching was captured. Automation bias was measured by negative consultations - correct to incorrect prescription switching. Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved the decision accuracy in 13.1% of prescribing cases. The rate of automation bias, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases - a net improvement of 8%. More immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching. Lower clinical experience was associated with more decision switching. Age, DSS experience and trust in CDSS generally were not significantly associated with decision switching. This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

    The special topography of mountain terrain induces retrieval distortions within the same land-cover class and in surface spectral signatures. In order to improve the accuracy of research on topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model data are most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects using Landsat TM imagery, whose 30-meter spatial resolution can easily be obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high-spatial-resolution images, such as QuickBird and Ikonos, but there is little related research on topographic correction of CBERS-02B imagery. In this study, mountainous terrain in Liaoning was taken as the study area. The original 15-meter digital elevation model was interpolated to 2.36 meters to match the image resolution. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were applied to remove the topographic effect, and the corrected results were compared. For the images corrected with C correction, SCS+C correction, Minnaert correction and Ekstrand-r, scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard deviation, slope of the scatter diagram, and separation factor were statistically calculated. The analysis shows that shadows are weakened in the corrected images compared with the original images, and the three-dimensional effect is removed. The absolute slope of the fitted lines in the scatter diagrams is reduced. The Minnaert correction method gives the most effective result. These results demonstrate that the existing correction methods can be successfully adapted to CBERS-02B images. Where high-spatial-resolution elevation data are hard to obtain, the DEM can be interpolated step by step to approximate the corresponding spatial resolution.
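
    For reference, the C and Minnaert corrections mentioned above have compact standard forms; a sketch follows under the assumption of per-band empirical constants c and k. The pixel values are hypothetical and the formulas are the commonly cited textbook variants, which may differ in detail from the paper's implementation.

    ```python
    import numpy as np

    def c_correction(radiance, cos_i, cos_sz, c):
        """C-correction: L_corrected = L * (cos(theta_z) + c) / (cos(i) + c), where
        i is the solar incidence angle on the slope, theta_z the solar zenith angle,
        and c the empirical intercept/slope ratio of the radiance-vs-cos(i) regression."""
        return radiance * (cos_sz + c) / (cos_i + c)

    def minnaert_correction(radiance, cos_i, cos_sz, k):
        """Minnaert-type correction: L_corrected = L * (cos(theta_z) / cos(i))**k,
        with k the per-band Minnaert constant fitted empirically."""
        return radiance * (cos_sz / cos_i) ** k

    # Hypothetical pixel on a poorly illuminated slope
    pixel = np.array([0.12])
    print(c_correction(pixel, cos_i=0.35, cos_sz=0.82, c=0.4))
    print(minnaert_correction(pixel, cos_i=0.35, cos_sz=0.82, k=0.6))
    ```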

  11. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    PubMed

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy (¹H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired ¹H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex, overlapped signals. In this study, a novel automatic baseline correction method is proposed for ¹H-MRS spectra based on ensemble empirical mode decomposition (EEMD). This investigation was applied to both simulated data and in-vivo ¹H-MRS signals of the human brain. The results confirm the efficiency of the proposed method in removing the baseline from ¹H-MRS signals.
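
    A minimal sketch of the EEMD-based baseline idea follows, assuming the PyEMD package (distributed as EMD-signal) and a synthetic spectrum; which and how many of the slow IMFs are assigned to the baseline is a tuning choice, not a detail taken from the paper.

    ```python
    import numpy as np
    from PyEMD import EEMD  # assumes the PyEMD package (pip install EMD-signal)

    def eemd_baseline(spectrum, n_slow_imfs=2, trials=100):
        """Rough baseline estimate: decompose the magnitude spectrum with ensemble
        EMD and treat the slowest IMFs as the wideband baseline component."""
        imfs = EEMD(trials=trials).eemd(spectrum)
        baseline = imfs[-n_slow_imfs:].sum(axis=0)
        return baseline, spectrum - baseline

    # Synthetic spectrum: two narrow peaks riding on a broad macromolecule-like hump
    x = np.linspace(0.0, 1.0, 512)
    peaks = 1.0 / (1 + ((x - 0.3) / 0.01) ** 2) + 0.6 / (1 + ((x - 0.6) / 0.015) ** 2)
    hump = 0.8 * np.exp(-((x - 0.5) / 0.4) ** 2)
    baseline, corrected = eemd_baseline(peaks + hump)
    print(baseline.shape, corrected.shape)
    ```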

  12. [Etiology and antimicrobial resistance profile of urinary tract infection in children, Valdivia 2012].

    PubMed

    Herrera, Carolina; Navarro, Diego; Täger, Marlis

    2014-12-01

    Since initial antibiotic treatment in patients with urinary tract infection (UTI) is empiric, it is very important to know the local epidemiology in order to make correct therapeutic decisions. To determine the local antimicrobial resistance profile in pediatric patients with UTI. Retrospective review of urine cultures from children under 15 years of age, obtained in a pediatric emergency department in Valdivia between February and December 2012. Escherichia coli showed a high percentage of resistance to ampicillin (44.8%) and first-generation cephalosporins (36%). A good understanding of the local antimicrobial resistance profile is useful for correct empiric treatment.

  13. A study of fault prediction and reliability assessment in the SEL environment

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Patnaik, Debabrata

    1986-01-01

    An empirical study on the estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory (SEL) environment is presented. Fault estimation using empirical relationships and fault prediction using a curve-fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided in order to make an early estimate of future debugging effort. This study concludes with a fault analysis, the application of a reliability model, and the analysis of a normalized metric for reliability assessment and reliability monitoring during software development.

  14. Problems of psychological monitoring in astronaut training.

    PubMed

    Morgun, V V

    1997-10-01

    Monitoring of goal-oriented psychological changes during professional training is necessary. The level of development of an astronaut's psychological characteristics is checked by means of psychological testing, with the final aim of evaluating each professionally important psychological quality as well as making an overall evaluation. The list of psychological features needed for evaluation is determined, and empirically selected weight factors based on a wide statistical sample are introduced. Accumulation of psychological test results can predict an astronaut's ability to solve complicated problems in a flight mission. It can help to correct the training process and reveal weaknesses.

  15. Quantitative simultaneous multi-element microprobe analysis using combined wavelength and energy dispersive systems

    NASA Technical Reports Server (NTRS)

    Walter, L. S.; Doan, A. S., Jr.; Wood, F. M., Jr.; Bredekamp, J. H.

    1972-01-01

    A combined WDS-EDS system obviates the severe X-ray peak overlap problems encountered with Na, Mg, Al and Si that are common to pure EDS systems. By applying easily measured empirical correction factors for pulse pile-up and the peak overlaps normally observed in the analysis of silicate minerals, the accuracy of analysis is comparable with that expected for WDS electron microprobe analyses. The continuum backgrounds are subtracted from the spectra by a spline-fitting technique based on integrated intensities between the peaks. The preprocessed data are then reduced to chemical analyses by existing data reduction programs.
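
    One common way to apply empirical overlap corrections is as a small linear unmixing problem: the measured window intensities are the true peak intensities mixed by empirically determined spill-over fractions, which can then be inverted. Whether the original system used exactly this scheme is not stated; the fractions and counts below are purely illustrative.

    ```python
    import numpy as np

    # Hypothetical empirical overlap matrix for Na, Mg, Al, Si windows: entry (i, j)
    # is the fraction of element j's peak intensity that spills into window i.
    overlap = np.array([
        [1.00, 0.08, 0.01, 0.00],   # Na window
        [0.06, 1.00, 0.07, 0.01],   # Mg window
        [0.00, 0.05, 1.00, 0.09],   # Al window
        [0.00, 0.00, 0.06, 1.00],   # Si window
    ])

    measured = np.array([1450.0, 3200.0, 2100.0, 9800.0])  # background-subtracted counts
    corrected = np.linalg.solve(overlap, measured)          # undo the mutual spill-over
    print(corrected)
    ```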

  16. Correction techniques for depth errors with stereo three-dimensional graphic displays

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Holden, Anthony; Williams, Steven P.

    1992-01-01

    Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis techniques have proved effective for displaying complex information in a natural way to enhance situational awareness and to improve pilot/vehicle performance. In such displays, the display designer must map the depths in the real world to the depths available with the stereo display system. However, empirical data have shown that the human subject does not perceive the information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information that is embedded in stereo 3-D displays because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on the viewer location. The goal of this research was to provide two correction techniques; the first technique corrects the original visual scene to the DVV mapping based on human perception errors, and the second (which is based on head-positioning sensor input data) corrects for errors induced by head movements. Empirical data are presented to validate both correction techniques. A combination of the two correction techniques effectively eliminates the distortions of depth information embedded in stereo 3-D displays.

  17. Eliminating bias in rainfall estimates from microwave links due to antenna wetting

    NASA Astrophysics Data System (ADS)

    Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch

    2014-05-01

    Commercial microwave links (MWLs) are point-to-point radio systems which are widely used in telecommunication networks. They operate at frequencies where the transmitted power is attenuated mainly by precipitation. Thus, signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising, since MWLs cover about 20% of surface area. Unfortunately, MWL rainfall estimates are often positively biased due to additional attenuation caused by antenna wetting. To correct MWL observations a posteriori and reduce the wet antenna effect (WAE), both empirically and physically based models have been suggested. However, it is challenging to calibrate these models, because the wet antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding, etc.) and on different climatic factors (temperature, dew point, wind velocity and direction, etc.). Instead, it seems straightforward to keep antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding to model-based corrections in reducing the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85-km-long commercial dual-polarization microwave link at 38 GHz and 5 optical disdrometers. The MWL was operated without shielding from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, which can be computed as the difference between the measured and theoretical attenuation. The theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded period, the total bias caused by the WAE was 0.74 dB, which was reduced by shielding to 0.39 dB for the horizontal polarization (vertical: reduction from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 2013) was more effective, reducing the bias of the unshielded period to 0.07 dB for the horizontal polarization (vertical: 0.06 dB). Applying the same model-based correction to shielded periods reduces the bias even more, to -0.03 dB and -0.01 dB, respectively. This indicates that additional attenuation could also be caused by other effects, such as reflection of sidelobes from wet surfaces and other environmental factors. Further, model-based corrections do not correctly capture the nature of the WAE but more likely provide only an empirical correction. This claim is supported by the fact that detailed analysis of particular events reveals that the performance of both antenna shielding and the model-based correction differs substantially from event to event. Further investigation based on direct observation of antenna wetting and other environmental variables needs to be performed to identify the nature of the attenuation bias more precisely. Schleiss, M., J. Rieckermann, and A. Berne, 2013: Quantification and modeling of wet-antenna attenuation for commercial microwave links. IEEE Geosci. Remote Sens. Lett., 10.1109/LGRS.2012.2236074.
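
    The wet-antenna term is simply the measured attenuation minus the rain-induced attenuation expected from an independent rain estimate. The sketch below uses a generic k-R power law in place of the disdrometer-based theoretical attenuation used in the study; the coefficients and inputs are placeholders, not the 38 GHz values from the experiment.

    ```python
    def rain_attenuation_db(rain_rate_mm_h, path_km, a=0.4, b=1.0):
        """Path-integrated rain attenuation from a k-R power law, A = a * R**b * L.
        The coefficients depend on frequency and polarization; placeholders here."""
        return a * rain_rate_mm_h ** b * path_km

    def wet_antenna_bias_db(measured_att_db, rain_rate_mm_h, path_km):
        """Wet-antenna attenuation estimated as measured minus theoretical rain
        attenuation (the study derives the theoretical value from drop size
        distributions rather than a k-R law)."""
        return measured_att_db - rain_attenuation_db(rain_rate_mm_h, path_km)

    print(wet_antenna_bias_db(measured_att_db=4.1, rain_rate_mm_h=5.0, path_km=1.85))
    ```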

  18. Asymmetric collimation: Dosimetric characteristics, treatment planning algorithm, and clinical applications

    NASA Astrophysics Data System (ADS)

    Kwa, William

    1998-11-01

    In this thesis the dosimetric characteristics of asymmetric fields are investigated and a new computation method for the dosimetry of asymmetric fields is described and implemented into an existing treatment planning algorithm. Based on this asymmetric field treatment planning algorithm, the clinical use of asymmetric fields in cancer treatment is investigated, and new treatment techniques for conformal therapy are developed. Dose calculation is verified with thermoluminescent dosimeters in a body phantom. In this thesis, an analytical approach is proposed to account for the dose reduction when a corresponding symmetric field is collimated asymmetrically to a smaller asymmetric field. This is represented by a correction factor that uses the ratio of the equivalent field dose contributions between the asymmetric and symmetric fields. The same equation used in the expression of the correction factor can be used for a wide range of asymmetric field sizes, photon energies and linear accelerators. This correction factor will account for the reduction in scatter contributions within an asymmetric field, resulting in the dose profile of an asymmetric field resembling that of a wedged field. The output factors of some linear accelerators are dependent on the collimator settings and whether the upper or lower collimators are used to set the narrower dimension of a radiation field. In addition to this collimator exchange effect for symmetric fields, asymmetric fields are also found to exhibit some asymmetric collimator backscatter effect. The proposed correction factor is extended to account for these effects. A set of correction factors determined semi-empirically to account for the dose reduction in the penumbral region and outside the radiated field is established. Since these correction factors rely only on the output factors and the tissue maximum ratios, they can easily be implemented into an existing treatment planning system. There is no need to store either additional sets of asymmetric field profiles or databases for the implementation of these correction factors into an existing in-house treatment planning system. With this asymmetric field algorithm, the computation time is found to be 20 times faster than a commercial system. This computation method can also be generalized to the dose representation of a two-fold asymmetric field whereby both the field width and length are set asymmetrically, and the calculations are not limited to points lying on one of the principal planes. The dosimetric consequences of asymmetric fields on the dose delivery in clinical situations are investigated. Examples of the clinical use of asymmetric fields are given and the potential use of asymmetric fields in conformal therapy is demonstrated. An alternative head and neck conformal therapy is described, and the treatment plan is compared to the conventional technique. The dose distributions calculated for the standard and alternative techniques are confirmed with thermoluminescent dosimeters in a body phantom at selected dose points. (Abstract shortened by UMI.)

  19. Reduction of Non-uniform Beam Filling Effects by Vertical Decorrelation: Theory and Simulations

    NASA Technical Reports Server (NTRS)

    Short, David; Nakagawa, Katsuhiro; Iguchi, Toshio

    2013-01-01

    Algorithms for estimating precipitation rates from spaceborne radar observations of apparent radar reflectivity depend on attenuation correction procedures. The algorithm suite for the Ku-band precipitation radar aboard the Tropical Rainfall Measuring Mission satellite is one such example. The well-known problem of nonuniform beam filling is a source of error in the estimates, especially in regions where intense deep convection occurs. The error is caused by unresolved horizontal variability in precipitation characteristics such as specific attenuation, rain rate, and effective reflectivity factor. This paper proposes the use of vertical decorrelation for correcting the nonuniform beam filling error developed under the assumption of a perfect vertical correlation. Empirical tests conducted using ground-based radar observations in the current simulation study show that decorrelation effects are evident in tilted convective cells. However, the problem of obtaining reasonable estimates of a governing parameter from the satellite data remains unresolved.

  20. Tracheo-bronchial soft tissue and cartilage resonances in the subglottal acoustic input impedance.

    PubMed

    Lulich, Steven M; Arsikere, Harish

    2015-06-01

    This paper offers a re-evaluation of the mechanical properties of the tracheo-bronchial soft tissues and cartilage and uses a model to examine their effects on the subglottal acoustic input impedance. It is shown that the values for soft tissue elastance and cartilage viscosity typically used in models of subglottal acoustics during phonation are not accurate, and corrected values are proposed. The calculated subglottal acoustic input impedance using these corrected values reveals clusters of weak resonances due to soft tissues (SgT) and cartilage (SgC) lining the walls of the trachea and large bronchi, which can be observed empirically in subglottal acoustic spectra. The model predicts that individuals may exhibit SgT and SgC resonances to variable degrees, depending on a number of factors including tissue mechanical properties and the dimensions of the trachea and large bronchi. Potential implications for voice production and large pulmonary airway tissue diseases are also discussed.

  1. Titan's Surface Composition from Cassini VIMS Solar Occultation Observations

    NASA Astrophysics Data System (ADS)

    McCord, Thomas; Hayne, Paul; Sotin, Christophe

    2013-04-01

    Titan's surface is obscured by a thick absorbing and scattering atmosphere, allowing direct observation of the surface within only a few spectral windows in the near-infrared and complicating efforts to identify and map geologically important materials using remote sensing IR spectroscopy. We therefore investigate the atmosphere's infrared transmission with direct measurements using Titan's occultation of the Sun, as well as Titan's reflectance measured at differing illumination and observation angles, observed by Cassini's Visual and Infrared Mapping Spectrometer (VIMS). We use two important spectral windows: the 2.7-2.8-µm "double window" and the broad 5-µm window. By estimating atmospheric attenuation within these windows, we seek an empirical correction factor that can be applied to VIMS measurements to estimate the true surface reflectance and map inferred compositional variations. Applying the empirical corrections, we correct the VIMS data for the viewing geometry-dependent atmospheric effects to derive the 5-µm reflectance and the 2.8/2.7-µm reflectance ratio. We then compare the corrected reflectances to compounds proposed to exist on Titan's surface. We propose a simple correction to VIMS Titan data to account for atmospheric attenuation and diffuse scattering in the 5-µm and 2.7-2.8-µm windows, generally applicable for airmass < 3.0. The narrow 2.75-µm absorption feature dividing the window into two sub-windows, present in all on-planet measurements, is not present in the occultation data, and its strength is reduced at the cloud tops, suggesting the responsible molecule is concentrated in the lower troposphere or on the surface. Our empirical correction to Titan's surface reflectance yields properties shifted closer to water ice for the majority of the low-to-mid latitude area covered by VIMS measurements. Four compositional units are defined and mapped on Titan's surface based on the positions of data clusters in 5-µm vs. 2.8/2.7-µm scatter plots; a simple ternary mixture of H2O, hydrocarbons and CO2 might explain the reflectance properties of these surface units. The vast equatorial "dune seas" are compositionally very homogeneous, perhaps suggesting transport and mixing of particles over very large distances and/or a very consistent formation process and source material. The compositional branch characterizing Tui Regio and Hotei Regio is consistent with a mixture of typical Titan hydrocarbons and CO2, or possibly methane/ethane; the proposed concentration mechanism is something similar to a terrestrial playa-lake evaporite deposit, based on the fact that river channels are known to feed into at least Hotei Regio.

  2. Zirconium Evaluations for ENDF/B-VII.2 for the Fast Region

    NASA Astrophysics Data System (ADS)

    Brown, D. A.; Arcilla, R.; Capote, R.; Mughabghab, S. F.; Herman, M. W.; Trkov, A.; Kim, H. I.

    2014-04-01

    We have performed a new combined set of evaluations for 90-96Zr, including new resolved resonance parameterizations from Said Mughabghab for 90,91,92,94,96Zr and fast-region calculations made with EMPIRE-3.1. Because 90Zr is a magic nucleus, the stable Zr isotopes are nearly spherical. A new soft-rotor optical model potential is used, allowing calculations of inelastic scattering on low-lying coupled levels of vibrational nature. The soft-rotor model describes dynamical deformations of the nucleus around the spherical shape and is implemented in the EMPIRE/OPTMAN code. The same potential is used with rigid-rotor couplings for odd-A nuclei. This led to improved elastic angular distributions, helping to resolve improper leakage in the older ENDF/B-VII.1β evaluation in KAPL proprietary, ZPR and TRIGA benchmarks. Another consequence of 90Zr being a magic nucleus is that the level densities in both 90Zr and 91Zr are unusually low, causing the (n,el) and (n,tot) cross sections to exhibit large fluctuations above the resolved resonance region. To accommodate these fluctuations, we performed a simultaneous constrained generalized least-squares fit to (n,tot) for all isotopic and elemental Zr data in EXFOR, using EMPIRE's TOTRED scaling factor. TOTRED rescales the total cross section so that the optical model calculations are unaltered by the rescaling and the correct competition between channels is maintained. In this fit, all (n,tot) data in EXFOR were used for Ein > 100 keV, provided the target isotopic makeup could be correctly understood, including spectrum-averaged data and data with broad energy resolution. As a result of our fitting procedure, we will have full cross-material and cross-reaction covariances for all Zr isotopes and reactions.

  3. Travel cost demand model based river recreation benefit estimates with on-site and household surveys: Comparative results and a correction procedure

    NASA Astrophysics Data System (ADS)

    Loomis, John

    2003-04-01

    Past recreation studies have noted that on-site or visitor intercept surveys are subject to over-sampling of avid users (i.e., endogenous stratification) and have offered econometric solutions to correct for this. However, past papers do not estimate the empirical magnitude of the bias in benefit estimates with a real data set, nor do they compare the corrected estimates to benefit estimates derived from a population sample. This paper empirically examines the magnitude of the recreation benefits-per-trip bias by comparing estimates from an on-site river visitor intercept survey to a household survey. The difference in average benefits is quite large, with the on-site visitor survey yielding $24 per day trip, while the household survey yields $9.67 per day trip. A simple econometric correction for endogenous stratification in our count data model lowers the benefit estimate to $9.60 per day trip, a mean value nearly identical to and not statistically different from the household survey estimate.
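
    A compact illustration of the endogenous-stratification issue and one standard Poisson-model correction (fitting the on-site counts minus one), assuming a semi-log trip demand so that consumer surplus per trip is -1/β_cost; the simulated data and the package choice (statsmodels) are illustrative, not the author's data or code.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    travel_cost = rng.uniform(5.0, 80.0, 500)
    lam = np.exp(2.0 - 0.03 * travel_cost)        # "true" population trip demand
    trips = rng.poisson(lam) + 1                  # on-site sample of a Poisson population

    X = sm.add_constant(travel_cost)

    # Naive Poisson on the raw on-site counts over-weights avid users...
    naive = sm.Poisson(trips, X).fit(disp=0)
    # ...while a standard correction for truncation plus endogenous stratification
    # in the Poisson case is simply to fit the model to (trips - 1).
    corrected = sm.Poisson(trips - 1, X).fit(disp=0)

    # Consumer surplus per trip under a semi-log demand: -1 / beta_cost
    print(-1.0 / naive.params[1], -1.0 / corrected.params[1])
    ```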

  4. Research and application of a novel hybrid decomposition-ensemble learning paradigm with error correction for daily PM10 forecasting

    NASA Astrophysics Data System (ADS)

    Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang

    2018-03-01

    In this paper, a hybrid decomposition-ensemble learning paradigm combined with error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
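
    A compact skeleton of the decompose-forecast-recombine idea with the error-correction add-on; PyEMD's EEMD and scikit-learn's MLPRegressor stand in for the paper's FEEMD/VMD and cuckoo-search-optimized ELM, so this is only a structural sketch under those substitutions.

    ```python
    import numpy as np
    from PyEMD import EEMD                             # stand-in for FEEMD/VMD
    from sklearn.neural_network import MLPRegressor    # stand-in for the CS-optimized ELM

    def one_step_forecast(component, lags=6):
        """Fit a one-step-ahead autoregressive model to a single mode and predict
        its next value."""
        X = np.array([component[i:i + lags] for i in range(len(component) - lags)])
        y = component[lags:]
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                             random_state=0).fit(X, y)
        return model.predict(component[-lags:].reshape(1, -1))[0]

    def hybrid_forecast(pm10, past_errors):
        """Decompose the PM10 series and the historical forecast-error series,
        forecast every mode, and add the two aggregated forecasts (error correction)."""
        pm10_part = sum(one_step_forecast(c) for c in EEMD(trials=50).eemd(pm10))
        error_part = sum(one_step_forecast(c) for c in EEMD(trials=50).eemd(past_errors))
        return pm10_part + error_part

    rng = np.random.default_rng(1)
    pm10_series = 60 + 20 * np.sin(np.arange(200) / 7.0) + rng.normal(0, 5, 200)
    error_series = rng.normal(0, 3, 200)   # hypothetical historical forecast errors
    print(hybrid_forecast(pm10_series, error_series))
    ```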

  5. Segmentation-free empirical beam hardening correction for CT.

    PubMed

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc

    2015-02-01

    The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique both rely on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality, also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). This deformation leads to a nonlinear accentuation of higher CT-values. The original volume and the gray value deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing the image flatness obtained when a weighted sum of these basis images is added. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. sfEBHC generates beam hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam hardening-affected regions.

  6. Segmentation-free empirical beam hardening correction for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai

    2015-02-15

    Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique both rely on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality, also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). This deformation leads to a nonlinear accentuation of higher CT-values. The original volume and the gray value deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing the image flatness obtained when a weighted sum of these basis images is added. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. Results: All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. Conclusions: sfEBHC generates beam hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam hardening-affected regions.

  7. High accuracy measurements of dry mole fractions of carbon dioxide and methane in humid air

    NASA Astrophysics Data System (ADS)

    Rella, C. W.; Chen, H.; Andrews, A. E.; Filges, A.; Gerbig, C.; Hatakka, J.; Karion, A.; Miles, N. L.; Richardson, S. J.; Steinbacher, M.; Sweeney, C.; Wastine, B.; Zellweger, C.

    2013-03-01

    Traditional techniques for measuring the mole fractions of greenhouse gases in the well-mixed atmosphere have required dry sample gas streams (dew point < -25 °C) to achieve the inter-laboratory compatibility goals set forth by the Global Atmosphere Watch programme of the World Meteorological Organisation (WMO/GAW) for carbon dioxide (±0.1 ppm in the Northern Hemisphere and ±0.05 ppm in the Southern Hemisphere) and methane (±2 ppb). Drying the sample gas to low levels of water vapour can be expensive, time-consuming, and/or problematic, especially at remote sites where access is difficult. Recent advances in optical measurement techniques, in particular cavity ring down spectroscopy, have led to the development of greenhouse gas analysers capable of simultaneous measurements of carbon dioxide, methane and water vapour. Unlike many older technologies, which can suffer from significant uncorrected interference from water vapour, these instruments permit accurate and precise greenhouse gas measurements that can meet the WMO/GAW inter-laboratory compatibility goals (WMO, 2011a) without drying the sample gas. In this paper, we present laboratory methodology for empirically deriving the water vapour correction factors, and we summarise a series of in-situ validation experiments comparing the measurements in humid gas streams to well-characterised dry-gas measurements. By using the manufacturer-supplied correction factors, the dry-mole fraction measurements have been demonstrated to be well within the GAW compatibility goals up to a water vapour concentration of at least 1%. By determining the correction factors for individual instruments once at the start of life, this water vapour concentration range can be extended to at least 2% over the life of the instrument, and if the correction factors are determined periodically over time, the evidence suggests that this range can be extended up to and even above 4% water vapour concentrations.
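
    The water-vapour correction for this class of analyser is typically a quadratic in the reported water concentration; a minimal sketch follows, with placeholder coefficients of roughly the right magnitude that must in practice be determined empirically for each instrument (they are not the values from the paper).

    ```python
    def co2_dry_ppm(co2_wet_ppm, h2o_reported_pct, a=-0.0120, b=-2.67e-4):
        """Convert a wet CO2 reading to a dry mole fraction using the quadratic
        water-vapour correction CO2_dry = CO2_wet / (1 + a*H + b*H**2), where H is
        the reported water concentration in percent. Coefficients are placeholders."""
        h = h2o_reported_pct
        return co2_wet_ppm / (1.0 + a * h + b * h * h)

    print(co2_dry_ppm(co2_wet_ppm=395.2, h2o_reported_pct=1.5))
    ```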

  8. Factorial validity of the Movement Assessment Battery for Children-2 (age band 2).

    PubMed

    Wagner, Matthias Oliver; Kastner, Julia; Petermann, Franz; Bös, Klaus

    2011-01-01

    The Movement Assessment Battery for Children-2 (M-ABC-2) is one of the most commonly used tests for the diagnosis of specific developmental disorders of motor function (F82). The M-ABC-2 comprises eight subtests per age band (AB) that are assigned to three dimensions: manual dexterity, aiming and catching, and balance. However, while previous exploratory findings suggested the correctness of the assumption of factorial validity, there is no empirical evidence that the M-ABC-2 subtests allow for a valid reproduction of the postulated factorial structure. The purpose of this study was to empirically confirm the factorial validity of the M-ABC-2. The German normative sample of AB2 (7-10 years; N=323) was used as the study sample for the empirical analyses. Confirmatory factor analysis was used to verify the factorial validity of the M-ABC-2 (AB2). The incremental fit indices (χ2=28.675; df=17; Bollen-Stine p value=0.318; RMSEA=0.046 [0.011-0.075]; SRMR=0.038; CFI=0.960) provided evidence for the factorial validity of the M-ABC-2 (AB2). However, because of a lack of empirical verification for convergent and discriminant validity, there is still no evidence that F82 can be diagnosed using M-ABC-2 (AB2). Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Absolute, SI-traceable lunar irradiance tie-points for the USGS Lunar Model

    NASA Astrophysics Data System (ADS)

    Brown, Steven W.; Eplee, Robert E.; Xiong, Xiaoxiong J.

    2017-10-01

    The United States Geological Survey (USGS) has developed an empirical model, known as the Robotic Lunar Observatory (ROLO) Model, that predicts the reflectance of the Moon for any Sun-sensor-Moon configuration over the spectral range from 350 nm to 2500 nm. The lunar irradiance can be predicted from the modeled lunar reflectance using a spectrum of the incident solar irradiance. While extremely successful as a relative exo-atmospheric calibration target, the ROLO Model is not SI-traceable and has estimated uncertainties too large for the Moon to be used as an absolute celestial calibration target. In this work, two recent absolute, low-uncertainty, SI-traceable top-of-the-atmosphere (TOA) lunar irradiances, measured over the spectral range from 380 nm to 1040 nm at lunar phase angles of 6.6° and 16.9°, are used as tie-points to the output of the ROLO Model. Combined with empirically derived phase and libration corrections to the output of the ROLO Model and uncertainty estimates in those corrections, the measurements enable development of a corrected TOA lunar irradiance model and its uncertainty budget for phase angles between ±80° and libration angles from 7° to 51°. The uncertainties in the empirically corrected output from the ROLO Model are approximately 1% from 440 nm to 865 nm and increase to almost 3% at 412 nm. The dominant components in the uncertainty budget are the uncertainty in the absolute TOA lunar irradiance and the uncertainty in the fit to the phase correction from the output of the ROLO Model.

  10. Semi-empirical calculations of line-shape parameters and their temperature dependences for the ν6 band of CH3D perturbed by N2

    NASA Astrophysics Data System (ADS)

    Dudaryonok, A. S.; Lavrentieva, N. N.; Buldyreva, J.

    2018-06-01

    (J, K)-line broadening and shift coefficients with their temperature-dependence characteristics are computed for the perpendicular (ΔK = ±1) ν6 band of the 12CH3D-N2 system. The computations are based on a semi-empirical approach which consists in the use of analytical Anderson-type expressions multiplied by a few-parameter correction factor to account for various deviations from Anderson's theory approximations. A mathematically convenient form of the correction factor is chosen on the basis of experimental rotational dependencies of line widths, and its parameters are fitted to a set of experimental line widths at 296 K. To obtain the unknown CH3D polarizability in the excited vibrational state v6 for line-shift calculations, a parametric vibration-state-dependent expression is suggested, with two parameters adjusted on room-temperature experimental values of line shifts. Having been validated by comparison with experimental values available in the literature for various sub-branches of the band, this approach is used to generate extensive sets of line-shape parameters for the extended ranges of rotational quantum numbers (J up to 70 and K up to 20) typically requested for spectroscopic databases. To obtain the temperature-dependence characteristics of line widths and line shifts, computations are done for various temperatures in the range 200-400 K recommended for HITRAN, and least-squares fit procedures are applied. For the line widths, a strong sub-branch dependence with increasing K is observed in the R- and P-branches; for the line shifts, such dependence is found in the Q-branch.
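    The temperature-dependence step described above can be illustrated with a small least-squares fit of the standard HITRAN power law gamma(T) = gamma(296) * (296/T)^n to widths computed on a 200-400 K grid; the numerical values below are placeholders for illustration, not results from the paper.

      import numpy as np

      T_REF = 296.0  # HITRAN reference temperature, K

      def fit_temperature_exponent(temps_k, widths):
          # Linear least squares in log space: ln(gamma) = ln(gamma_296) + n*ln(296/T).
          x = np.log(T_REF / np.asarray(temps_k, dtype=float))
          y = np.log(np.asarray(widths, dtype=float))
          n, log_g296 = np.polyfit(x, y, 1)
          return float(n), float(np.exp(log_g296))

      # Placeholder widths (cm-1 atm-1) on the 200-400 K grid used for HITRAN.
      n_exp, gamma_296 = fit_temperature_exponent([200, 250, 296, 350, 400],
                                                  [0.071, 0.065, 0.060, 0.055, 0.051])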

  11. Improved forest change detection with terrain illumination corrected landsat images

    USDA-ARS?s Scientific Manuscript database

    An illumination correction algorithm has been developed to improve the accuracy of forest change detection from Landsat reflectance data. This algorithm is based on an empirical rotation model and was tested on the Landsat imagery pair over Cherokee National Forest, Tennessee, Uinta-Wasatch-Cache N...

  12. Empirical Storm-Time Correction to the International Reference Ionosphere Model E-Region Electron and Ion Density Parameterizations Using Observations from TIMED/SABER

    NASA Technical Reports Server (NTRS)

    Mertens, Christoper J.; Winick, Jeremy R.; Russell, James M., III; Mlynczak, Martin G.; Evans, David S.; Bilitza, Dieter; Xu, Xiaojing

    2007-01-01

    The response of the ionospheric E-region to solar-geomagnetic storms can be characterized using observations of infrared 4.3 micrometer emission. In particular, we utilize nighttime TIMED/SABER measurements of broadband 4.3 micrometer limb emission and derive a new data product, the NO+(v) volume emission rate, which is our primary observation-based quantity for developing an empirical storm-time correction to the IRI E-region electron density. In this paper we describe our E-region proxy and outline our strategy for developing the empirical storm model. In our initial studies, we analyzed a six-day storm period during the Halloween 2003 event. The results of this analysis are promising and suggest that the ap-index is a viable candidate to use as a magnetic driver for our model.

  13. Empirical Development of an MMPI Subscale for the Assessment of Combat-Related Posttraumatic Stress Disorder.

    ERIC Educational Resources Information Center

    Keane, Terence M.; And Others

    1984-01-01

    Developed empirically based criteria for use of the Minnesota Multiphasic Personality Inventory (MMPI) to aid in the assessment and diagnosis of Posttraumatic Stress Disorder (PTSD) in patients (N=200). Analysis based on an empirically derived decision rule correctly classified 74 percent of the patients in each group. (LLL)

  14. Next-Generation Sequencing of Aquatic Oligochaetes: Comparison of Experimental Communities

    PubMed Central

    Vivien, Régis; Lejzerowicz, Franck; Pawlowski, Jan

    2016-01-01

    Aquatic oligochaetes are a common group of freshwater benthic invertebrates known to be very sensitive to environmental changes and currently used as bioindicators in some countries. However, more extensive application of oligochaetes for assessing the ecological quality of sediments in watercourses and lakes would require overcoming the difficulties related to morphology-based identification of oligochaete species. This study tested the Next-Generation Sequencing (NGS) of a standard cytochrome c oxidase I (COI) barcode as a tool for the rapid assessment of oligochaete diversity in environmental samples, based on mixed-specimen samples. To determine the composition of each sample, we Sanger-sequenced every specimen present in these samples. Our study showed that a large majority of OTUs (Operational Taxonomic Units) could be detected by NGS analyses. We also observed congruence between the NGS and specimen abundance data for several but not all OTUs. Because the differences in sequence abundance data were consistent across samples, we exploited these variations to empirically design correction factors. We showed that such factors increased the congruence between the values of oligochaete-based indices inferred from the NGS data and from the Sanger-sequenced specimen data. The validation of these correction factors by further experimental studies will be needed for the adaptation and use of NGS technology in biomonitoring studies based on oligochaete communities. PMID:26866802
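    One plausible way to build such read-abundance correction factors (not necessarily the authors' exact procedure) is to average, per OTU, the ratio of specimen-count proportions to read-count proportions across the mixed samples, and then rescale new read data with those factors:

      import numpy as np

      def otu_correction_factors(read_counts, specimen_counts):
          # Rows are samples, columns are OTUs; both matrices share the same shape.
          read_prop = read_counts / read_counts.sum(axis=1, keepdims=True)
          spec_prop = specimen_counts / specimen_counts.sum(axis=1, keepdims=True)
          with np.errstate(divide="ignore", invalid="ignore"):
              ratio = np.where(read_prop > 0, spec_prop / read_prop, np.nan)
          return np.nanmean(ratio, axis=0)  # one empirical factor per OTU

      def corrected_proportions(read_counts, factors):
          # Apply the factors to new NGS samples and renormalise to proportions.
          adjusted = (read_counts / read_counts.sum(axis=1, keepdims=True)) * factors
          return adjusted / adjusted.sum(axis=1, keepdims=True)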

  15. The Relationship between Incentives to Learn and Maslow's Hierarchy of Needs

    NASA Astrophysics Data System (ADS)

    Wu, Wenling

    This paper empirically surveys college students on their hierarchy of needs and their incentives to learn, and finds a linear relationship between the two. The survey shows that several kinds of factors influence the students' ordering of needs, and the paper presents diagrams of the factors that affect the students' hierarchy of needs most strongly. It also finds that changes in a student's hierarchy of needs affect the variety of incentives to learn. The paper then develops a model for qualitative analysis of this relationship, and numerical examples are used to demonstrate the model's performance. With this model, appropriate methods for motivating students can be selected according to their types of hierarchy of needs.

  16. Site correction of stochastic simulation in southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan

    2014-05-01

    The peak ground acceleration (PGA) of a damaging earthquake is of concern in both civil engineering and seismology. At present, ground motion prediction equations are widely used by engineers to estimate PGA. However, the local site effect is another important factor in strong motion prediction. For example, in 1985 Mexico City, 400 km from the epicenter, suffered massive damage due to seismic wave amplification by the local alluvial layers (Anderson et al., 1986). Past studies have shown that the stochastic method performs well in simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, the site correction was carried out with an empirical transfer function applied to the rock-site response from the stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA were calculated, and the estimated PGA was further compared with the result calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in south-western Taiwan. The empirical transfer function was generated by calculating the spectral ratio between an alluvial site and a rock site (Borcherdt, 1970). Because of the lack of a reference rock-site station in this area, the rock-site ground motion was instead generated with a stochastic point-source model. Several target events were chosen and simulated to the half-space with the stochastic point-source model, and the empirical transfer function for each station was then multiplied by the simulated half-space response. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw=7.6) and the 2010 Jiashian earthquake (Mw=6.4). Because a large event may involve a complex rupture mechanism, the asperity and delay time of each sub-fault must be considered. Both the stochastic point-source and the finite-fault models were used to check the result of our correction.
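    A bare-bones sketch of the spectral-ratio correction described above, assuming the soil and reference records share the same length and sampling interval; smoothing, windowing and the point-source simulation itself are omitted.

      import numpy as np

      def empirical_transfer_function(soil_acc, rock_acc):
          # Ratio of Fourier amplitude spectra: alluvial site over reference rock site.
          soil_spec = np.abs(np.fft.rfft(soil_acc))
          rock_spec = np.abs(np.fft.rfft(rock_acc))
          return soil_spec / np.maximum(rock_spec, 1e-12)

      def apply_site_correction(simulated_rock_acc, transfer):
          # Multiply the simulated half-space record by the transfer function in the
          # frequency domain and return the corrected (soil-site) time history.
          spec = np.fft.rfft(simulated_rock_acc) * transfer
          return np.fft.irfft(spec, n=len(simulated_rock_acc))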

  17. Tidal Amplitude Delta Factors and Phase Shifts for an Oceanic Earth

    NASA Astrophysics Data System (ADS)

    Spiridonov, E. A.

    2017-12-01

    M.S. Molodenskiy's problem, which describes the state of an elastic self-gravitating compressible sphere, is generalized to the case of a biaxial hydrostatically equilibrium rotating elliptical inelastic shell. The system of sixth-order equations is supplemented with corrections due to the relative and Coriolis accelerations. The ordinary and load Love numbers of degree 2 are calculated with allowance for their latitude dependence and dissipation for different models of the Earth's structure (the AK135, IASP91, and PREM models). The problem is solved by Love's method. The theoretical amplitude delta factors and phase shifts of second-order tidal waves for an oceanic Earth are compared with their most recent empirical counterparts obtained by the GGP network superconducting gravimeters. In particular, it is shown that a good matching (up to the fourth decimal place) of the theoretical and observed amplitude factors of semidiurnal tides does not require the application of the nonhydrostatic theory.

  18. Recent Progress in Treating Protein-Ligand Interactions with Quantum-Mechanical Methods.

    PubMed

    Yilmazer, Nusret Duygu; Korth, Martin

    2016-05-16

    We review the first successes and failures of a "new wave" of quantum chemistry-based approaches to the treatment of protein/ligand interactions. These approaches share the use of "enhanced", dispersion (D), and/or hydrogen-bond (H) corrected density functional theory (DFT) or semi-empirical quantum mechanical (SQM) methods, in combination with ensemble weighting techniques of some form to capture entropic effects. Benchmark and model system calculations in comparison to high-level theoretical as well as experimental references have shown that both DFT-D (dispersion-corrected density functional theory) and SQM-DH (dispersion and hydrogen bond-corrected semi-empirical quantum mechanical) methods perform much more accurately than older DFT and SQM approaches and also standard docking methods. In addition, DFT-D might soon become, and SQM-DH already is, fast enough to compute a large number of binding modes of comparably large protein/ligand complexes, thus allowing for a more accurate assessment of entropic effects.

  19. Believing is seeing: an evolving research program on patients' psychotherapy expectations.

    PubMed

    Constantino, Michael J

    2012-01-01

    In this article I discuss one facet of my evolving research program focused on patients' psychotherapy-related expectations. Although generally considered a common psychotherapeutic factor, expectations have been historically undervalued conceptually, empirically, and clinically. Attempting to somewhat redress this slight, I will (a) define the various forms of patients' psychotherapy-related expectations, (b) present relevant findings from research that my colleagues, students, and I have conducted, (c) summarize an integrative psychotherapy approach that underscores expectations as an explanatory construct for patients' corrective experiences, and (d) highlight future research directions for increasing our understanding of the nature and functions of the expectancy construct.

  20. Fractography and estimates of fracture origin size from fracture mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, G.D.; Swab, J.J.

    1996-12-31

    Fracture mechanics should be used routinely in fractographic analyses in order to verify that the correct feature has been identified as the fracture origin. This was highlighted in a recent Versailles Advanced Materials and Standards (VAMAS) fractographic analysis round robin. The practice of using fracture mechanics as an aid to fractographic interpretation is codified in a new ASTM Standard Practice. Conversely, very good estimates for fracture toughness often come from fractographic analysis of strength-tested specimens. In many instances, however, the calculated flaw size differs from the empirically measured flaw size. This paper reviews the factors which may cause the discrepancies.
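    The cross-check referred to above is essentially the Griffith/Irwin relation K_Ic = Y * sigma_f * sqrt(pi * a) solved for the flaw size; the geometry factor Y and the numbers below are generic placeholders, not values from the round robin.

      from math import pi

      def estimated_flaw_size(k_ic, sigma_f, y=1.3):
          # Flaw depth (m) implied by K_Ic = Y * sigma_f * sqrt(pi * a).
          return (k_ic / (y * sigma_f)) ** 2 / pi

      # e.g. K_Ic = 4 MPa*sqrt(m), fracture strength 600 MPa, semicircular-flaw Y ~ 1.3
      a_metres = estimated_flaw_size(4.0e6, 600.0e6)  # about 8 micrometres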

  1. HST/WFC3: Understanding and Mitigating Radiation Damage Effects in the CCD Detectors

    NASA Astrophysics Data System (ADS)

    Baggett, S.; Anderson, J.; Sosey, M.; MacKenty, J.; Gosmeyer, C.; Noeske, K.; Gunning, H.; Bourque, M.

    2015-09-01

    At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel resides a 4096x4096 pixel e2v CCD array. While these detectors are performing extremely well after more than 5 years in low-earth orbit, the cumulative effects of radiation damage cause a continual growth in the hot pixel population and a progressive loss in charge transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. Several mitigation options exist, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low background images for a relatively small noise penalty. Currently all WFC3 observers are encouraged to post-flash images with low backgrounds. Another powerful option in mitigating CTE losses is the pixel-based CTE correction. Analogous to the CTE correction software currently in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an empirical observationally-constrained model of how much charge is captured and released in order to reconstruct the image. Applied to images (with or without post-flash) after they are acquired, the software is currently available as a standalone routine. The correction will be incorporated into the standard WFC3 calibration pipeline.

  2. Empirical Validation of a Procedure to Correct Position and Stimulus Biases in Matching-to-Sample

    ERIC Educational Resources Information Center

    Kangas, Brian D.; Branch, Marc N.

    2008-01-01

    The development of position and stimulus biases often occurs during initial training on matching-to-sample tasks. Furthermore, without intervention, these biases can be maintained via intermittent reinforcement provided by matching-to-sample contingencies. The present study evaluated the effectiveness of a correction procedure designed to…

  3. Individual Differences in Written Corrective Feedback: A Multi-Case Study

    ERIC Educational Resources Information Center

    Li, Su; Li, Pengjing

    2012-01-01

    Written corrective feedback (WCF) has been a long time practice in L2 writing instruction. However, in many cases, the effects are not satisfactory. There have been controversies about it both theoretically and empirically. This paper reports a multi-case study exploring individual differences that impact learners' responses to WCF. Four students'…

  4. The Impact of In-Service Training of Correctional Counselors.

    ERIC Educational Resources Information Center

    Smith, Thomas H.

    An empirical study was made on treatment atmosphere and shifts in interpersonal behavior in a military correctional treatment setting. The program studied was a small rehabilitation unit housing 100 to 140 enlisted men convicted by special or general court martial of various offenses ranging from AWOL to manslaughter. The objective of the unit was…

  5. Case Management in Community Corrections: Current Status and Future Directions

    ERIC Educational Resources Information Center

    Day, Andrew; Hardcastle, Lesley; Birgden, Astrid

    2012-01-01

    Case management is commonly regarded as the foundation of effective service provision across a wide range of human service settings. This article considers the case management that is offered to clients of community corrections, identifying the distinctive features of case management in this particular setting, and reviewing the empirical evidence…

  6. The use of six sigma in health care management: are we using it to its full potential?

    PubMed

    DelliFraine, Jami L; Wang, Zheng; McCaughey, Deirdre; Langabeer, James R; Erwin, Cathleen O

    2014-01-01

    Popular quality improvement tools such as Six Sigma (SS) claim to provide health care managers the opportunity to improve health care quality on the basis of sound methodology and data. However, it is unclear whether this quality improvement tool is being used correctly and improves health care quality. The authors conducted a comprehensive literature review to assess the correct use and implementation of SS and the empirical evidence demonstrating the relationship between SS and improved quality of care in health care organizations. The authors identified 310 articles on SS published in the last 15 years. However, only 55 were empirical peer-reviewed articles, 16 of which reported the correct use of SS. Only 7 of these articles included statistical analyses to test for significant changes in quality of care, and only 16 calculated defects per million opportunities or sigma level. This review demonstrates that there are significant gaps in the Six Sigma health care quality improvement literature and very weak evidence that Six Sigma is being used correctly to improve health care quality.
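    For reference, the two quantities the review checks for (defects per million opportunities and the corresponding sigma level) can be computed as follows; the counts are made-up illustrative numbers and the 1.5-sigma shift is the usual Six Sigma convention.

      from statistics import NormalDist

      def dpmo(defects, units, opportunities_per_unit):
          # Defects per million opportunities.
          return defects / (units * opportunities_per_unit) * 1_000_000

      def sigma_level(dpmo_value, shift=1.5):
          # Short-term sigma level with the conventional 1.5-sigma shift.
          return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

      d = dpmo(defects=12, units=5_000, opportunities_per_unit=3)  # 800 DPMO
      print(round(sigma_level(d), 2))                              # about 4.66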

  8. Empirical source strength correlations for RANS-based acoustic analogy methods

    NASA Astrophysics Data System (ADS)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources; quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate that there are underlying flaws in JeNo's ability to predict the behavior of a hot jet's acoustic signature at certain rear observer angles, and that this correlation correction is not able to correct these flaws.

  9. Item Selection and Pre-equating with Empirical Item Characteristic Curves.

    ERIC Educational Resources Information Center

    Livingston, Samuel A.

    An empirical item characteristic curve shows the probability of a correct response as a function of the student's total test score. These curves can be estimated from large-scale pretest data. They enable test developers to select items that discriminate well in the score region where decisions are made. A similar set of curves can be used to…
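    A minimal sketch of how such a curve can be estimated from pretest data by binning examinees on total score; the binning scheme is an assumption for illustration, not the procedure used in the report.

      import numpy as np

      def empirical_icc(total_scores, item_correct, bins=10):
          # Proportion answering the item correctly within each total-score bin.
          # Assumes enough distinct total scores that no bin ends up empty.
          total_scores = np.asarray(total_scores, dtype=float)
          item_correct = np.asarray(item_correct, dtype=float)
          edges = np.quantile(total_scores, np.linspace(0, 1, bins + 1))
          which = np.clip(np.digitize(total_scores, edges[1:-1]), 0, bins - 1)
          centers = np.array([total_scores[which == b].mean() for b in range(bins)])
          p_correct = np.array([item_correct[which == b].mean() for b in range(bins)])
          return centers, p_correct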

  10. Empirical resistive-force theory for slender biological filaments in shear-thinning fluids

    NASA Astrophysics Data System (ADS)

    Riley, Emily E.; Lauga, Eric

    2017-06-01

    Many cells exploit the bending or rotation of flagellar filaments in order to self-propel in viscous fluids. While appropriate theoretical modeling is available to capture flagella locomotion in simple, Newtonian fluids, formidable computations are required to address theoretically their locomotion in complex, nonlinear fluids, e.g., mucus. Based on experimental measurements for the motion of rigid rods in non-Newtonian fluids and on the classical Carreau fluid model, we propose empirical extensions of the classical Newtonian resistive-force theory to model the waving of slender filaments in non-Newtonian fluids. By assuming the flow near the flagellum to be locally Newtonian, we propose a self-consistent way to estimate the typical shear rate in the fluid, which we then use to construct correction factors to the Newtonian local drag coefficients. The resulting non-Newtonian resistive-force theory, while empirical, is consistent with the Newtonian limit, and with the experiments. We then use our models to address waving locomotion in non-Newtonian fluids and show that the resulting swimming speeds are systematically lowered, a result which we are able to capture asymptotically and to interpret physically. An application of the models to recent experimental results on the locomotion of Caenorhabditis elegans in polymeric solutions shows reasonable agreement and thus captures the main physics of swimming in shear-thinning fluids.
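    A rough sketch of the idea, not the paper's exact formulation: evaluate the Carreau viscosity at an estimated local shear rate and use it in place of the Newtonian viscosity in Gray-Hancock-type drag coefficients. The shear-rate estimate below is deliberately crude and not self-consistent.

      import numpy as np

      def carreau_viscosity(shear_rate, eta0, eta_inf, lam, n):
          # eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*g)^2)^((n-1)/2)
          return eta_inf + (eta0 - eta_inf) * (1 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

      def corrected_drag_coefficients(eta0, eta_inf, lam, n, omega, amplitude,
                                      filament_length, filament_radius):
          # Gray-Hancock coefficients with the viscosity evaluated at a crude
          # estimate of the typical shear rate near the waving filament.
          typical_shear = omega * amplitude / filament_radius
          eta_local = carreau_viscosity(typical_shear, eta0, eta_inf, lam, n)
          log_term = np.log(2 * filament_length / filament_radius)
          xi_perp = 4 * np.pi * eta_local / (log_term + 0.5)
          xi_par = 2 * np.pi * eta_local / (log_term - 0.5)
          return xi_perp, xi_par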

  11. Low Speed and High Speed Correlation of SMART Active Flap Rotor Loads

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi B. R.

    2010-01-01

    Measured, open loop and closed loop data from the SMART rotor test in the NASA Ames 40- by 80- Foot Wind Tunnel are compared with CAMRAD II calculations. One open loop high-speed case and four closed loop cases are considered. The closed loop cases include three high-speed cases and one low-speed case. Two of these high-speed cases include a 2 deg flap deflection at 5P case and a test maximum-airspeed case. This study follows a recent, open loop correlation effort that used a simple correction factor for the airfoil pitching moment Mach number. Compared to the earlier effort, the current open loop study considers more fundamental corrections based on advancing blade aerodynamic conditions. The airfoil tables themselves have been studied. Selected modifications to the HH-06 section flap airfoil pitching moment table are implemented. For the closed loop condition, the effect of the flap actuator is modeled by increased flap hinge stiffness. Overall, the open loop correlation is reasonable, thus confirming the basic correctness of the current semi-empirical modifications; the closed loop correlation is also reasonable considering that the current flap model is a first generation model. Detailed correlation results are given in the paper.

  12. The Prerogative of "Corrective Recasts" as a Sign of Hegemony in the Use of Language: Further Thoughts on Eric Hauser's (2005) "Coding 'Corrective Recasts': The Maintenance of Meaning and More Fundamental Problems"

    ERIC Educational Resources Information Center

    Rajagopalan, Kanavillil

    2006-01-01

    The objective of this response article is to think through some of what I see as the far-reaching implications of a recent paper by Eric Hauser (2005) entitled "Coding 'corrective recasts': the maintenance of meaning and more fundamental problems". Hauser makes a compelling, empirically-backed case for his contention that, contrary to widespread…

  13. A DEIM Induced CUR Factorization

    DTIC Science & Technology

    2015-09-18

    We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a low-rank approximation built from a subset of the actual columns and rows of A. The DEIM-induced factorization is compared with CUR approximations based on leverage scores.
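    A compact sketch of the basic construction (DEIM index selection applied to the leading singular vectors, then a CUR approximation assembled from the selected rows and columns); scaling choices and rank-adaptive variants from the report are omitted.

      import numpy as np

      def deim_indices(basis):
          # Greedy DEIM selection: pick the row where the residual against the
          # span of previously selected rows of the basis is largest.
          _, k = basis.shape
          idx = [int(np.argmax(np.abs(basis[:, 0])))]
          for j in range(1, k):
              c = np.linalg.solve(basis[idx, :j], basis[idx, j])
              r = basis[:, j] - basis[:, :j] @ c
              idx.append(int(np.argmax(np.abs(r))))
          return np.array(idx)

      def deim_cur(A, k):
          # Rank-k CUR approximation A ~ C @ U_mid @ R with rows/columns chosen
          # by DEIM applied to the leading left/right singular vectors of A.
          U, _, Vt = np.linalg.svd(A, full_matrices=False)
          rows = deim_indices(U[:, :k])
          cols = deim_indices(Vt.T[:, :k])
          C, R = A[:, cols], A[rows, :]
          U_mid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
          return C, U_mid, R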

  14. Corrective Feedback in SLA: Theoretical Relevance and Empirical Research

    ERIC Educational Resources Information Center

    Chen, Jin; Lin, Jianghao; Jiang, Lin

    2016-01-01

    Corrective feedback (CF) refers to the responses or treatments from teachers to a learner's nontargetlike second language (L2) production. CF has been a crucial and controversial topic in the discipline of second language acquisition (SLA). Some SLA theorists believe that CF is harmful to L2 acquisition and should be ruled out completely while…

  15. Impact of Recasts on the Accuracy in EFL Learners' Writing

    ERIC Educational Resources Information Center

    Degteva, Olga

    2011-01-01

    Since the famous Truscott's "The case against grammar correction in L2 writing class" (1996) there has been an ongoing debate in SLA research about the value of corrective feedback and its different forms. A growing number of empirical research is now investigating the question, and although more and more evidence is obtained against Truscott's…

  16. Topside correction of IRI by global modeling of ionospheric scale height using COSMIC radio occultation data

    NASA Astrophysics Data System (ADS)

    Wu, M. J.; Guo, P.; Fu, N. F.; Xu, T. L.; Xu, X. S.; Jin, H. L.; Hu, X. G.

    2016-06-01

    The ionospheric scale height is one of the most significant ionospheric parameters, containing information about the ion and electron temperatures and the dynamics of the upper ionosphere. In this paper, an empirical orthogonal function (EOF) analysis method is applied to all ionospheric radio occultations of GPS/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) from the year 2007 to 2011 to reconstruct a global ionospheric scale height model. This monthly median model has a spatial resolution of 5° in geomagnetic latitude (-87.5° to 87.5°) and a temporal resolution of 2 h in local time. The EOF analysis preserves the characteristics of the scale height quite well in its geomagnetic latitudinal, annual, seasonal, and diurnal variations. In comparison with COSMIC measurements from the year 2012, the reconstructed model shows reasonable accuracy. In order to improve the topside model of the International Reference Ionosphere (IRI), we attempted to adopt the scale height model in the Bent topside model by applying a scale factor q as an additional constraint. With the factor q acting in the exponential profile of the topside ionosphere, the IRI scale height is forced to equal the precise COSMIC measurements. In this way, the IRI topside profile can be improved to come closer to the realistic density profiles. An internal quality check of this approach is carried out by comparing COSMIC measurements with IRI, with and without the correction. In general, the initial IRI model overestimates the topside electron density to some extent, and with the correction introduced by the COSMIC scale height model, the deviation in vertical total electron content (VTEC) between them is reduced. Furthermore, independent validation with Global Ionospheric Maps VTEC implies a reasonable improvement in the IRI VTEC with the topside model correction.
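    A generic sketch of the EOF decomposition and reconstruction step; the 5° latitude by 2 h local-time gridding and the scale-factor adjustment of the Bent topside profile described above are not reproduced here.

      import numpy as np

      def eof_model(data_matrix, n_modes):
          # data_matrix: (time, grid-cell) array of monthly median scale heights.
          mean_field = data_matrix.mean(axis=0)
          U, s, Vt = np.linalg.svd(data_matrix - mean_field, full_matrices=False)
          eofs = Vt[:n_modes]                        # spatial patterns
          amplitudes = U[:, :n_modes] * s[:n_modes]  # time-dependent coefficients
          return mean_field, eofs, amplitudes

      def reconstruct(mean_field, eofs, amplitudes):
          # Rebuild the gridded scale-height field from the retained modes.
          return mean_field + amplitudes @ eofs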

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witte, Jonathon; Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720; Neaton, Jeffrey B., E-mail: jbneaton@lbl.gov

    Adsorption of gas molecules in metal-organic frameworks is governed by many factors, the most dominant of which are the interaction of the gas with open metal sites, and the interaction of the gas with the ligands. Herein, we examine the latter class of interaction in the context of CO2 binding to benzene. We begin by clarifying the geometry of the CO2-benzene complex. We then generate a benchmark binding curve using a coupled-cluster approach with single, double, and perturbative triple excitations [CCSD(T)] at the complete basis set (CBS) limit. Against this ΔCCSD(T)/CBS standard, we evaluate a plethora of electronic structure approximations: Hartree-Fock, second-order Møller-Plesset perturbation theory (MP2) with the resolution-of-the-identity approximation, attenuated MP2, and a number of density functionals with and without different empirical and nonempirical van der Waals corrections. We find that finite-basis MP2 significantly overbinds the complex. On the other hand, even the simplest empirical correction to standard density functionals is sufficient to bring the binding energies to well within 1 kJ/mol of the benchmark, corresponding to an error of less than 10%; PBE-D in particular performs well. Methods that explicitly include nonlocal correlation kernels, such as VV10, vdW-DF2, and ωB97X-V, perform with similar accuracy for this system, as do ωB97X and M06-L.

  18. First Principle Predictions of Isotopic Shifts in H2O

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    We compute isotope independent first and second order corrections to the Born-Oppenheimer approximation for water and use them to predict isotopic shifts. For the diagonal correction, we use icMRCI wavefunctions and derivatives with respect to mass dependent, internal coordinates to generate the mass independent correction functions. For the non-adiabatic correction, we use scaled SCF/CIS wave functions and a generalization of the Handy method to obtain mass independent correction functions. We find that including the non-adiabatic correction gives significantly improved results compared to just including the diagonal correction when the Born-Oppenheimer potential energy surface is optimized for H2O-16. The agreement with experimental results for deuterium and tritium containing isotopes is nearly as good as our best empirical correction, however, the present correction is expected to be more reliable for higher, uncharacterized levels.

  19. A Note on Knowledge in the Schooled Society: Towards an End to the Crisis in Curriculum Theory

    ERIC Educational Resources Information Center

    Baker, David P.

    2015-01-01

    Michael Young's recent paper in this journal is correct; there is a profound crisis in curriculum theory, and to be intellectually viable into the future the field must strive to "bring back" in empirical study of curriculum. Also by ignoring the empirical content of knowledge and access to it in mass education systems throughout the…

  20. THE CALCULATION OF BURNABLE POISON CORRECTION FACTORS FOR PWR FRESH FUEL ACTIVE COLLAR MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea; Swinhoe, Martyn T.

    2012-06-19

    Verification of commercial low enriched uranium light water reactor fuel takes place at the fuel fabrication facility as part of the overall international nuclear safeguards solution to the civilian use of nuclear technology. The fissile mass per unit length is determined nondestructively by active neutron coincidence counting using a neutron collar. A collar comprises four slabs of high density polyethylene that surround the assembly. Three of the slabs contain 3He filled proportional counters to detect time correlated fission neutrons induced by an AmLi source placed in the fourth slab. Historically, the response of a particular collar design to a particular fuel assembly type has been established by careful cross-calibration to experimental absolute calibrations. Traceability exists to sources and materials held at Los Alamos National Laboratory for over 35 years. This simple yet powerful approach has ensured consistency of application. Since the 1980s there has been a steady improvement in fuel performance. The trend has been to higher burn up. This requires the use of both higher initial enrichment and greater concentrations of burnable poisons. The original analytical relationships to correct for varying fuel composition are consequently being challenged, because the experimental basis for them made use of fuels of lower enrichment and lower poison content than is in use today and is envisioned for use in the near term. Thus a reassessment of the correction factors is needed. Experimental reassessment is expensive and time consuming given the great variation between fuel assemblies in circulation. Fortunately, current modeling methods enable relative response functions to be calculated with high accuracy. Hence modeling provides a more convenient and cost effective means to derive correction factors which are fit for purpose with confidence. In this work we use the Monte Carlo code MCNPX with neutron coincidence tallies to calculate the influence of Gd2O3 burnable poison on the measurement of fresh pressurized water reactor fuel. To empirically determine the response function over the range of historical and future use, we have considered enrichments up to 5 wt% 235U/totU and Gd weight fractions of up to 10% Gd/UO2. Parameterized correction factors are presented.

  1. i4OilSpill, an operational marine oil spill forecasting model for Bohai Sea

    NASA Astrophysics Data System (ADS)

    Yu, Fangjie; Yao, Fuxin; Zhao, Yang; Wang, Guansuo; Chen, Ge

    2016-10-01

    Oil spill models can effectively simulate the trajectories and fate of oil slicks, which is an essential element in contingency planning and effective response strategies for oil spill accidents. However, when applied to offshore areas such as the Bohai Sea, the trajectories and fate of oil slicks are affected by factors that vary in time on a regional scale but are assumed to be constant in most present models. In fact, these factors in offshore regions show much more variation over time than in the deep sea, due to offshore bathymetric and climatic characteristics. In this paper, the challenge of parameterizing these offshore factors is tackled. Remote sensing data of the region are used to analyze the modification of wind-induced drift factors, and a well-suited parameter correction mechanism for oil spill models is established. The novelty of the algorithm is the self-adaptive modification of the drift factors derived from the remote sensing data for the targeted sea region, in place of the empirical constants used in present models. On this basis, a new regional oil spill model (i4OilSpill) for the Bohai Sea is developed, which simulates oil transformation and fate processes with an Eulerian-Lagrangian methodology. The forecasting accuracy of the proposed model is demonstrated by validation results comparing model simulations with subsequent satellite observations of the Penglai 19-3 oil spill accident. The performance of the model's parameter correction mechanism is evaluated by comparison with the actual spilled-oil positions extracted from ASAR images.

  2. The more the merrier? Increasing group size may be detrimental to decision-making performance in nominal groups.

    PubMed

    Amir, Ofra; Amir, Dor; Shahar, Yuval; Hart, Yuval; Gal, Kobi

    2018-01-01

    Demonstrability-the extent to which group members can recognize a correct solution to a problem-has a significant effect on group performance. However, the interplay between group size, demonstrability and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors-the difficulty of solving a problem and the difficulty of verifying the correctness of a solution-on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-Complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually decrease group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making.

  3. Corrective Feedback in L2 Writing: Theoretical Perspectives, Empirical Insights, and Future Directions

    ERIC Educational Resources Information Center

    Van Beuningen, Catherine

    2010-01-01

    The role of (written) corrective feedback (CF) in the process of acquiring a second language (L2) has been an issue of considerable controversy among theorists and researchers alike. Although CF is a widely applied pedagogical tool and its use finds support in SLA theory, practical and theoretical objections to its usefulness have been raised…

  4. Survey Response-Related Biases in Contingent Valuation: Concepts, Remedies, and Empirical Application to Valuing Aquatic Plant Management

    Treesearch

    Mark L. Messonnier; John C. Bergstrom; Chrisopher M. Cornwell; R. Jeff Teasley; H. Ken Cordell

    2000-01-01

    Simple nonresponse and selection biases that may occur in survey research such as contingent valuation applications are discussed and tested. Correction mechanisms for these types of biases are demonstrated. Results indicate the importance of testing and correcting for unit and item nonresponse bias in contingent valuation survey data. When sample nonresponse and...

  5. Liquefaction assessment based on combined use of CPT and shear wave velocity measurements

    NASA Astrophysics Data System (ADS)

    Bán, Zoltán; Mahler, András; Győri, Erzsébet

    2017-04-01

    Soil liquefaction is one of the most devastating secondary effects of earthquakes and can cause significant damage to built infrastructure. For this reason, liquefaction hazard should be considered in all regions where moderate-to-high seismic activity coincides with saturated, loose, granular soil deposits. Several approaches exist to account for this hazard, of which the in-situ test based empirical methods are the most commonly used in practice. These methods are generally based on the results of CPT, SPT or shear wave velocity measurements. In more complex or high-risk projects, CPT and VS measurements are often performed at the same location, commonly in the form of seismic CPT. Furthermore, a VS profile determined by surface wave methods can also supplement the standard CPT measurement. However, the combined use of both in-situ indices in one single empirical method is limited. For this reason, the goal of this research was to develop such an empirical method within the framework of simplified empirical procedures, in which the results of CPT and VS measurements are used in parallel and can supplement each other. The combination of two in-situ indices, a small-strain property measurement with a large-strain measurement, can reduce the uncertainty of empirical methods. In the first step, the existing liquefaction case history databases were carefully reviewed and sites were selected where records of both CPT and VS measurements are available. After implementing the necessary corrections on the gathered 98 case histories with respect to fines content, overburden pressure and magnitude, a logistic regression was performed to obtain the probability contours of liquefaction occurrence. Logistic regression is often used to explore the relationship between a binary response and a set of explanatory variables. The occurrence or absence of liquefaction can be treated as the binary outcome, and the equivalent clean-sand value of the normalized, overburden-corrected cone tip resistance (qc1Ncs), the overburden-corrected shear wave velocity (VS1), and the magnitude- and effective-stress-corrected cyclic stress ratio (CSR at M = 7.5 and σv' = 1 atm) were considered as input variables. In this case, the graphical representation of the cyclic resistance ratio curve for a given probability is replaced by a surface that separates the liquefaction and non-liquefaction cases.
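    A minimal illustration of the regression step using scikit-learn; the three case histories below are fabricated placeholders, not records from the study database, and the log transform of the predictors is an assumption for illustration only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Columns: qc1Ncs, Vs1 (m/s), CSR at M=7.5 and sigma_v'=1 atm; placeholder values.
      X = np.array([[ 60.0, 140.0, 0.30],
                    [ 85.0, 160.0, 0.22],
                    [140.0, 210.0, 0.18]])
      y = np.array([1, 1, 0])  # 1 = liquefaction observed, 0 = no liquefaction

      model = LogisticRegression().fit(np.log(X), y)
      p_liq = model.predict_proba(np.log([[100.0, 180.0, 0.25]]))[0, 1]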

  6. Empirical Corrections to Nutation Amplitudes and Precession Computed from a Global VLBI Solution

    NASA Astrophysics Data System (ADS)

    Schuh, H.; Ferrandiz, J. M.; Belda-Palazón, S.; Heinkelmann, R.; Karbon, M.; Nilsson, T.

    2017-12-01

    The IAU2000A nutation and IAU2006 precession models were adopted to provide accurate estimations and predictions of the Celestial Intermediate Pole (CIP). However, they are not fully accurate, and VLBI (Very Long Baseline Interferometry) observations show that the CIP deviates from the position resulting from the application of the IAU2006/2000A model. Currently, those deviations or offsets of the CIP (Celestial Pole Offsets, CPO) can only be obtained by the VLBI technique. The accuracy, of the order of 0.1 milliarcseconds (mas), allows the observed nutation to be compared with theoretical prediction models for a rigid Earth and geophysical parameters describing the Earth's interior to be constrained. In this study, we empirically evaluate the consistency, systematics and deviations of the IAU 2006/2000A precession-nutation model using several CPO time series derived from the global analysis of VLBI sessions. The final objective is the reassessment of the precession offset and rate, and of the amplitudes of the principal terms of nutation, in order to empirically improve the conventional values derived from the precession/nutation theories. The statistical analysis of the residuals after re-fitting the main nutation terms demonstrates that our empirical corrections attain an error reduction of almost 15 microarcseconds.

  7. The Factor Content of Bilateral Trade: An Empirical Test.

    ERIC Educational Resources Information Center

    Choi, Yong-Seok; Krishna, Pravin

    2004-01-01

    The factor proportions model of international trade is one of the most influential theories in international economics. Its central standing in this field has appropriately prompted, particularly recently, intense empirical scrutiny. A substantial and growing body of empirical work has tested the predictions of the theory on the net factor content…

  8. Stress intensity factors for long, deep surface flaws in plates under extensional fields

    NASA Technical Reports Server (NTRS)

    Harms, A. E.; Smith, C. W.

    1973-01-01

    Using a singular solution for a part-circular crack, a Taylor Series Correction Method (TSCM) was verified for extracting stress intensity factors from photoelastic data. Photoelastic experiments were then conducted on plates with part-circular and flat-bottomed cracks for flaw depth to thickness ratios of 0.25, 0.50 and 0.75 and for equivalent flaw depth to equivalent ellipse length values ranging from 0.066 to 0.319. Experimental results agreed well with the Smith theory but indicated that the use of the "equivalent" semi-elliptical flaw results was not valid for a/2c less than 0.20. Best overall agreement for the moderate (a/t approximately 0.5) to deep flaws (a/t approximately 0.75) and a/2c greater than 0.15 was found with a semi-empirical theory, when compared on the basis of equivalent flaw depth and area.

  9. Crustal Thickness Mapping of the Rifted Margin Ocean-Continent Transition using Satellite Gravity Inversion Incorporating a Lithosphere Thermal Correction

    NASA Astrophysics Data System (ADS)

    Hurst, N. W.; Kusznir, N. J.

    2005-05-01

    A new method of inverting satellite gravity at rifted continental margins to give crustal thickness, incorporating a lithosphere thermal correction, has been developed which does not use a priori information about the location of the ocean-continent transition (OCT) and provides an independent prediction of OCT location. Satellite derived gravity anomaly data (Sandwell and Smith 1997) and bathymetry data (Gebco 2003) are used to derive the mantle residual gravity anomaly which is inverted in 3D in the spectral domain to give Moho depth. Oceanic lithosphere and stretched continental margin lithosphere produce a large negative residual thermal gravity anomaly (up to -380 mgal), which must be corrected for in order to determine Moho depth. This thermal gravity correction may be determined for oceanic lithosphere using oceanic isochron data, and for the thinned continental margin lithosphere using margin rift age and beta stretching estimates iteratively derived from crustal basement thickness determined from the gravity inversion. The gravity inversion using the thermal gravity correction predicts oceanic crustal thicknesses consistent with seismic observations, while that without the thermal correction predicts much too great oceanic crustal thicknesses. Predicted Moho depth and crustal thinning across the Hatton and Faroes rifted margins, using the gravity inversion with embedded thermal correction, compare well with those produced by wide-angle seismology. A new gravity inversion method has been developed in which no isochrons are used to define the thermal gravity correction. The new method assumes all lithosphere to be initially continental and a uniform lithosphere stretching age is used corresponding to the time of continental breakup. The thinning factor produced by the gravity inversion is used to predict the thickness of oceanic crust. This new modified form of gravity inversion with embedded thermal correction provides an improved estimate of rifted continental margin crustal thinning and an improved (and isochron independent) prediction of OCT location. The new method uses an empirical relationship to predict the thickness of oceanic crust as a function of lithosphere thinning factor controlled by two input parameters: a critical thinning factor for the start of ocean crust production and the maximum oceanic crustal thickness produced when the thinning factor = 1, corresponding to infinite lithosphere stretching. The disadvantage of using a uniform stretching age corresponding to the age of continental breakup is that the inversion fails to predict increasing thermal gravity correction towards the ocean ridge and incorrectly predicts thickening of oceanic crust with decreasing oceanic age. The new gravity inversion method has been applied to N. Atlantic rifted margins. This work forms part of the NERC Margins iSIMM project. iSIMM investigators are from Liverpool and Cambridge Universities, Badley Geoscience & Schlumberger Cambridge Research supported by the NERC, the DTI, Agip UK, BP, Amerada Hess Ltd, Anadarko, ConocoPhillips, Shell, Statoil and WesternGeco. The iSIMM team comprises NJ Kusznir, RS White, AM Roberts, PAF Christie, A Chappell, J Eccles, R Fletcher, D Healy, N Hurst, ZC Lunnon, CJ Parkin, AW Roberts, LK Smith, V Tymms & R Spitzer.
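    The two-parameter relationship described above might look like the following sketch; the linear ramp and the default parameter values are assumptions for illustration only, not the values used in the iSIMM work.

      def oceanic_crust_thickness_km(thinning_factor, critical=0.7, max_thickness_km=7.0):
          # Zero magmatic crust below the critical thinning factor, rising to the
          # maximum thickness at a thinning factor of 1 (infinite stretching).
          if thinning_factor <= critical:
              return 0.0
          return max_thickness_km * (thinning_factor - critical) / (1.0 - critical)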

  10. Accounting for Population Structure in Gene-by-Environment Interactions in Genome-Wide Association Studies Using Mixed Models.

    PubMed

    Sul, Jae Hoon; Bilow, Michael; Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar

    2016-03-01

    Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control effects of population structure that may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for population structure. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied and nor have there been methods designed to correct for population structure on GEI statistics. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs and use both simulation and two GWAS datasets to support our finding. We propose a statistical approach based on mixed models to account for population structure on GEI statistics. We find that our approach effectively controls population structure on statistics for GEIs as well as for genetic variants.

  11. A new empirical potential energy function for Ar2

    NASA Astrophysics Data System (ADS)

    Myatt, Philip T.; Dham, Ashok K.; Chandrasekhar, Pragna; McCourt, Frederick R. W.; Le Roy, Robert J.

    2018-06-01

    A critical re-analysis of all available spectroscopic and virial coefficient data for Ar2 has been used to determine an improved empirical analytic potential energy function that has been 'tuned' to optimise its agreement with viscosity, diffusion and thermal diffusion data, and whose short-range behaviour is in reasonably good agreement with the most recent ab initio calculations for this system. The recommended Morse/long-range potential function is smooth and differentiable at all distances, and incorporates both the correct theoretically predicted long-range behaviour and the correct limiting short-range functional behaviour. The resulting value of the well depth is ? cm-1 and the associated equilibrium distance is re = 3.766 (±0.002) Å, while the 40Ar s-wave scattering length is -714 Å.

  12. Non-Tidal Ocean Loading Correction for the Argentinean-German Geodetic Observatory Using an Empirical Model of Storm Surge for the Río de la Plata

    NASA Astrophysics Data System (ADS)

    Oreiro, F. A.; Wziontek, H.; Fiore, M. M. E.; D'Onofrio, E. E.; Brunini, C.

    2018-05-01

    The Argentinean-German Geodetic Observatory is located 13 km from the Río de la Plata, in an area that is frequently affected by storm surges that can change the level of the river by more than ±3 m. Water-level information from seven tide gauge stations located in the Río de la Plata is used to calculate, every hour, an empirical model of water heights (tidal + non-tidal component) and an empirical model of storm surge (non-tidal component) for the period 01/2016-12/2016. Using the SPOTL software, the gravimetric responses of the models and the tidal response are calculated; for the observatory location, the range of the tidal component (3.6 nm/s2) is only 12% of the range of the non-tidal component (29.4 nm/s2). The gravimetric response of the storm surge model is subtracted from the superconducting gravimeter observations, after applying the traditional corrections, and a reduction of 7% in the RMS is obtained. The wavelet transform is applied to the same series, before and after the non-tidal correction, and a clear decrease in the spectral energy at periods between 2 and 12 days is identified between the series. Using the same software, East, North and Up displacements are calculated, with ranges of 3, 2, and 11 mm, respectively. The residuals obtained after applying the non-tidal correction allow the influence of rain events on the superconducting gravimeter observations to be clearly identified, indicating the need to analyse this and other hydrological and geophysical effects.

  13. The empirical Bayes estimators of fine-scale population structure in high gene flow species.

    PubMed

    Kitada, Shuichi; Nakamichi, Reiichiro; Kishino, Hirohisa

    2017-11-01

    An empirical Bayes (EB) pairwise FST estimator was previously introduced and evaluated for its performance by numerical simulation. In this study, we conducted coalescent simulations, generated genetic population structure mechanistically, and compared the performance of the EB FST with Nei's GST, Nei and Chesser's bias-corrected GST (GST_NC), Weir and Cockerham's θ (θWC) and θ with finite sample correction (θWC_F). We also introduced EB estimators for Hedrick's G'ST and Jost's D. We applied these estimators to publicly available SNP genotypes of Atlantic herring. We also examined the power to detect the environmental factors causing the population structure. Our coalescent simulations revealed that the finite sample correction of θWC is necessary to assess population structure using pairwise FST values. For microsatellite markers, EB FST performed the best among the present estimators regarding both bias and precision under high gene flow scenarios (FST ≤ 0.032). For 300 SNPs, EB FST had the highest precision in all cases, but the bias was negative and greater than those for GST_NC and θWC_F in all cases. GST_NC and θWC_F performed very similarly at all levels of FST. As the number of loci increased up to 10 000, the precision of GST_NC and θWC_F became slightly better than that of EB FST for cases with FST ≥ 0.004, even though the size of the bias remained constant. The EB estimators described the fine-scale population structure of the herring and revealed that ~56% of the genetic differentiation was caused by sea surface temperature and salinity. The R package finepop, implementing all estimators used here, is available on CRAN. © 2017 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.

  14. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  15. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  16. Effects of people-centred factors on enterprise resource planning implementation project success: empirical evidence from Sri Lanka

    NASA Astrophysics Data System (ADS)

    Wickramasinghe, Vathsala; Gunawardena, Vathsala

    2010-08-01

    Extant literature suggests people-centred factors as one of the major areas influencing enterprise resource planning (ERP) implementation project success. Yet, to date, few empirical studies have attempted to validate the link between people-centred factors and ERP implementation project success. The purpose of this study is to empirically identify people-centred factors that are critical to ERP implementation projects in Sri Lanka. The study develops and empirically validates a framework for people-centred factors that influence the success of ERP implementation projects. Survey research methodology was used, and data were collected from 74 ERP implementation projects in Sri Lanka. The people-centred factors of 'project team competence', 'rewards' and 'communication and change' were found to significantly predict ERP implementation project success.

  17. Formulation, Implementation and Validation of a Two-Fluid model in a Fuel Cell CFD Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Kunal; Cole, J. Vernon; Kumar, Sanjiv

    2008-12-01

    Water management is one of the main challenges in PEM Fuel Cells. While water is essential for membrane electrical conductivity, excess liquid water leads to flooding of catalyst layers. Despite the fact that accurate prediction of two-phase transport is key for optimal water management, understanding of two-phase transport in fuel cells is relatively poor. Wang et al. have studied two-phase transport in the channel and diffusion layer separately using a multiphase mixture model. The model fails to accurately predict saturation values for high-humidity inlet streams. Nguyen et al. developed a two-dimensional, two-phase, isothermal, isobaric, steady-state model of the catalyst and gas diffusion layers. The model neglects any liquid in the channel. Djilali et al. developed a three-dimensional two-phase multicomponent model. The model is an improvement over previous models, but neglects drag between the liquid and the gas phases in the channel. In this work, we present a comprehensive two-fluid model relevant to fuel cells. Models for two-phase transport through the channel, the Gas Diffusion Layer (GDL) and the channel-GDL interface are discussed. In the channel, the gas and liquid pressures are assumed to be the same. The surface tension effects in the channel are incorporated using the continuum surface force (CSF) model. The force at the surface is expressed as a volumetric body force and added as a source to the momentum equation. In the GDL, the gas and liquid are assumed to be at different pressures. The difference in the pressures (the capillary pressure) is calculated using an empirical correlation. At the channel-GDL interface, wall adhesion effects need to be taken into account. SIMPLE-type methods recast the continuity equation into a pressure-correction equation, the solution of which then provides corrections for velocities and pressures. However, in the two-fluid model, the presence of two phasic continuity equations gives more freedom and more complications. A general approach is to form a mixture continuity equation by linearly combining the phasic continuity equations using appropriate weighting factors. Analogous to the mixture equation for pressure correction, a difference equation is used for the volume/phase fraction by taking the difference between the phasic continuity equations. The relative advantages of the above-mentioned algorithmic variants for computing the pressure correction and volume fractions are discussed and quantitatively assessed. Preliminary model validation is done for each component of the fuel cell. The two-phase transport in the channel is validated using empirical correlations. Transport in the GDL is validated against results obtained from LBM and VOF simulation techniques. The channel-GDL interface transport will be validated against experiment and an empirical correlation for droplet detachment at the interface.
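
    The abstract states only that the capillary pressure in the GDL is obtained from an empirical correlation, without naming it. As an illustration of that kind of closure, the sketch below uses the Leverett/Udell J-function, a correlation commonly used in fuel-cell GDL models; the choice of correlation and all parameter values are assumptions, not taken from the paper.

```python
import math

def leverett_j(s, hydrophobic=True):
    """Udell's Leverett J-function, a common empirical correlation for
    capillary pressure in porous media; s is the liquid saturation in [0, 1]."""
    x = s if hydrophobic else 1.0 - s
    return 1.417 * x - 2.120 * x ** 2 + 1.263 * x ** 3

def capillary_pressure(s, sigma=0.0625, contact_angle_deg=110.0,
                       porosity=0.6, permeability_m2=1e-12):
    """p_c = p_gas - p_liquid = sigma * |cos(theta)| * sqrt(eps / K) * J(s).
    All default parameter values are illustrative only."""
    theta = math.radians(contact_angle_deg)
    scale = sigma * abs(math.cos(theta)) * math.sqrt(porosity / permeability_m2)
    return scale * leverett_j(s, hydrophobic=contact_angle_deg > 90.0)
```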

  18. Use of geophysical logs to estimate the quality of ground water and the permeability of aquifers

    USGS Publications Warehouse

    Hudson, J.D.

    1996-01-01

    The relation of formation factor to resistivity of formation water and intergranular permeability has often been investigated, and the general consensus is that this relation is closest when established in a clean-sand aquifer in which water quality does not vary substantially. When these restrictions are applied, the following standard equation is a useful tool in estimating the resistivity of the formation water: F = Ro/Rw, where F is the formation factor, which is a function of the effective porosity; Ro is the resistivity of a formation that is 100 percent saturated with interstitial water; and Rw is the resistivity of the water in the saturated zone. However, arenaceous aquifers can have electrical resistivities that are not directly related to resistivity of water or porosity. Surface conductivity and ion exchange are significant factors when the sediments are clay bearing. The solid constituents are a major component of the parameters needed to solve the equation for formation-water resistivity and estimates of aquifer permeability. A correction process needs to be applied to adjust the variables, Ro and F, to the equivalent of clean sand. This report presents an empirical method of using the neutron log and the electrical-resistivity values from long- and short-normal resistivity logs to correct for fine-grained material and the subsequent effects of low impedance to electrical flow that are not related to the resistivity of formation water.
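
    A minimal sketch of the clean-sand relation quoted above, rearranged to estimate the formation-water resistivity; the clay corrections to Ro and F described in the report are not reproduced here.

```python
def formation_water_resistivity(ro_ohm_m, formation_factor):
    """Standard relation F = Ro / Rw rearranged for Rw.

    ro_ohm_m        : resistivity of a formation 100 % saturated with water
    formation_factor: F, a function of effective porosity
    Valid as written only for clean sands; clay-bearing sediments require
    the log-based corrections to Ro and F described in the report."""
    return ro_ohm_m / formation_factor
```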

  19. Using ERP and WfM Systems for Implementing Business Processes: An Empirical Study

    NASA Astrophysics Data System (ADS)

    Aversano, Lerina; Tortorella, Maria

    Software systems mainly considered by enterprises for business process automation belong to two categories: Workflow Management Systems (WfMS) and Enterprise Resource Planning (ERP) systems. The wider diffusion of ERP systems tends to favour this solution, but most ERP systems have several limitations for automating business processes. This paper reports an empirical study aimed at comparing the ability of ERP systems and WfMSs to implement business processes. Two different case studies have been considered in the empirical study. It evaluates and analyses the correctness and completeness of the process models implemented by using ERP and WfM systems.

  20. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    A drone aircraft equipped with an active flutter suppression system is considered with emphasis on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are given for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. The mathematical models are included and existing analytical techniques are described as well as an alternative analytical technique for obtaining closed-loop results.

  1. Effect of quantity and composition of waste on the prediction of annual methane potential from landfills.

    PubMed

    Cho, Han Sang; Moon, Hee Sun; Kim, Jae Young

    2012-04-01

    A study was conducted to investigate the effect of waste composition change on methane production in landfills. An empirical equation for the methane potential of the mixed waste is derived based on the methane potential values of individual waste components and the compositional ratio of the waste components. A correction factor was introduced in the equation and was determined from BMP (biochemical methane potential) and lysimeter tests. The equation and LandGEM were applied to a full-size landfill and the annual methane potential was estimated. Results showed that changes in the quantity of waste affected the annual methane potential from the landfill more than changes in waste composition. Copyright © 2012 Elsevier Ltd. All rights reserved.
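
    A hedged sketch of the mixture rule implied by the abstract, i.e. a composition-weighted sum of component methane potentials scaled by an empirical correction factor; the exact functional form, component categories, and values used by the authors are assumptions here.

```python
def mixed_waste_methane_potential(component_potentials, mass_fractions,
                                  correction_factor=1.0):
    """Methane potential of mixed waste as a correction factor times the
    composition-weighted sum of individual component potentials.

    component_potentials: dict, e.g. {"food": 300.0, "paper": 200.0} (m3 CH4 / Mg)
    mass_fractions      : dict with the same keys; fractions sum to 1
    correction_factor   : empirical factor determined from BMP and lysimeter tests
    """
    weighted = sum(component_potentials[k] * mass_fractions[k]
                   for k in component_potentials)
    return correction_factor * weighted
```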

  2. The integral line-beam method for gamma skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Bassett, M.S.

    1991-03-01

    This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
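
    A minimal sketch of the shield-transmission step described above, i.e. exponential attenuation with a buildup-factor correction for photons scattered in the source shield; the line-beam response function itself and its fitted coefficients are not reproduced.

```python
import math

def shielded_emission_rate(unshielded_rate, mu_per_cm, thickness_cm, buildup=1.0):
    """Photon emission rate transmitted through a source shield:
    exponential attenuation times a buildup factor accounting for photons
    scattered in the shield. Buildup factors come from standard tabulations."""
    return unshielded_rate * buildup * math.exp(-mu_per_cm * thickness_cm)
```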

  3. Effective empirical corrections for basis set superposition error in the def2-SVPD basis: gCP and DFT-C

    NASA Astrophysics Data System (ADS)

    Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin

    2017-06-01

    With the aim of mitigating the basis set error in density functional theory (DFT) calculations employing local basis sets, we herein develop two empirical corrections for basis set superposition error (BSSE) in the def2-SVPD basis, a basis which—when stripped of BSSE—is capable of providing near-complete-basis DFT results for non-covalent interactions. Specifically, we adapt the existing pairwise geometrical counterpoise (gCP) approach to the def2-SVPD basis, and we develop a beyond-pairwise approach, DFT-C, which we parameterize across a small set of intermolecular interactions. Both gCP and DFT-C are evaluated against the traditional Boys-Bernardi counterpoise correction across a set of 3402 non-covalent binding energies and isomerization energies. We find that the DFT-C method represents a significant improvement over gCP, particularly for non-covalently-interacting molecular clusters. Moreover, DFT-C is transferable among density functionals and can be combined with existing functionals—such as B97M-V—to recover large-basis results at a fraction of the cost.

  4. Reduction of Topographic Effect for Curve Number Estimated from Remotely Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Yan; Lin, Chao-Yuan

    2016-04-01

    The Soil Conservation Service Curve Number (SCS-CN) method is commonly used in hydrology to estimate direct runoff volume. The CN is an empirical parameter corresponding to land use/land cover, hydrologic soil group, and antecedent soil moisture condition. In large watersheds with complex topography, satellite remote sensing is an appropriate approach for acquiring land use change information. However, topographic effects are commonly present in remotely sensed imagery and degrade land use classification. This research selected summer and winter scenes of Landsat-5 TM acquired during 2008 to classify land use in the Chen-You-Lan Watershed, Taiwan. The b-correction, an empirical topographic correction method, was applied to the Landsat-5 TM data. Land use was categorized using K-means classification into 4 groups, i.e., forest, grassland, agriculture, and river. Accuracy assessment of the image classification was performed against the national land use map. The results showed that after topographic correction, the overall accuracy of classification increased from 68.0% to 74.5%. The average CN estimated from remotely sensed imagery decreased from 48.69 to 45.35, whereas the average CN estimated from the national LULC map was 44.11. Therefore, the topographic correction method is recommended to normalize the topographic effect in satellite remote sensing data before estimating the CN.
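
    For reference, the standard SCS-CN direct-runoff equation (SI units) that the estimated CN values feed into; this is the textbook formula, not code from the study.

```python
def scs_cn_runoff(rainfall_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from the SCS-CN method:
    S = 25400 / CN - 254,  Ia = ia_ratio * S,
    Q = (P - Ia)**2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# e.g. scs_cn_runoff(100.0, 75) gives roughly 41 mm of direct runoff
```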

  5. Learning versus correct models: influence of model type on the learning of a free-weight squat lift.

    PubMed

    McCullagh, P; Meyer, K N

    1997-03-01

    It has been assumed that demonstrating the correct movement is the best way to impart task-relevant information. However, empirical verification with simple laboratory skills has shown that using a learning model (showing an individual in the process of acquiring the skill to be learned) may accelerate skill acquisition and increase retention more than using a correct model. The purpose of the present study was to compare the effectiveness of viewing correct versus learning models on the acquisition of a sport skill (free-weight squat lift). Forty female participants were assigned to four learning conditions: physical practice receiving feedback, learning model with model feedback, correct model with model feedback, and learning model without model feedback. Results indicated that viewing either a correct or learning model was equally effective in learning correct form in the squat lift.

  6. Empiric determination of corrected visual acuity standards for train crews.

    PubMed

    Schwartz, Steven H; Swanson, William H

    2005-08-01

    Probably the most common visual standard for employment in the transportation industry is best-corrected, high-contrast visual acuity. Because such standards were often established absent empiric linkage to job performance, it is possible that a job applicant or employee who has visual acuity less than the standard may be able to satisfactorily perform the required job activities. For the transportation system that we examined, the train crew is required to inspect visually the length of the train before and during the time it leaves the station. The purpose of the inspection is to determine if an individual is in a hazardous position with respect to the train. In this article, we determine the extent to which high-contrast visual acuity can predict performance on a simulated task. Performance at discriminating hazardous from safe conditions, as depicted in projected photographic slides, was determined as a function of visual acuity. For different levels of visual acuity, which was varied through the use of optical defocus, a subject was required to label scenes as hazardous or safe. Task performance was highly correlated with visual acuity as measured under conditions normally used for vision screenings (high-illumination and high-contrast): as the acuity decreases, performance at discriminating hazardous from safe scenes worsens. This empirically based methodology can be used to establish a corrected high-contrast visual acuity standard for safety-sensitive work in transportation that is linked to the performance of a job-critical task.

  7. Effects and mechanisms of working memory training: a review.

    PubMed

    von Bastian, Claudia C; Oberauer, Klaus

    2014-11-01

    Can cognitive abilities such as reasoning be improved through working memory training? This question is still highly controversial, with prior studies providing contradictory findings. The lack of theory-driven, systematic approaches and (occasionally serious) methodological shortcomings complicates this debate even more. This review suggests two general mechanisms mediating transfer effects that are (or are not) observed after working memory training: enhanced working memory capacity, enabling people to hold more items in working memory than before training, or enhanced efficiency using the working memory capacity available (e.g., using chunking strategies to remember more items correctly). We then highlight multiple factors that could influence these mechanisms of transfer and thus the success of training interventions. These factors include (1) the nature of the training regime (i.e., intensity, duration, and adaptivity of the training tasks) and, with it, the magnitude of improvements during training, and (2) individual differences in age, cognitive abilities, biological factors, and motivational and personality factors. Finally, we summarize the findings revealed by existing training studies for each of these factors, and thereby present a roadmap for accumulating further empirical evidence regarding the efficacy of working memory training in a systematic way.

  8. Misuse of odds ratios in obesity literature: an empirical analysis of published studies.

    PubMed

    Tajeu, Gabriel S; Sen, Bisakha; Allison, David B; Menachemi, Nir

    2012-08-01

    Odds ratios (ORs) are widely used in scientific research to demonstrate the associations between outcome variables and covariates (risk factors) of interest, and are often described in language suitable for risks or probabilities, but odds and probabilities are related, not equivalent. In situations where the outcome is not rare (e.g., obesity), ORs no longer approximate the relative risk (RR) and may be misinterpreted. Our study examines the extent of misinterpretation of ORs in Obesity and International Journal of Obesity. We reviewed all 2010 issues of these journals to identify all articles that presented ORs. Included articles were then primarily reviewed for correct presentation and interpretation of ORs, and secondarily reviewed for article characteristics that may have been associated with how ORs are presented and interpreted. Of the 855 articles examined, 62 (7.3%) presented ORs. ORs were presented incorrectly in 23.2% of these articles. Clinical articles were more likely to present ORs correctly than social science or basic science articles. Studies with outcome variables that had higher relative prevalence were less likely to present ORs correctly. Overall, almost one-quarter of the studies presenting ORs in two leading journals on obesity misinterpreted them. Furthermore, even when researchers present ORs correctly, the lay media may misinterpret them as RRs. Therefore, we suggest that when the magnitude of associations is of interest, researchers should carefully and accurately present interpretable measures of association, including RRs and risk differences, to minimize confusion and misrepresentation of research results.
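
    To make the OR-versus-RR issue concrete, the sketch below applies the widely used Zhang-Yu conversion from an odds ratio to an approximate relative risk given the baseline prevalence; it illustrates the size of the discrepancy for common outcomes such as obesity and is not part of the reviewed study.

```python
def odds_ratio_to_risk_ratio(odds_ratio, baseline_risk):
    """Approximate relative risk from an odds ratio when the outcome is common
    (Zhang & Yu, JAMA 1998): RR = OR / (1 - P0 + P0 * OR), where P0 is the
    outcome prevalence in the reference (unexposed) group."""
    return odds_ratio / (1.0 - baseline_risk + baseline_risk * odds_ratio)

# With 30 % baseline prevalence, an OR of 3.0 corresponds to an RR of about 1.9.
```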

  9. Revisiting the empirical case against perceptual modularity

    PubMed Central

    Masrour, Farid; Nirshberg, Gregory; Schon, Michael; Leardi, Jason; Barrett, Emily

    2015-01-01

    Some theorists hold that the human perceptual system has a component that receives input only from units lower in the perceptual hierarchy. This thesis, which we shall here refer to as the encapsulation thesis, has been at the center of a continuing debate for the past few decades. Those who deny the encapsulation thesis often rely on the large body of psychological findings that allegedly suggest that perception is influenced by factors such as the beliefs, desires, goals, and expectations of the perceiver. Proponents of the encapsulation thesis, however, often argue that, when correctly interpreted, these psychological findings are compatible with the thesis. In our view, the debate over the significance and the correct interpretation of these psychological findings has reached an impasse. We hold that this impasse is due to the methodological limitations of psychophysical experiments, and it is very unlikely that such experiments, on their own, could yield results that would settle the debate. After defending this claim, we argue that integrating data from cognitive neuroscience resolves the debate in favor of those who deny the encapsulation thesis. PMID:26583001

  10. Colour-dressed hexagon tessellations for correlation functions and non-planar corrections

    NASA Astrophysics Data System (ADS)

    Eden, Burkhard; Jiang, Yunfeng; le Plat, Dennis; Sfondrini, Alessandro

    2018-02-01

    We continue the study of four-point correlation functions by the hexagon tessellation approach initiated in [38] and [39]. We consider planar tree-level correlation functions in N=4 supersymmetric Yang-Mills theory involving two non-protected operators. We find that, in order to reproduce the field theory result, it is necessary to include SU(N) colour factors in the hexagon formalism; moreover, we find that the hexagon approach as it stands is naturally tailored to the single-trace part of correlation functions, and does not account for multi-trace admixtures. We discuss how to compute correlators involving double-trace operators, as well as more general 1/N effects; in particular we compute the whole next-to-leading order in the large-N expansion of tree-level BMN two-point functions by tessellating a torus with punctures. Finally, we turn to the issue of "wrapping", Lüscher-like corrections. We show that SU(N) colour-dressing reproduces an earlier empirical rule for incorporating single-magnon wrapping, and we provide a direct interpretation of such wrapping processes in terms of N=2 supersymmetric Feynman diagrams.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Formulas such as these are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.
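
    The second-order case of the pairwise (parallel) update formulas surveyed above can be written compactly; the sketch below combines the count, mean, and second central moment of two data partitions, and the arbitrary-order, weighted, and compound variants discussed in the abstract follow the same pattern.

```python
def combine_moments(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
    """Pairwise (parallel) update of count, mean and second central moment,
    where M2 = sum((x - mean)**2) and the sample variance is M2 / (n - 1)."""
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return n, mean, m2
```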

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.

  13. Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.

    PubMed

    de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M

    2012-04-15

    A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.
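
    The abstract notes that numerous corrections for partial verification bias exist for individual studies; as background, the sketch below implements the classic Begg and Greenes single-study correction, assuming verification depends only on the index-test result. It is not the Bayesian meta-analytic model proposed in the paper.

```python
def begg_greenes(tp, fp, fn, tn, n_pos_total, n_neg_total):
    """Begg & Greenes correction for partial verification bias in one study.

    tp, fp, fn, tn            : counts among patients verified by the reference standard
    n_pos_total, n_neg_total  : all tested patients with positive / negative index test,
                                whether verified or not
    """
    p_dis_pos = tp / (tp + fp)            # P(diseased | test positive), from verified subset
    p_dis_neg = fn / (fn + tn)            # P(diseased | test negative), from verified subset
    exp_tp = p_dis_pos * n_pos_total      # expected diseased among all test positives
    exp_fn = p_dis_neg * n_neg_total      # expected diseased among all test negatives
    exp_fp = (1.0 - p_dis_pos) * n_pos_total
    exp_tn = (1.0 - p_dis_neg) * n_neg_total
    sensitivity = exp_tp / (exp_tp + exp_fn)
    specificity = exp_tn / (exp_fp + exp_tn)
    return sensitivity, specificity
```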

  14. Underwater and Dive Station Work-Site Noise Surveys

    DTIC Science & Technology

    2008-03-14

    Contents include octave-band noise measurements, dB(A) correction factors, dB(A) levels, MK-21 diving helmet attenuation correction factors, and overall in-helmet dB(A) levels.

  15. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palumbo, A., E-mail: apalumbo@bnl.gov; Herman, M.; Capote, R.

    The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron, charged particle and γ induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present the application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections, showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  16. Synopsis of discussion session on physicochemical factors affecting toxicity

    USGS Publications Warehouse

    Erickson, R.J.; Bills, T.D.; Clark, J.R.; Hansen, D.J.; Knezovich, J.; Hamelink, J.L.; Landrum, P.F.; Bergman, H.L.; Benson, W.H.

    1994-01-01

    The paper documents the workshop discussion regarding the role of these factors in altering toxicity. For each factor, the nature, magnitude, and uncertainty of its empirical relation to the toxicity of various chemicals or chemical classes is discussed. Limitations in the empirical database regarding the variety of species and endpoints tested were addressed. Possible mechanisms underlying the empirical relations are identified. Finally, research needed to better understand these effects is identified.

  17. An empirical potential for simulating vacancy clusters in tungsten.

    PubMed

    Mason, D R; Nguyen-Manh, D; Becquart, C S

    2017-12-20

    We present an empirical interatomic potential for tungsten, particularly well suited for simulations of vacancy-type defects. We compare energies and structures of vacancy clusters generated with the empirical potential with an extensive new database of values computed using density functional theory, and show that the new potential predicts low-energy defect structures and formation energies with high accuracy. A significant difference from other popular embedded-atom empirical potentials for tungsten is the correct prediction of surface energies. Interstitial properties and short-range pairwise behaviour remain similar to the Ackland-Thetford potential on which it is based, making this potential well suited to simulations of microstructural evolution following irradiation damage cascades. Using atomistic kinetic Monte Carlo simulations, we predict vacancy cluster dissociation in the range 1100-1300 K, the temperature range generally associated with stage IV recovery.

  18. A Multi-Site Study on Knowledge, Attitudes, Beliefs and Practice of Child-Dog Interactions in Rural China

    PubMed Central

    Shen, Jiabin; Li, Shaohua; Xiang, Huiyun; Pang, Shulan; Xu, Guozhang; Schwebel, David C.

    2013-01-01

    This study examines demographic, cognitive and behavioral factors that predict pediatric dog-bite injury risk in rural China. A total of 1,537 children (grades 4–6) in rural regions of Anhui, Hebei and Zhejiang Provinces, China completed self-report questionnaires assessing beliefs about and behaviors with dogs. The results showed that almost 30% of children reported a history of dog bites. Children answered 56% of dog-safety knowledge items correctly. Regressions revealed both demographic and cognitive/behavioral factors predicted children’s risky interactions with dogs and dog-bite history. Boys behaved more riskily with dogs and were more frequently bitten. Older children reported greater risks with dogs and more bites. With demographics controlled, attitudes/beliefs of invulnerability, exposure frequency, and dog ownership predicted children’s self-reported risky practice with dogs. Attitudes/beliefs of invulnerability, dog exposure, and dog ownership predicted dog bites. In conclusion, both demographic and cognitive/behavioral factors influenced rural Chinese children’s dog-bite injury risk. Theory-based, empirically-supported intervention programs might reduce dog-bite injuries in rural China. PMID:23470881

  19. Two years of on-orbit gallium arsenide performance from the LIPS solar cell panel experiment

    NASA Technical Reports Server (NTRS)

    Francis, R. W.; Betz, F. E.

    1985-01-01

    The LIPS on-orbit performance of the gallium arsenide panel experiment was analyzed from flight operation telemetry data. Algorithms were developed to calculate the daily maximum power and associated solar array parameters by two independent methods. The first technique utilizes a least mean square polynomial fit to the power curve obtained with intensity- and temperature-corrected currents and voltages, whereas the second incorporates an empirical expression for fill factor based on the open circuit voltage and the calculated series resistance. Maximum power, fill factor, open circuit voltage, short circuit current and series resistance of the solar cell array are examined as a function of flight time. Trends are analyzed with respect to possible mechanisms which may affect successive periods of output power during 2 years of flight operation. Degradation factors responsible for the on-orbit performance characteristics of gallium arsenide are discussed in relation to the calculated solar cell parameters. Performance trends and the potential degradation mechanisms are correlated with existing laboratory and flight data on both gallium arsenide and silicon solar cells for similar environments.
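
    The abstract mentions an empirical fill-factor expression based on the open-circuit voltage and series resistance without stating it; Green's widely used approximation is one such expression and is sketched below purely as an illustration, with placeholder cell parameters. It is not necessarily the form used in the LIPS analysis.

```python
import math

def fill_factor_green(v_oc, r_s_ohm, i_sc, n=1.0, t_kelvin=300.0):
    """Green's empirical fill-factor approximation:
    v = Voc / (n * kT/q); FF0 = (v - ln(v + 0.72)) / (v + 1);
    FF ~ FF0 * (1 - rs), with rs = Rs * Isc / Voc the normalized series resistance."""
    kt_over_q = 8.617333262e-5 * t_kelvin      # thermal voltage in volts
    v = v_oc / (n * kt_over_q)
    ff0 = (v - math.log(v + 0.72)) / (v + 1.0)
    rs_norm = r_s_ohm * i_sc / v_oc
    return ff0 * (1.0 - rs_norm)

# GaAs-like cell: fill_factor_green(1.0, 0.02, 0.028) is about 0.88
```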

  20. Backscatter Correction Algorithm for TBI Treatment Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Nieto, B.; Sanchez-Doblado, F.; Arrans, R.

    2015-01-15

    The accuracy requirement in target dose delivery is, according to the ICRU, ±5%. This applies not only in standard radiotherapy but also in total body irradiation (TBI). Physical dosimetry plays an important role in achieving this recommended level. The semi-infinite phantoms customarily used for dosimetry purposes give scatter conditions different from those of the finite thickness of the patient, so the dose calculated at points in the patient close to the beam exit surface may be overestimated. It is then necessary to quantify the backscatter factor in order to decrease the uncertainty in this dose calculation. Backward scatter has been well studied at standard distances. The present work evaluates the backscatter phenomenon under our particular TBI treatment conditions. As a consequence of this study, a semi-empirical expression has been derived to calculate (within 0.3% uncertainty) the backscatter factor. This factor depends linearly on depth and exponentially on the underlying tissue. Differences found in the qualitative behavior with respect to standard distances are due to scatter in the bunker wall close to the measurement point.
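
    The abstract reports only the qualitative behaviour of the semi-empirical backscatter factor (linear in depth, exponential in the underlying tissue). The sketch below encodes one hypothetical functional form with made-up coefficients, simply to show how such a factor would be applied; it is not the fitted expression from the paper.

```python
import math

def backscatter_factor(depth_cm, underlying_tissue_cm, a=0.0, b=0.001, c=0.5):
    """Hypothetical semi-empirical backscatter factor of the general shape
    described in the abstract: BSF = 1 - (a + b * depth) * exp(-c * underlying).
    The functional form and the coefficients a, b, c are illustrative
    assumptions, not the values fitted in the study."""
    return 1.0 - (a + b * depth_cm) * math.exp(-c * underlying_tissue_cm)
```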

  1. How rational should bioethics be? The value of empirical approaches.

    PubMed

    Alvarez, A A

    2001-10-01

    Rational justification of claims with empirical content calls for empirical and not only normative philosophical investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary in providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods. Moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue that individualistic informed consent, for example, is not compatible with the Asian communitarian orientation. But this normative claim uses an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary and factual to neatly characterize some cultures as individualistic and some as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, there is a need to investigate the nature of the local ethos before making any appeal to authenticity. Otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be reasonably guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.

  2. Multi-Angle Implementation of Atmospheric Correction for MODIS (MAIAC). Part 3: Atmospheric Correction

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Hilker, T.; Hall, F.; Sellers, P.; Tucker, J.; Korkin, S.

    2012-01-01

    This paper describes the atmospheric correction (AC) component of the Multi-Angle Implementation of Atmospheric Correction algorithm (MAIAC) which introduces a new way to compute parameters of the Ross-Thick Li-Sparse (RTLS) Bi-directional reflectance distribution function (BRDF), spectral surface albedo and bidirectional reflectance factors (BRF) from satellite measurements obtained by the Moderate Resolution Imaging Spectroradiometer (MODIS). MAIAC uses a time series and spatial analysis for cloud detection, aerosol retrievals and atmospheric correction. It implements a moving window of up to 16 days of MODIS data gridded to 1 km resolution in a selected projection. The RTLS parameters are computed directly by fitting the cloud-free MODIS top of atmosphere (TOA) reflectance data stored in the processing queue. The RTLS retrieval is applied when the land surface is stable or changes slowly. In case of rapid or large magnitude change (as for instance caused by disturbance), MAIAC follows the MODIS operational BRDF/albedo algorithm and uses a scaling approach where the BRDF shape is assumed stable but its magnitude is adjusted based on the latest single measurement. To assess the stability of the surface, MAIAC features a change detection algorithm which analyzes relative change of reflectance in the Red and NIR bands during the accumulation period. To adjust for the reflectance variability with the sun-observer geometry and allow comparison among different days (view geometries), the BRFs are normalized to the fixed view geometry using the RTLS model. An empirical analysis of MODIS data suggests that the RTLS inversion remains robust when the relative change of geometry-normalized reflectance stays below 15%. This first of two papers introduces the algorithm, a second, companion paper illustrates its potential by analyzing MODIS data over a tropical rainforest and assessing errors and uncertainties of MAIAC compared to conventional MODIS products.
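
    As a generic illustration of the RTLS retrieval step described above, the sketch below fits the three kernel weights by linear least squares to a queue of cloud-free directional reflectances; the Ross-Thick and Li-Sparse kernel values are assumed to be precomputed for each sun-view geometry, and this is a sketch of kernel-based BRDF inversion in general, not the MAIAC implementation.

```python
import numpy as np

def fit_rtls_weights(brf_obs, k_vol, k_geo):
    """Least-squares fit of the RTLS kernel weights (isotropic, volumetric,
    geometric) to a set of directional reflectances:
        BRF ~ f_iso + f_vol * K_vol + f_geo * K_geo
    brf_obs, k_vol, k_geo: 1-D arrays over the cloud-free observations in the
    processing queue, with the kernels precomputed for each geometry."""
    brf_obs = np.asarray(brf_obs, dtype=float)
    design = np.column_stack([np.ones_like(brf_obs),
                              np.asarray(k_vol, dtype=float),
                              np.asarray(k_geo, dtype=float)])
    weights, *_ = np.linalg.lstsq(design, brf_obs, rcond=None)
    return weights  # [f_iso, f_vol, f_geo]
```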

  3. An empirical correction for moderate multiple scattering in super-heterodyne light scattering.

    PubMed

    Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas

    2017-05-28

    Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.

  4. Bi-dimensional empirical mode decomposition based fringe-like pattern suppression in polarization interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Ren, Wenyi; Cao, Qizhi; Wu, Dan; Jiang, Jiangang; Yang, Guoan; Xie, Yingge; Wang, Guodong; Zhang, Sheqi

    2018-01-01

    Many observers using interference imaging spectrometers are plagued by the fringe-like pattern (FP) that occurs at optical wavelengths in the red and near-infrared region. It complicates data processing steps such as spectrum calibration and information retrieval. An adaptive method based on the bi-dimensional empirical mode decomposition was developed to suppress the nonlinear FP in a polarization interference imaging spectrometer. The FP and the corrected interferogram were separated effectively. Meanwhile, the stripes introduced by the CCD mosaic were suppressed. Nonlinear interferogram background removal and spectrum distortion correction were implemented as well. This provides an alternative method to adaptively suppress the nonlinear FP without prior experimental data or knowledge. The approach is potentially a powerful tool in the fields of Fourier transform spectroscopy, holographic imaging, optical measurement based on moiré fringes, etc.

  5. Modified Mercalli Intensities (MMI) for some earthquakes in eastern North America (ENA) and empirical MMI site corrections for towns in ENA

    USGS Publications Warehouse

    Bakun, W.H.; Johnston, A.C.; Hopper, M.G.

    2002-01-01

    Modified Mercalli Intensity (MMI) assignments for earthquakes in eastern North America (ENA) were used by Bakun et al. (submitted) to develop a model for eastern North America for estimating the location and moment magnitude M of earthquakes from MMI observations. MMI assignments for most of the earthquakes considered by Bakun et al. (submitted) are published. MMI assignments for 6 other earthquakes used by Bakun et al. (submitted) are listed in this report: November 18, 1755 near Cape Ann, Massachusetts; January 5, 1843 near Marked Tree, Arkansas; October 31, 1895 in southern Illinois; November 18, 1929 on the Grand Banks, Newfoundland; September 26, 1990 in southeast Missouri; and May 4, 1991 near Risco, Missouri. MMI empirical site corrections developed and used by Bakun et al. (submitted) are also listed in this report.

  6. Uncertainty quantification in Eulerian-Lagrangian models for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Fountoulakis, Vasileios; Jacobs, Gustaaf; Udaykumar, Hs

    2017-11-01

    A common approach to ameliorate the computational burden in simulations of particle-laden flows is to use a point-particle based Eulerian-Lagrangian model, which traces individual particles in their Lagrangian frame and models particles as mathematical points. The particle motion is determined by Stokes drag law, which is empirically corrected for Reynolds number, Mach number and other parameters. The empirical corrections are subject to uncertainty. Treating them as random variables renders the coupled system of PDEs and ODEs stochastic. An approach to quantify the propagation of this parametric uncertainty to the particle solution variables is proposed. The approach is based on averaging of the governing equations and allows for estimation of the first moments of the quantities of interest. We demonstrate the feasibility of our proposed methodology of uncertainty quantification of particle-laden flows on one-dimensional linear and nonlinear Eulerian-Lagrangian systems. This research is supported by AFOSR under Grant FA9550-16-1-0008.
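
    A minimal sketch of the kind of parametric uncertainty described above: the Schiller-Naumann Reynolds-number correction to Stokes drag with its empirical coefficient treated as a random variable and propagated to the particle response time by Monte Carlo sampling. The coefficient spread and the sampling approach are illustrative assumptions; the paper itself uses an averaging-based method to estimate the first moments.

```python
import numpy as np

def drag_correction(re_p, c=0.15, p=0.687):
    """Empirical finite-Reynolds-number correction to Stokes drag
    (Schiller-Naumann form): f = 1 + c * Re_p**p."""
    return 1.0 + c * re_p ** p

def particle_response_time_stats(tau_stokes, re_p, n_samples=10000, seed=0):
    """Propagate an assumed 10 % (1-sigma) uncertainty on the empirical
    coefficient c to the effective particle response time
    tau = tau_Stokes / f(Re_p), via Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    c_samples = rng.normal(0.15, 0.015, n_samples)
    tau = tau_stokes / drag_correction(re_p, c=c_samples)
    return tau.mean(), tau.std()
```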

  7. Role of Statistical Random-Effects Linear Models in Personalized Medicine.

    PubMed

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-03-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.

  8. Evolution of the empirical and theoretical foundations of eyewitness identification reform.

    PubMed

    Clark, Steven E; Moreland, Molly B; Gronlund, Scott D

    2014-04-01

    Scientists in many disciplines have begun to raise questions about the evolution of research findings over time (Ioannidis in Epidemiology, 19, 640-648, 2008; Jennions & Møller in Proceedings of the Royal Society, Biological Sciences, 269, 43-48, 2002; Mullen, Muellerleile, & Bryan in Personality and Social Psychology Bulletin, 27, 1450-1462, 2001; Schooler in Nature, 470, 437, 2011), since many phenomena exhibit decline effects-reductions in the magnitudes of effect sizes as empirical evidence accumulates. The present article examines empirical and theoretical evolution in eyewitness identification research. For decades, the field has held that there are identification procedures that, if implemented by law enforcement, would increase eyewitness accuracy, either by reducing false identifications, with little or no change in correct identifications, or by increasing correct identifications, with little or no change in false identifications. Despite the durability of this no-cost view, it is unambiguously contradicted by data (Clark in Perspectives on Psychological Science, 7, 238-259, 2012a; Clark & Godfrey in Psychonomic Bulletin & Review, 16, 22-42, 2009; Clark, Moreland, & Rush, 2013; Palmer & Brewer in Law and Human Behavior, 36, 247-255, 2012), raising questions as to how the no-cost view became well-accepted and endured for so long. Our analyses suggest that (1) seminal studies produced, or were interpreted as having produced, the no-cost pattern of results; (2) a compelling theory was developed that appeared to account for the no-cost pattern; (3) empirical results changed over the years, and subsequent studies did not reliably replicate the no-cost pattern; and (4) the no-cost view survived despite the accumulation of contradictory empirical evidence. Theories of memory that were ruled out by early data now appear to be supported by data, and the theory developed to account for early data now appears to be incorrect.

  9. Empirical research on international environmental migration: a systematic review.

    PubMed

    Obokata, Reiko; Veronis, Luisa; McLeman, Robert

    2014-01-01

    This paper presents the findings of a systematic review of scholarly publications that report empirical findings from studies of environmentally-related international migration. There exists a small, but growing accumulation of empirical studies that consider environmentally-linked migration that spans international borders. These studies provide useful evidence for scholars and policymakers in understanding how environmental factors interact with political, economic and social factors to influence migration behavior and outcomes that are specific to international movements of people, in highlighting promising future research directions, and in raising important considerations for international policymaking. Our review identifies countries of migrant origin and destination that have so far been the subject of empirical research, the environmental factors believed to have influenced these migrations, the interactions of environmental and non-environmental factors as well as the role of context in influencing migration behavior, and the types of methods used by researchers. In reporting our findings, we identify the strengths and challenges associated with the main empirical approaches, highlight significant gaps and future opportunities for empirical work, and contribute to advancing understanding of environmental influences on international migration more generally. Specifically, we propose an exploratory framework to take into account the role of context in shaping environmental migration across borders, including the dynamic and complex interactions between environmental and non-environmental factors at a range of scales.

  10. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  11. Assessment of microclimate conditions under artificial shades in a ginseng field.

    PubMed

    Lee, Kyu Jong; Lee, Byun-Woo; Kang, Je Yong; Lee, Dong Yun; Jang, Soo Won; Kim, Kwang Soo

    2016-01-01

    Knowledge on microclimate conditions under artificial shades in a ginseng field would facilitate climate-aware management of ginseng production. Weather data were measured under the shade and outside the shade at two fields located in Gochang-gun and Jeongeup-si, Korea, in 2011 and 2012 seasons to assess temperature and humidity conditions under the shade. An empirical approach was developed and validated for the estimation of leaf wetness duration (LWD) using weather measurements outside the shade as inputs to the model. Air temperature and relative humidity were similar between under the shade and outside the shade. For example, temperature conditions favorable for ginseng growth, e.g., between 8°C and 27°C, occurred slightly less frequently in hours during night times under the shade (91%) than outside (92%). Humidity conditions favorable for development of a foliar disease, e.g., relative humidity > 70%, occurred slightly more frequently under the shade (84%) than outside (82%). Effectiveness of correction schemes to an empirical LWD model differed by rainfall conditions for the estimation of LWD under the shade using weather measurements outside the shade as inputs to the model. During dew eligible days, a correction scheme to an empirical LWD model was slightly effective (10%) in reducing estimation errors under the shade. However, another correction approach during rainfall eligible days reduced errors of LWD estimation by 17%. Weather measurements outside the shade and LWD estimates derived from these measurements would be useful as inputs for decision support systems to predict ginseng growth and disease development.

  12. Assessment of microclimate conditions under artificial shades in a ginseng field

    PubMed Central

    Lee, Kyu Jong; Lee, Byun-Woo; Kang, Je Yong; Lee, Dong Yun; Jang, Soo Won; Kim, Kwang Soo

    2015-01-01

    Background Knowledge on microclimate conditions under artificial shades in a ginseng field would facilitate climate-aware management of ginseng production. Methods Weather data were measured under the shade and outside the shade at two fields located in Gochang-gun and Jeongeup-si, Korea, in 2011 and 2012 seasons to assess temperature and humidity conditions under the shade. An empirical approach was developed and validated for the estimation of leaf wetness duration (LWD) using weather measurements outside the shade as inputs to the model. Results Air temperature and relative humidity were similar between under the shade and outside the shade. For example, temperature conditions favorable for ginseng growth, e.g., between 8°C and 27°C, occurred slightly less frequently in hours during night times under the shade (91%) than outside (92%). Humidity conditions favorable for development of a foliar disease, e.g., relative humidity > 70%, occurred slightly more frequently under the shade (84%) than outside (82%). Effectiveness of correction schemes to an empirical LWD model differed by rainfall conditions for the estimation of LWD under the shade using weather measurements outside the shade as inputs to the model. During dew eligible days, a correction scheme to an empirical LWD model was slightly effective (10%) in reducing estimation errors under the shade. However, another correction approach during rainfall eligible days reduced errors of LWD estimation by 17%. Conclusion Weather measurements outside the shade and LWD estimates derived from these measurements would be useful as inputs for decision support systems to predict ginseng growth and disease development. PMID:26843827

  13. What can we learn about dispersion from the conformer surface of n-pentane?

    PubMed

    Martin, Jan M L

    2013-04-11

    In earlier work [Gruzman, D. ; Karton, A.; Martin, J. M. L. J. Phys. Chem. A 2009, 113, 11974], we showed that conformer energies in alkanes (and other systems) are highly dispersion-driven and that uncorrected DFT functionals fail badly at reproducing them, while simple empirical dispersion corrections tend to overcorrect. To gain greater insight into the nature of the phenomenon, we have mapped the torsional surface of n-pentane to 10-degree resolution at the CCSD(T)-F12 level near the basis set limit. The data obtained have been decomposed by order of perturbation theory, excitation level, and same-spin vs opposite-spin character. A large number of approximate electronic structure methods have been considered, as well as several empirical dispersion corrections. Our chief conclusions are as follows: (a) the effect of dispersion is dominated by same-spin correlation (or triplet-pair correlation, from a different perspective); (b) singlet-pair correlation is important for the surface, but qualitatively very dissimilar to the dispersion component; (c) single and double excitations beyond third order are essentially unimportant for this surface; (d) connected triple excitations do play a role but are statistically very similar to the MP2 singlet-pair correlation; (e) the form of the damping function is crucial for good performance of empirical dispersion corrections; (f) at least in the lower-energy regions, SCS-MP2 and especially MP2.5 perform very well; (g) novel spin-component scaled double hybrid functionals such as DSD-PBEP86-D2 acquit themselves very well for this problem.

  14. Combined effects of wind and solar irradiance on the spatial variation of midday air temperature over a mountainous terrain

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Ock; Kim, Jin-Hee; Kim, Dae-Jun; Shim, Kyo Moon; Yun, Jin I.

    2015-08-01

    When the midday temperature distribution in a mountainous region was estimated using data from a nearby weather station, correcting for the elevation difference with a temperature lapse rate alone caused a large error. An empirical approach reflecting the effects of solar irradiance and advection was suggested in order to increase the reliability of the results. The normalized slope irradiance, determined by normalizing the solar irradiance difference between a horizontal surface and a sloping surface from 1100 to 1500 LST on clear days, and its relationship to the deviation of the 1500 LST temperature on the sloping surface from that on the horizontal surface, were presented as simple empirical formulas. To represent the process by which incoming air parcels push out or mix with the existing air parcels and thereby reduce the solar radiation effect, an advection correction factor was added that exponentially reduces the solar radiation effect as wind speed increases. In order to validate this technique, we estimated the 1500 LST air temperatures on 177 clear days in 2012 and 2013 at 10 sites with different slope aspects in a mountainous catchment and compared these values to the measured data. The results showed that this technique greatly reduced the bias and the overestimation of the solar radiation effect in comparison with existing methods. By applying this technique to the Korea Meteorological Administration's 5-km grid data, it was possible to determine the temperature distribution at a 30-m resolution over a mountainous rural area south of Jiri Mountain National Park, Korea.

  15. Many-Body Effects on Bandgap Shrinkage, Effective Masses, and Alpha Factor

    NASA Technical Reports Server (NTRS)

    Li, Jian-Zhong; Ning, C. Z.; Woo, Alex C. (Technical Monitor)

    2000-01-01

    Many-body Coulomb effects strongly influence the operation of quantum-well (QW) laser diodes (LDs). In the present work, we study a two-band electron-hole plasma (EHP) within the Hartree-Fock approximation, using the single-plasmon-pole approximation for static screening. The momentum dependence of the many-body effects is included in full. An empirical expression for the carrier-density dependence of the bandgap renormalization (BGR) in an 8 nm GaAs/Al(0.3)Ga(0.7)As single QW is given, which demonstrates a non-universal scaling behavior for quasi-two-dimensional structures owing to the size-dependent efficiency of screening. In addition, the effective-mass renormalization (EMR) of both electrons and holes due to the momentum-dependent self-energy correction is studied and serves as another manifestation of the many-body effects. Finally, the effect on the carrier-density dependence of the alpha factor is evaluated to assess the sensitivity to the full inclusion of momentum dependence.

  16. Tables of X-ray absorption corrections and dispersion corrections: the new versus the old

    NASA Astrophysics Data System (ADS)

    Creagh, Dudley

    1990-11-01

    This paper compares the data on X-ray absorption coefficients calculated by Creagh and Hubbell and tabulated in International Tables for Crystallography, vol. C, ed. A.J.C. Wilson (1990) section 4.2.4 [1] with empirical (Saloman, Hubbell and Scofield, At. Data and Nucl. Data Tables 38 (1988) 1 [6]) and semi-empirical (Hubbell, McMaster, Kerr Del Grande and Mallett, in: International Tables for Crystallography, vol. IV, eds. Ibers and Hamilton (Kynoch, Birmingham, 1974) [2]) tabulations, as well as the renormalized relativistic Dirac-Hartree-Fock calculations of Scofield [6]. It also compares the real part of the dispersion correction f′(ω, 0), as tabulated in ref. [1], with theoretical data sets (Cromer and Liberman, J. Chem. Phys. 53 (1970) 1891, and Acta Crystallogr. A37 (1981) 267 [4,5]; Wang, Phys. Rev. A34 (1986) 636 [85]; Kissel, in: Workshop Report on New Dimensions in X-ray Scattering, CONF-870459 (Livermore, 1987) p. 9 [86]) and with data collected using a variety of experimental techniques. In both cases the data tabulated in ref. [1] are shown to give improved self-consistency and agreement with experiment.

  17. Children as donors: a national study to assess procurement of organs and tissues in pediatric intensive care units.

    PubMed

    Siebelink, Marion J; Albers, Marcel J I J; Roodbol, Petrie F; Van de Wiel, Harry B M

    2012-12-01

    A shortage of size-matched organs and tissues is the key factor limiting transplantation in children. Empirical data on procurement from pediatric donors is sparse. This study investigated donor identification, parental consent, and effectuation rates, as well as adherence to the national protocol. A national retrospective cohort study was conducted in all eight Dutch pediatric intensive care units. Records of deceased children were analyzed by an independent donation officer. Seventy-four (11%) of 683 deceased children were found to be suitable for organ donation and 132 (19%) for tissue donation. Sixty-two (84%) potential organ donors had been correctly identified; the parental consent and effectuation rate was 42%. Sixty-three (48%) potential tissue donors had been correctly identified; the parental consent and effectuation rate was 27%. Correct identification increased with age (logistic regression, organs: P = .024; tissues: P = .011). Although an overall identification rate of 84% of potential organ donors may seem acceptable, the variation observed suggests room for improvement, as does the overall low rate of identification of pediatric tissue donors. Efforts to address the shortage of organs and tissues for transplantation in children should focus on identifying potential donors and on the reasons why parents do not consent. © 2012 The Authors. Transplant International © 2012 European Society for Organ Transplantation.

  18. Normal-faulting slip maxima and stress-drop variability: a geological perspective

    USGS Publications Warehouse

    Hecker, S.; Dawson, T.E.; Schwartz, D.P.

    2010-01-01

    We present an empirical estimate of maximum slip in continental normal-faulting earthquakes and present evidence that stress drop in intraplate extensional environments is dependent on fault maturity. A survey of reported slip in historical earthquakes globally and in latest Quaternary paleoearthquakes in the Western Cordillera of the United States indicates maximum vertical displacements as large as 6–6.5 m. A difference in the ratio of maximum-to-mean displacements between data sets of prehistoric and historical earthquakes, together with constraints on bias in estimates of mean paleodisplacement, suggests that applying a correction factor of 1.4±0.3 to the largest observed displacement along a paleorupture may provide a reasonable estimate of the maximum displacement. Adjusting the largest paleodisplacements in our regional data set (~6 m) by a factor of 1.4 yields a possible upper-bound vertical displacement for the Western Cordillera of about 8.4 m, although a smaller correction factor may be more appropriate for the longest ruptures. Because maximum slip is highly localized along strike, if such large displacements occur, they are extremely rare. Static stress drop in surface-rupturing earthquakes in the Western Cordillera, as represented by maximum reported displacement as a fraction of modeled rupture length, appears to be larger on normal faults with low cumulative geologic displacement (<2 km) and larger in regions such as the Rocky Mountains, where immature, low-throw faults are concentrated. This conclusion is consistent with a growing recognition that structural development influences stress drop and indicates that this influence is significant enough to be evident among faults within a single intraplate environment.

  19. SU-F-T-408: On the Determination of Equivalent Squares for Rectangular Small MV Photon Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sauer, OA; Wegener, S; Exner, F

    Purpose: It is common practice to tabulate dosimetric data such as output factors, scatter factors, and detector signal correction factors for a set of square fields. To obtain the data for an arbitrary field, the field is mapped to an equivalent square having the same scatter as the field of interest. For rectangular fields, both tabulated data and empirical formulas exist. We tested the applicability of such rules for very small fields. Methods: Using the Monte Carlo method (EGSnrc-doseRZ), the dose to a point at 10 cm depth in water was calculated for cylindrical impinging fluence distributions. Radii ranged from 0.5 mm to 11.5 mm with 1 mm ring thickness. Different photon energies were investigated. From these data a matrix was constructed assigning to each matrix element its dose contribution to the field center. By summing the elements belonging to a certain field, the dose to an arbitrary point at 10 cm depth could be determined. This was done for rectangles up to 21 mm side length. By comparing the dose with the square-field results, equivalent squares could be assigned. The results were compared with the geometric-mean rule and the 4 × area/perimeter rule. Results: For side-length differences of less than 2 mm, the difference between all methods was generally less than 0.2 mm. For more elongated fields, relevant differences of more than 1 mm, and up to 3 mm for the fields investigated, occurred. The mean of the equivalent-square side lengths calculated from the two empirical formulas fitted much better, deviating by hardly more than 1 mm and only for the very elongated fields. Conclusion: For small rectangular photon fields deviating only moderately from square, both investigated empirical methods are sufficiently accurate. As the deviations often differ in sign, using their mean improves the accuracy and the usable elongation range. For aspect ratios larger than 2, Monte Carlo generated data are recommended. SW is funded by Deutsche Forschungsgemeinschaft (SA481/10-1).
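
    A minimal sketch of the two empirical equivalent-square rules compared above, for a rectangular field of sides a × b; averaging the two results reflects the conclusion's recommendation, and the numbers are illustrative only.

        import math

        def eq_square_geometric_mean(a, b):
            """Equivalent square side length as the geometric mean of the sides."""
            return math.sqrt(a * b)

        def eq_square_area_perimeter(a, b):
            """Equivalent square side length from the 4 x area / perimeter rule."""
            return 4.0 * (a * b) / (2.0 * (a + b))

        a, b = 5.0, 15.0  # mm, an elongated small field
        s_geo = eq_square_geometric_mean(a, b)
        s_ap = eq_square_area_perimeter(a, b)
        print(s_geo, s_ap, 0.5 * (s_geo + s_ap))  # mean of the two rules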

  20. [Factors conditioning primary care services utilization. Empirical evidence and methodological inconsistencies].

    PubMed

    Sáez, M

    2003-01-01

    In Spain, the degree and characteristics of primary care services utilization have been the subject of analysis since at least the 1980s. One of the main reasons for this interest is to assess the extent to which utilization matches primary care needs. In fact, the provision of an adequate health service for those who most need it is a generally accepted priority. The evidence shows that individual characteristics, mainly health status, are the factors most closely related to primary care utilization. Other personal characteristics, such as gender and age, could act as modulators of health care need. Some family and/or cultural variables, as well as factors related to the health care professional and institutions, could explain some of the observed variability in primary care services utilization. Socioeconomic variables, such as income, reveal a paradox. From an aggregate perspective, income is the main determinant of utilization as well as of health care expenditure. When data are analyzed for individuals, however, income is not related to primary health utilization. The situation is controversial, with methodological implications and, above all, consequences for the assessment of the efficiency in primary care utilization. Review of the literature reveals certain methodological inconsistencies that could at least partly explain the disparity of the empirical results. Among others, the following flaws can be highlighted: design problems, measurement errors, misspecification, and misleading statistical methods. Possible solutions include quasi-experiments, the use of large administrative databases and of primary data sources (design problems); differentiation between types of utilization and between units of analysis other than consultations, and correction of measurement errors in the explanatory variables (measurement errors); consideration of relevant explanatory variables (misspecification); and the use of multilevel models (statistical methods).

  1. Nucleon form factors in dispersively improved chiral effective field theory. II. Electromagnetic form factors

    NASA Astrophysics Data System (ADS)

    Alarcón, J. M.; Weiss, C.

    2018-05-01

    We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining chiral effective field theory (χEFT) and dispersion analysis. The spectral functions on the two-pion cut at t > 4M_π² are constructed using the elastic unitarity relation and an N/D representation. χEFT is used to calculate the real functions J^1_±(t) = f^1_±(t)/F_π(t) (ratios of the complex ππ → NN̄ partial-wave amplitudes and the timelike pion FF), which are free of ππ rescattering. Rescattering effects are included through the empirical timelike pion FF |F_π(t)|². The method allows us to compute the isovector EM spectral functions up to t ≈ 1 GeV² with controlled accuracy (leading order, next-to-leading order, and partial next-to-next-to-leading order). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at t = 0 (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives, which are not affected by higher-order chiral corrections and are obtained almost parameter-free in our approach, and explain their collective behavior. We estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-Q² FF data is achieved up to ≈ 0.5 GeV² for G_E, and up to ≈ 0.2 GeV² for G_M. Our results can be used to guide the analysis of low-Q² elastic scattering data and the extraction of the proton charge radius.

  2. Improvement of the Correlative AFM and ToF-SIMS Approach Using an Empirical Sputter Model for 3D Chemical Characterization.

    PubMed

    Terlier, T; Lee, J; Lee, K; Lee, Y

    2018-02-06

    Technological progress has spurred the development of increasingly sophisticated analytical devices. The full characterization of structures in terms of sample volume and composition is now highly complex. Here, a highly improved solution for 3D characterization of samples, based on an advanced method for 3D data correction, is proposed. Traditionally, secondary ion mass spectrometry (SIMS) provides the chemical distribution of sample surfaces. Combining successive sputtering with 2D surface projections enables a 3D volume rendering to be generated. However, surface topography can distort the volume rendering by necessitating the projection of a nonflat surface onto a planar image. Moreover, the sputtering is highly dependent on the probed material. Local variation of composition affects the sputter yield and the beam-induced roughness, which in turn alters the 3D render. To circumvent these drawbacks, the correlation of atomic force microscopy (AFM) with SIMS has been proposed in previous studies as a solution for the 3D chemical characterization. To extend the applicability of this approach, we have developed a methodology using AFM-time-of-flight (ToF)-SIMS combined with an empirical sputter model, "dynamic-model-based volume correction", to universally correct 3D structures. First, the simulation of 3D structures highlighted the great advantages of this new approach compared with classical methods. Then, we explored the applicability of this new correction to two types of samples, a patterned metallic multilayer and a diblock copolymer film presenting surface asperities. In both cases, the dynamic-model-based volume correction produced an accurate 3D reconstruction of the sample volume and composition. The combination of AFM-SIMS with the dynamic-model-based volume correction improves the understanding of the surface characteristics. Beyond the useful 3D chemical information provided by dynamic-model-based volume correction, the approach permits us to enhance the correlation of chemical information from spectroscopic techniques with the physical properties obtained by AFM.

  3. Ionospheric Correction of InSAR for Accurate Ice Motion Mapping at High Latitudes

    NASA Astrophysics Data System (ADS)

    Liao, H.; Meyer, F. J.

    2016-12-01

    Monitoring the motion of the large ice sheets is of great importance for determining ice mass balance and its contribution to sea level rise. Recently, the first comprehensive ice motion maps of Greenland and Antarctica have been generated with InSAR. However, these studies have indicated that the performance of InSAR-based ice motion mapping is limited by the presence of the ionosphere, particularly at high latitudes and for low-frequency SAR data. Filter-based and empirical methods (e.g., removing polynomials), which have often been used to mitigate ionospheric effects, are often ineffective in these areas because of the typically strong spatial variability of ionospheric phase delay at high latitudes and because of the risk of removing true deformation signals from the observations. In this study, we first present an outline of our split-spectrum InSAR-based ionospheric correction approach and highlight how our method improves upon published techniques, for example through a multiple sub-band approach that boosts estimation accuracy and through advanced error correction and filtering algorithms. We applied our workflow to a large number of ionosphere-affected datasets over the large ice sheets to estimate the benefit of ionospheric correction for ice motion mapping accuracy. Appropriate test sites over Greenland and the Antarctic were chosen in cooperation with authors (UW, Ian Joughin) of previous ice motion studies. To demonstrate the magnitude of ionospheric noise and to showcase the performance of ionospheric correction, we show examples of ionosphere-affected InSAR data alongside our ionosphere-corrected results for visual comparison. We also quantitatively compared the corrected phase data with known ice velocity fields for the analyzed areas, provided by experts in ice velocity mapping. We find that ionospheric correction significantly reduces biases in ice velocity estimates and boosts accuracy by a factor that depends on system parameters (range bandwidth, temporal and spatial baseline) and processing parameters (e.g., filtering strength and sub-band configuration). A case study in Greenland is attached below.
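
    A minimal sketch of the standard split-spectrum separation of the dispersive (ionospheric) phase from two sub-band interferograms, as reported in the InSAR literature; it is not necessarily the exact implementation used in this study, and the frequencies and phase values below are illustrative.

        import numpy as np

        def split_spectrum_ionosphere(phi_low, phi_high, f_low, f_high, f0):
            """Dispersive (ionospheric) phase at the carrier frequency f0 from
            unwrapped sub-band interferogram phases phi_low (at f_low) and
            phi_high (at f_high). Phases in radians, frequencies in Hz.
            """
            scale = (f_low * f_high) / (f0 * (f_high**2 - f_low**2))
            return scale * (phi_low * f_high - phi_high * f_low)

        # Illustrative L-band example: 1.27 GHz carrier with +/- 20 MHz sub-bands
        f0, f_low, f_high = 1.27e9, 1.25e9, 1.29e9
        phi_low = np.array([2.10])    # radians (illustrative)
        phi_high = np.array([1.95])
        print(split_spectrum_ionosphere(phi_low, phi_high, f_low, f_high, f0))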

  4. Success rate and risk factors for failure of empirical antifungal therapy with itraconazole in patients with hematological malignancies: a multicenter, prospective, open-label, observational study in Korea.

    PubMed

    Kim, Soo-Jeong; Cheong, June-Won; Min, Yoo Hong; Choi, Young Jin; Lee, Dong-Gun; Lee, Je-Hwan; Yang, Deok-Hwan; Lee, Sang Min; Kim, Sung-Hyun; Kim, Yang Soo; Kwak, Jae-Yong; Park, Jinny; Kim, Jin Young; Kim, Hoon-Gu; Kim, Byung Soo; Ryoo, Hun-Mo; Jang, Jun Ho; Kim, Min Kyoung; Kang, Hye Jin; Cho, In Sung; Mun, Yeung Chul; Jo, Deog-Yeon; Kim, Ho Young; Park, Byeong-Bae; Kim, Jin Seok

    2014-01-01

    We assessed the success rate of empirical antifungal therapy with itraconazole and evaluated risk factors for predicting the failure of empirical antifungal therapy. A multicenter, prospective, observational study was performed in patients with hematological malignancies who had neutropenic fever and received empirical antifungal therapy with itraconazole at 22 centers. A total of 391 patients who had abnormal findings on chest imaging tests (31.0%) or a positive result of enzyme immunoassay for serum galactomannan (17.6%) showed a 56.5% overall success rate. Positive galactomannan tests before the initiation of the empirical antifungal therapy (P=0.026, hazard ratio [HR], 2.28; 95% confidence interval [CI], 1.10-4.69) and abnormal findings on the chest imaging tests before initiation of the empirical antifungal therapy (P=0.022, HR, 2.03; 95% CI, 1.11-3.71) were significantly associated with poor outcomes for the empirical antifungal therapy. Eight patients (2.0%) had premature discontinuation of itraconazole therapy due to toxicity. It is suggested that positive galactomannan tests and abnormal findings on the chest imaging tests at the time of initiation of the empirical antifungal therapy are risk factors for predicting the failure of the empirical antifungal therapy with itraconazole. (Clinical Trial Registration on National Cancer Institute website, NCT01060462).

  5. Learning in tele-autonomous systems using Soar

    NASA Technical Reports Server (NTRS)

    Laird, John E.; Yager, Eric S.; Tuck, Christopher M.; Hucka, Michael

    1989-01-01

    Robo-Soar is a high-level robot arm control system implemented in Soar. Robo-Soar learns to perform simple block manipulation tasks using advice from a human. Following learning, the system is able to perform similar tasks without external guidance. It can also learn to correct its knowledge, using its own problem solving in addition to outside guidance. Robo-Soar corrects its knowledge by accepting advice about relevance of features in its domain, using a unique integration of analytic and empirical learning techniques.

  6. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran (AEOI). The chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. To evaluate the air kerma, correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for charge lost from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation; the correction factors were calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
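
    A minimal sketch of how multiplicative correction factors such as ke and ksc enter a free-air-chamber air-kerma evaluation; this is a simplified form of the primary-standard formalism (which includes further correction factors), and the charge and air-mass values are illustrative.

        def air_kerma(charge, air_mass, w_over_e=33.97, corrections=(1.0704, 0.9982)):
            """Simplified free-air-chamber air kerma (Gy).

            charge      : collected charge (C)
            air_mass    : mass of air in the collecting volume (kg)
            w_over_e    : mean energy expended per ion pair divided by the electron
                          charge, about 33.97 J/C for dry air
            corrections : multiplicative correction factors, here (ke, ksc) from above
            """
            kerma = charge / air_mass * w_over_e
            for k in corrections:
                kerma *= k
            return kerma

        print(air_kerma(charge=2.0e-12, air_mass=1.2e-7))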

  7. Evaluating Varied Label Designs for Use with Medical Devices: Optimized Labels Outperform Existing Labels in the Correct Selection of Devices and Time to Select.

    PubMed

    Bix, Laura; Seo, Do Chan; Ladoni, Moslem; Brunk, Eric; Becker, Mark W

    2016-01-01

    Effective standardization of medical device labels requires objective study of varied designs. Insufficient empirical evidence exists regarding how practitioners utilize and view labeling. Measure the effect of graphic elements (boxing information, grouping information, symbol use and color-coding) to optimize a label for comparison with those typical of commercial medical devices. Participants viewed 54 trials on a computer screen. Trials comprised two labels that were identical with regard to graphics, but differed in one aspect of information (e.g., one had latex, the other did not). Participants were instructed to select the label meeting a given criterion (e.g., latex-containing) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences and through a targeted e-mail to AST members. Symbol presence, color coding and grouping critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing optimal treatments (color coded, critical information grouped with symbols) were compared to two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit in the probability of correct choice (P < 0.0001; LSM: 97.3%; UCL: 98.4%; LCL: 95.5%) and in time to selection, as compared to the two labels we created based on commercial designs (LSM: 92.0%, UCL: 94.7%, LCL: 87.9%; and LSM: 89.8%, UCL: 93.0%, LCL: 85.3%). Our study provides data regarding design factors, namely color coding, symbol use, and grouping of critical information, that can be used to significantly enhance the performance of medical device labels.

  8. An Empirical Evaluation of Factor Reliability.

    ERIC Educational Resources Information Center

    Jackson, Douglas N.; Morf, Martin E.

    The psychometric reliability of a factor, defined as its generalizability across samples drawn from the same population of tests, is considered as a necessary precondition for the scientific meaningfulness of factor analytic results. A solution to the problem of generalizability is illustrated empirically on data from a set of tests designed to…

  9. Shared Genetic Background for Regulation of Mood and Sleep: Association of GRIA3 with Sleep Duration in Healthy Finnish Women

    PubMed Central

    Utge, Siddheshwar; Kronholm, Erkki; Partonen, Timo; Soronen, Pia; Ollila, Hanna M.; Loukola, Anu; Perola, Markus; Salomaa, Veikko; Porkka-Heiskanen, Tarja; Paunio, Tiina

    2011-01-01

    Study Objectives: Sleeping 7 to 8 hours per night appears to be optimal, since both shorter and longer sleep times are related to increased morbidity and mortality. Depressive disorder is almost invariably accompanied by disturbed sleep, leading to decreased sleep duration, and disturbed sleep may be a precipitating factor in the initiation of depressive illness. Here, we examined whether, in healthy individuals, sleep duration is associated with genes that we earlier found to be associated with depressive disorder. Design: Population-based molecular genetic study. Setting: Regression analysis of 23 risk variants for depressive disorder from 12 genes to sleep duration in healthy individuals. Participants: Three thousand, one hundred, forty-seven individuals (25–75 y) from population-based Health 2000 and FINRISK 2007 samples. Measurements and Results: We found a significant association of rs687577 from GRIA3 on the X-chromosome with sleep duration in women (permutation-based corrected empirical P = 0.00001, β = 0.27; Bonferroni corrected P = 0.0052; f = 0.11). The frequency of C/C genotype previously found to increase risk for depression in women was highest among those who slept for 8 hours or less in all age groups younger than 70 years. Its frequency decreased with the lengthening of sleep duration, and those who slept for 9 to 10 hours showed a higher frequency of C/A or A/A genotypes, when compared with the midrange sleepers (7-8 hours) (permutation-based corrected empirical P = 0.0003, OR = 1.81). Conclusions: The GRIA3 polymorphism that was previously found to be associated with depressive disorder in women showed an association with sleep duration in healthy women. Mood disorders and short sleep may share a common genetic background and biologic mechanisms that involve glutamatergic neurotransmission. Citation: Utge S; Kronholm E; Partonen T; Soronen P; Ollila HM; Loukola A; Perola M; Salomaa V; Porkka-Heiskanen T; Paunio T. Shared genetic background for regulation of mood and sleep: association of GRIA3 with sleep duration in healthy Finnish women. SLEEP 2011;34(10):1309-1316. PMID:21966062

  10. Dual disseminated infection with Nocardia farcinica and Mucor in a patient with systemic lupus erythematosus: a case report.

    PubMed

    de Clerck, Frederik; Van Ryckeghem, Florence; Depuydt, Pieter; Benoit, Dominque; Druwé, Patrick; Hugel, Arnika; Claeys, Geert; Cools, Piet; Decruyenaere, Johan

    2014-11-20

    Infections remain a major cause of morbidity and mortality in immunocompromised patients and require early diagnosis and treatment. However, correct diagnosis and treatment are often delayed by a multitude of factors. We report what we believe to be the first case of a combined disseminated infection with Nocardia and Mucor in a patient with systemic lupus erythematosus. A 74-year-old Caucasian woman with systemic lupus erythematosus presented with recurrent pneumonia. Despite empirical treatment with antibiotics, her condition gradually deteriorated. Microbiological sampling by thoracoscopy revealed the presence of Nocardia. Despite the institution of therapy for disseminated nocardiosis, she died of multi-organ failure. A post-mortem investigation confirmed nocardiosis, but showed concomitant disseminated mucormycosis infection as well. Members of the bacterial genus Nocardia and the fungal genus Mucor are ubiquitous in the environment, have the ability to spread to virtually any organ, and are remarkably resistant to appropriate therapy. Both pathogens can mimic other pathologies both on clinical and radiological investigations. Invasive sampling procedures are often needed to prove their presence. Establishing a timely, correct diagnosis and a specific treatment is essential for patient survival.

  11. Numerically stable, scalable formulas for parallel and online computation of higher-order multivariate central moments with arbitrary weights

    DOE PAGES

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...

    2016-03-29

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of the above-mentioned formulas, using the compound moments, for a practical large-scale scientific application.
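
    A minimal sketch of the kind of numerically stable one-pass and pairwise updates the abstract refers to, shown here only for the unweighted second-order case (mean and M2); the paper's arbitrary-order, weighted, and compound formulas generalize these updates.

        def one_pass_mean_m2(data):
            """Welford-style one-pass update of the mean and the second central
            moment M2 (sample variance = M2 / (count - 1)); avoids the cancellation
            problems of the naive sum-of-squares approach."""
            count, mean, m2 = 0, 0.0, 0.0
            for x in data:
                count += 1
                delta = x - mean
                mean += delta / count
                m2 += delta * (x - mean)
            return count, mean, m2

        def combine(a, b):
            """Pairwise (parallel) combination of two partial results (n, mean, M2)."""
            na, ma, m2a = a
            nb, mb, m2b = b
            n = na + nb
            delta = mb - ma
            return n, ma + delta * nb / n, m2a + m2b + delta * delta * na * nb / n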

  12. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F corrects the apparent variations in the data and leads to reasonable resistivities and sheet resistances. Here factor F is compared with other correction factors, namely F_ASTM and F_JIS.
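
    A minimal sketch of how a geometry correction factor such as F enters a four-point-probe measurement; the pi/ln 2 term is the ideal infinite-thin-sheet factor, and treating F as a simple multiplier on it is an assumption for illustration (the value of F for a disk sample and a given probe array comes from the analysis discussed above). All numbers are illustrative.

        import math

        def sheet_resistance(voltage, current, f_corr=1.0):
            """Four-point-probe sheet resistance (ohm/square), with a geometry
            correction factor f_corr applied to the ideal pi/ln(2) factor."""
            return (math.pi / math.log(2.0)) * (voltage / current) * f_corr

        def resistivity(voltage, current, thickness, f_corr=1.0):
            """Bulk resistivity (ohm*m) for a film of known thickness (m)."""
            return sheet_resistance(voltage, current, f_corr) * thickness

        print(resistivity(voltage=1.0e-3, current=1.0e-3, thickness=200e-9, f_corr=0.95))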

  13. How much detail is needed in modeling a transcranial magnetic stimulation figure-8 coil: Measurements and brain simulations

    PubMed Central

    Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.

    2017-01-01

    Background Despite the wide adoption of TMS, the spatial and temporal patterns of its neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how the magnetic field of a typical TMS coil should be modeled, and empirical validation of such models is limited. Methods We evaluate and empirically validate three models of a figure-of-eight TMS coil commonly used in published modeling studies, in order of increasing complexity: a simple circular coil model, a coil with in-plane spiral winding turns, and one with stacked spiral winding turns. We assess the electric fields induced by all three coil models in the motor cortex using a computational FEM model. Biot-Savart models of discretized wires were used to approximate the three coil models. We use a tailored MR-based phase mapping technique to obtain a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM-based simulations on a meshed 3D brain model consisting of five tissue types were performed, using two orthogonal coil orientations. Results Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. The thickness of the coil winding turns affects the induced electric field only minimally and does not influence the predicted activation. Conclusion TMS coil models used in FEM simulations should include the in-plane coil geometry in order to make reliable predictions of the incident field, the induced electric field, and the resulting neuronal activation. PMID:28640923
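
    A minimal Biot-Savart sketch of the discretized-wire field calculation mentioned in the Methods; the loop geometry, segment count, and field point are illustrative, and a realistic figure-of-eight coil model would add the in-plane spiral winding turns discussed above.

        import numpy as np

        MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

        def biot_savart(points, wire, current):
            """Static magnetic field (T) at `points` from a wire path discretized
            into straight segments (midpoint rule).

            points : (M, 3) field evaluation points (m)
            wire   : (N, 3) ordered vertices of the wire path (m)
            current: wire current (A)
            """
            seg = wire[1:] - wire[:-1]           # segment vectors dl
            mid = 0.5 * (wire[1:] + wire[:-1])   # segment midpoints
            b = np.zeros_like(points, dtype=float)
            for dl, r0 in zip(seg, mid):
                r = points - r0
                dist = np.linalg.norm(r, axis=1, keepdims=True)
                b += np.cross(dl, r) / dist**3
            return MU0 * current / (4.0 * np.pi) * b

        # Example: one circular winding of radius 2.5 cm, 200 segments, field point 1 cm above
        theta = np.linspace(0.0, 2.0 * np.pi, 201)
        loop = 0.025 * np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
        print(biot_savart(np.array([[0.0, 0.0, 0.01]]), loop, current=1.0))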

  14. Knowledge and Regulation of Cognition in College Science Students

    ERIC Educational Resources Information Center

    Roshanaei, Mehrnaz

    2014-01-01

    The research focused on three issues in college science students: whether there was empirical support for the two factor (knowledge of cognition and regulation of cognition) view of metacognition, whether the two factors were related to each other, and whether either of the factors was related to empirical measures of cognitive and metacognitive…

  15. CORRECTIONS FOR RACIAL DISPARITIES IN LAW ENFORCEMENT

    PubMed Central

    Griffin, Christopher L.; Sloan, Frank A.; Eldred, Lindsey M.

    2016-01-01

    Much empirical analysis has documented racial disparities at the beginning and end stages of a criminal case. However, our understanding about the perpetuation of — and even corrections for — differential outcomes as the process unfolds remains less than complete. This Article provides a comprehensive examination of criminal dispositions using all DWI cases in North Carolina during the period 2001–2011, focusing on several major decision points in the process. Starting with pretrial hearings and culminating in sentencing results, we track differences in outcomes by race and gender. Before sentencing, significant gaps emerge in the severity of pretrial release conditions that disadvantage black and Hispanic defendants. Yet when prosecutors decide whether to pursue charges, we observe an initial correction mechanism: Hispanic men are almost two-thirds more likely to have those charges dropped relative to white men. Although few cases survive after the plea bargaining stage, a second correction mechanism arises: Hispanic men are substantially less likely to receive harsher sentences and are sent to jail for significantly less time relative to white men. The first mechanism is based in part on prosecutors’ reviewing the strength of the evidence but much more on declining to invest scarce resources in the pursuit of defendants who fail to appear for trial. The second mechanism seems to follow more directly from judicial discretion to reverse decisions made by law enforcement. We discuss possible explanations for these novel empirical results and review methods for more precisely identifying causal mechanisms in criminal justice. PMID:28066033

  16. High throughput film dosimetry in homogeneous and heterogeneous media for a small animal irradiator

    PubMed Central

    Wack, L.; Ngwa, W.; Tryggestad, E.; Tsiamas, P.; Berbeco, R.; Ng, S.K.; Hesser, J.

    2013-01-01

    Purpose We have established a high-throughput Gafchromic film dosimetry protocol for narrow kilo-voltage beams in homogeneous and heterogeneous media for small-animal radiotherapy applications. The kV beam characterization is based on extensive Gafchromic film dosimetry data acquired in homogeneous and heterogeneous media. An empirical model is used for parameterization of depth and off-axis dependence of measured data. Methods We have modified previously published methods of film dosimetry to suit the specific tasks of the study. Unlike film protocols used in previous studies, our protocol employs simultaneous multichannel scanning and analysis of up to nine Gafchromic films per scan. A scanner and background correction were implemented to improve accuracy of the measurements. Measurements were taken in homogeneous and inhomogeneous phantoms at 220 kVp and a field size of 5 × 5 mm2. The results were compared against Monte Carlo simulations. Results Dose differences caused by variations in background signal were effectively removed by the corrections applied. Measurements in homogeneous phantoms were used to empirically characterize beam data in homogeneous and heterogeneous media. Film measurements in inhomogeneous phantoms and their empirical parameterization differed by about 2%–3%. The model differed from MC by about 1% (water, lung) to 7% (bone). Good agreement was found for measured and modelled off-axis ratios. Conclusions EBT2 films are a valuable tool for characterization of narrow kV beams, though care must be taken to eliminate disturbances caused by varying background signals. The usefulness of the empirical beam model in interpretation and parameterization of film data was demonstrated. PMID:23510532

  17. A Correction to the Stress-Strain Curve During Multistage Hot Deformation of 7150 Aluminum Alloy Using Instantaneous Friction Factors

    NASA Astrophysics Data System (ADS)

    Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui

    2018-04-01

    Multistage stress-strain curve correction based on an instantaneous friction factor was studied for axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated based on continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. Then, an instantaneous friction factor equation was fitted by mathematic analysis. After verification by comparing single-pass flow stress correction with traditional average friction factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and validated by multistage relative softening calculations. This research provides a broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and friction optimization in finite element analysis.

  18. Secondary use of empirical research data in medical ethics papers on gamete donation: forms of use and pitfalls.

    PubMed

    Provoost, Veerle

    2015-03-01

    This paper aims to provide a description of how authors publishing in medical ethics journals have made use of empirical research data in papers on the topic of gamete or embryo donation by means of references to studies conducted by others (secondary use). Rather than making a direct contribution to the theoretical methodological literature about the role empirical research data could play or should play in ethics studies, the focus is on the particular uses of these data and the problems that can be encountered with this use. In the selection of papers examined, apart from being used to describe the context, empirical evidence was mainly used to recount problems that needed solving. Few of the authors looked critically at the quality of the studies they quoted, and several instances were found of empirical data being used poorly or inappropriately. This study provides some initial baseline evidence that shows empirical data, in the form of references to studies, are sometimes being used in inappropriate ways. This suggests that medical ethicists should be more concerned about the quality of the empirical data selected, the appropriateness of the choice for a particular type of data (from a particular type of study) and the correct integration of this evidence in sound argumentation. Given that empirical data can be misused also when merely cited instead of reported, it may be worthwhile to explore good practice requirements for this type of use of empirical data in medical ethics.

  19. Reduction of atmospheric disturbances in PSInSAR measure technique based on ENVISAT ASAR data for Erta Ale Ridge

    NASA Astrophysics Data System (ADS)

    Kopeć, Anna

    2018-01-01

    Interferometric synthetic aperture radar (InSAR) is increasingly popular for investigating surface deformation associated with volcanism, earthquakes, landslides, and post-mining subsidence. The measurement accuracy depends on many factors, including surface, temporal, and geometric decorrelation and orbit errors; however, the largest challenge is the tropospheric delay. Spatial and temporal variations in temperature, pressure, and relative humidity are responsible for these delays. Many correction methods have been developed, but researchers are still searching for one that corrects interferograms consistently across different regions and times. This article examines corrections based on empirical phase-based methods, spectrometer measurements, and weather models. These methods were applied to ENVISAT ASAR data for the Erta Ale Ridge in the Afar Depression, East Africa.
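
    A minimal sketch of one common empirical phase-based correction, a linear phase-versus-elevation trend fitted to the unwrapped interferogram and removed; this illustrates the general idea rather than the specific implementations compared in the study.

        import numpy as np

        def remove_linear_phase_elevation(phase, elevation, mask=None):
            """Fit and subtract a linear phase-elevation trend from an unwrapped
            interferogram (phase in radians, elevation in meters; same 2D shape).
            `mask` optionally restricts the fit to reliable (e.g., non-deforming) pixels.
            Returns the corrected phase and the fitted (slope, offset)."""
            if mask is None:
                mask = np.isfinite(phase) & np.isfinite(elevation)
            slope, offset = np.polyfit(elevation[mask], phase[mask], 1)
            return phase - (slope * elevation + offset), (slope, offset)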

  20. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    This paper presents a comparison of analysis and flight test data for a drone aircraft equipped with an active flutter suppression system. Emphasis is placed on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are presented for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. In addition to presenting the mathematical models and a brief description of existing analytical techniques, an alternative analytical technique for obtaining closed-loop results is presented.

  1. Calculation of evapotranspiration: Recursive and explicit methods

    USDA-ARS?s Scientific Manuscript database

    Crop yield is proportional to crop evapotranspiration (ETc) and it is important to calculate ETc correctly. Methods to calculate ETc have combined empirical and theoretical approaches. The combination method was used to calculate potential ETp. It is a combination method because it combined the ener...

  2. Development of an Empirical Local Magnitude Formula for Northern Oklahoma

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Karimi, S.; Moores, A. O.

    2015-12-01

    In this paper we focus on determining a local magnitude formula for northern Oklahoma that is unbiased with distance, by empirically constraining the attenuation properties within the region of interest based on the amplitudes of observed seismograms. For regional networks detecting events over several hundred kilometres, distance correction terms play an important role in determining the magnitude of an event. Standard distance correction terms such as those of Hutton and Boore (1987) may have a significant bias with distance if applied in a region with different attenuation properties, resulting in incorrect magnitudes. We present data from a regional network of broadband seismometers installed in bedrock in northern Oklahoma. Events with magnitudes between 2.0 and 4.5, distributed evenly across this network, are considered. We find that existing models show a bias with respect to hypocentral distance. Observed amplitude measurements demonstrate a significant Moho bounce effect that mandates the use of a trilinear attenuation model in order to avoid bias in the distance correction terms. We present two different approaches to local magnitude calibration. The first maintains the classic definition of local magnitude as proposed by Richter. The second calibrates local magnitude so that it agrees with moment magnitude where a regional moment tensor can be computed; to this end, regional moment tensor solutions and moment magnitudes are computed for events with magnitude larger than 3.5. For both methods the new formula results in magnitudes systematically lower than previous values computed with Eaton's (1992) model. We compare the resulting magnitudes and discuss the benefits and drawbacks of each method. Our results highlight the importance of correct calibration of the distance correction terms for accurate local magnitude assessment in regional networks.
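
    For reference, a minimal sketch of the Hutton and Boore (1987) local magnitude formula cited above, as it is commonly quoted for southern California; the abstract's point is precisely that such distance correction coefficients are region-specific and must be re-calibrated (e.g., with a trilinear attenuation term) before use in northern Oklahoma. The example values are illustrative.

        import math

        def ml_hutton_boore(amplitude_mm, hypocentral_distance_km):
            """Local magnitude with the Hutton & Boore (1987) distance correction.

            amplitude_mm            : peak amplitude on a (synthesized) Wood-Anderson
                                      record, in millimetres
            hypocentral_distance_km : hypocentral distance in kilometres
            """
            r = hypocentral_distance_km
            return (math.log10(amplitude_mm)
                    + 1.11 * math.log10(r / 100.0)
                    + 0.00189 * (r - 100.0)
                    + 3.0)

        print(ml_hutton_boore(amplitude_mm=0.5, hypocentral_distance_km=120.0))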

  3. Survey and analysis of research on supersonic drag-due-to-lift minimization with recommendations for wing design

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Mann, Michael J.

    1992-01-01

    A survey of research on drag-due-to-lift minimization at supersonic speeds, including a study of the effectiveness of current design and analysis methods was conducted. The results show that a linearized theory analysis with estimated attainable thrust and vortex force effects can predict with reasonable accuracy the lifting efficiency of flat wings. Significantly better wing performance can be achieved through the use of twist and camber. Although linearized theory methods tend to overestimate the amount of twist and camber required for a given application and provide an overly optimistic performance prediction, these deficiencies can be overcome by implementation of recently developed empirical corrections. Numerous examples of the correlation of experiment and theory are presented to demonstrate the applicability and limitations of linearized theory methods with and without empirical corrections. The use of an Euler code for the estimation of aerodynamic characteristics of a twisted and cambered wing and its application to design by iteration are discussed.

  4. Direct Extraction of Tumor Response Based on Ensemble Empirical Mode Decomposition for Image Reconstruction of Early Breast Cancer Detection by UWB.

    PubMed

    Li, Qinwei; Xiao, Xia; Wang, Liang; Song, Hang; Kono, Hayato; Liu, Peifang; Lu, Hong; Kikkawa, Takamaro

    2015-10-01

    A direct extraction method of tumor response based on ensemble empirical mode decomposition (EEMD) is proposed for early breast cancer detection by ultra-wideband (UWB) microwave imaging. With this approach, the image reconstruction for tumor detection can be realized using only signals extracted from the as-detected waveforms; the calibration process used in previous research to obtain reference waveforms, which represent signals detected from a tumor-free model, is not required. The correctness of the method is demonstrated by successfully detecting a 4 mm tumor located inside the glandular region of one breast model and a tumor located at the interface between the gland and the fat. The reliability of the method is checked by distinguishing a tumor buried in glandular tissue whose dielectric constant is 35. The feasibility of the method is confirmed by the correct tumor information obtained in both simulation results and experimental results for a realistic 3-D printed breast phantom.
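
    A minimal sketch of the EEMD ensemble-averaging idea underlying the extraction method (add white noise, decompose each noisy copy, average the resulting IMFs); the `emd` decomposition routine is assumed to be supplied by the user or a library, and the trial count and noise level are illustrative.

        import numpy as np

        def eemd(signal, emd, trials=100, noise_std=0.2, rng=None):
            """Ensemble EMD: average the IMFs obtained from noise-perturbed copies.

            signal    : 1D array (e.g., an as-detected UWB waveform)
            emd       : callable returning an (n_imfs, len(signal)) array of IMFs
                        for a 1D input (assumed available, e.g., from an EMD library)
            trials    : number of noise realizations in the ensemble
            noise_std : white-noise standard deviation relative to the signal std
            """
            rng = np.random.default_rng() if rng is None else rng
            scale = noise_std * np.std(signal)
            decompositions = [emd(signal + rng.normal(0.0, scale, signal.shape))
                              for _ in range(trials)]
            # EMD may return different IMF counts per trial; keep the common count
            k = min(imfs.shape[0] for imfs in decompositions)
            return np.mean([imfs[:k] for imfs in decompositions], axis=0)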

  5. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. Projected principal component analysis is employed to enhance the accuracy of the inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
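
    A minimal sketch of the benchmark the abstract mentions (regressing the target on principal-component factors extracted from the predictor panel); it illustrates the factor-model setup only, not the sufficient forecasting projection itself, and all dimensions are illustrative.

        import numpy as np

        def pca_factor_forecast(X, y, n_factors):
            """Extract factors from a (T, p) predictor panel by PCA, then regress
            the (T,) target on the estimated factors by ordinary least squares.
            Returns (factors, coefficients, in-sample fitted values)."""
            Xc = X - X.mean(axis=0)
            u, s, _ = np.linalg.svd(Xc, full_matrices=False)   # principal components
            factors = u[:, :n_factors] * s[:n_factors]
            design = np.column_stack([np.ones(len(y)), factors])
            coef, *_ = np.linalg.lstsq(design, y, rcond=None)
            return factors, coef, design @ coef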

  6. Role of Statistical Random-Effects Linear Models in Personalized Medicine

    PubMed Central

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-01-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization. PMID:23467392

  7. Spatial analysis of temperature (BHT/DST) data and consequences for heat-flow determination in sedimentary basins

    USGS Publications Warehouse

    Forster, A.; Merriam, D.F.; Davis, J.C.

    1997-01-01

    Large numbers of bottom-hole temperatures (BHTs) and temperatures measured during drill-stem tests (DSTs) are available in areas explored for hydrocarbons, but their usefulness for estimating geothermal gradients and heat-flow density is limited. We investigated a large data set of BHT and DST measurements taken in boreholes in the American Midcontinent, a geologically uniform stable cratonic area, and propose an empirical correction for BHTs based on relationships between BHTs, DSTs, and thermal logs. This empirical correction is compared with similar approaches determined for other areas. The data were analyzed by multivariate statistics prior to the BHT correction to identify anomalous measurements and quantify external influences. Spatial patterns in temperature measurements for major stratigraphic units outline relations to regional structure. Comparison of temperature and structure trend-surface residuals reveals a relationship between temperature highs and local structure highs. The anticlines, developed by continuous but intermittent movement of basement fault blocks in the Late Paleozoic, are subtle features having closures of 10-30 m and contain relatively small hydrocarbon reservoirs. The temperature anomalies of the order of 5-7 °C may reflect fluids moving upward along fractures and faults, rather than changes in thermal conductivity resulting from different pore fluids. © Springer-Verlag 1997.

  9. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Output factors measured with different detectors are known to deviate increasingly as field size decreases, and no accepted standard exists to resolve these differences. We observed differences of up to 14% between output factors measured with two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size was derived for each detector, and the measured output factors were then multiplied by the calculated correction factors. EBT3 Gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3% to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small-field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the maximum deviation between the two detectors from 14.8% to 3.4%.
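
    A minimal sketch of the two steps described in the Methods: daisy-chain the small-field reading to a large reference field through an intermediate field, then multiply by a published field-size-dependent Monte Carlo correction factor. Variable names and the intermediate-field choice are illustrative.

        def daisy_chain_output_factor(m_small, m_int_small_det, m_int_ref_det, m_ref):
            """Output factor relative to the reference field via daisy chaining.

            m_small         : small-field reading with the small-field detector
            m_int_small_det : intermediate-field reading with the same detector
            m_int_ref_det   : intermediate-field reading with the reference-class detector
            m_ref           : reference-field (e.g., 10.4 cm x 10.4 cm) reading with it
            """
            return (m_small / m_int_small_det) * (m_int_ref_det / m_ref)

        def corrected_output_factor(measured_of, k_mc):
            """Apply a published Monte Carlo derived correction factor to a
            measured (detector-reading-ratio) output factor."""
            return measured_of * k_mc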

  10. A sustainable building promotes pro-environmental behavior: an observational study on food disposal.

    PubMed

    Wu, David W-L; DiGiacomo, Alessandra; Kingstone, Alan

    2013-01-01

    In order to develop a more sustainable society, the wider public will need to increase engagement in pro-environmental behaviors. Psychological research on pro-environmental behaviors has thus far focused on identifying individual factors that promote such behavior, designing interventions based on these factors, and evaluating these interventions. Contextual factors that may also influence behavior at an aggregate level have been largely ignored. In the current study, we test a novel hypothesis--whether simply being in a sustainable building can elicit environmentally sustainable behavior. We find support for our hypothesis: people are significantly more likely to correctly choose the proper disposal bin (garbage, compost, recycling) in a building designed with sustainability in mind compared to a building that was not. Questionnaires reveal that these results are not due to self-selection biases. Our study provides empirical support that one's surroundings can have a profound and positive impact on behavior. It also suggests the opportunity for a new line of research that bridges psychology, design, and policy-making in an attempt to understand how the human environment can be designed and used as a subtle yet powerful tool to encourage and achieve aggregate pro-environmental behavior.

  11. A Sustainable Building Promotes Pro-Environmental Behavior: An Observational Study on Food Disposal

    PubMed Central

    Wu, David W.–L.; DiGiacomo, Alessandra; Kingstone, Alan

    2013-01-01

    In order to develop a more sustainable society, the wider public will need to increase engagement in pro-environmental behaviors. Psychological research on pro-environmental behaviors has thus far focused on identifying individual factors that promote such behavior, designing interventions based on these factors, and evaluating these interventions. Contextual factors that may also influence behavior at an aggregate level have been largely ignored. In the current study, we test a novel hypothesis – whether simply being in a sustainable building can elicit environmentally sustainable behavior. We find support for our hypothesis: people are significantly more likely to correctly choose the proper disposal bin (garbage, compost, recycling) in a building designed with sustainability in mind compared to a building that was not. Questionnaires reveal that these results are not due to self-selection biases. Our study provides empirical support that one's surroundings can have a profound and positive impact on behavior. It also suggests the opportunity for a new line of research that bridges psychology, design, and policy-making in an attempt to understand how the human environment can be designed and used as a subtle yet powerful tool to encourage and achieve aggregate pro-environmental behavior. PMID:23326521

  12. Endohedral gallide cluster superconductors and superconductivity in ReGa5.

    PubMed

    Xie, Weiwei; Luo, Huixia; Phelan, Brendan F; Klimczuk, Tomasz; Cevallos, Francois Alexandre; Cava, Robert Joseph

    2015-12-22

    We present transition metal-embedded (T@Gan) endohedral Ga-clusters as a favorable structural motif for superconductivity and develop empirical, molecule-based, electron counting rules that govern the hierarchical architectures that the clusters assume in binary phases. Among the binary T@Gan endohedral cluster systems, Mo8Ga41, Mo6Ga31, Rh2Ga9, and Ir2Ga9 are all previously known superconductors. The well-known exotic superconductor PuCoGa5 and related phases are also members of this endohedral gallide cluster family. We show that electron-deficient compounds like Mo8Ga41 prefer architectures with vertex-sharing gallium clusters, whereas electron-rich compounds, like PdGa5, prefer edge-sharing cluster architectures. The superconducting transition temperatures are highest for the electron-poor, corner-sharing architectures. Based on this analysis, the previously unknown endohedral cluster compound ReGa5 is postulated to exist at an intermediate electron count and a mix of corner sharing and edge sharing cluster architectures. The empirical prediction is shown to be correct and leads to the discovery of superconductivity in ReGa5. The Fermi levels for endohedral gallide cluster compounds are located in deep pseudogaps in the electronic densities of states, an important factor in determining their chemical stability, while at the same time limiting their superconducting transition temperatures.

  13. Cospatial Longslit UV-Optical Spectra of Ten Galactic Planetary Nebulae with HST STIS: Description of observations, global emission-line measurements, and empirical CNO abundances

    NASA Astrophysics Data System (ADS)

    Dufour, R. J.; Kwitter, K. B.; Shaw, R. A.; Balick, B.; Henry, R. B. C.; Miller, T. R.; Corradi, R. L. M.

    2015-01-01

    This poster describes details of HST Cycle 19 (program GO 12600), which was awarded 32 orbits of observing time with STIS to obtain the first cospatial UV-optical spectra of 10 Galactic planetary nebulae (PNe). The observational goal was to measure the UV emission lines of carbon and nitrogen with unprecedented S/N and wavelength and spatial resolution along the disk of each object over a wavelength range of 1150-10270 Å. The PNe were chosen such that each possessed a near-solar metallicity but the group together spanned a broad range in N/O. This poster concentrates on describing the observations, emission-line measurements integrated along the entire slit lengths, ionic abundances, and estimated total elemental abundances using empirical ionization correction factors and the ELSA code. Related posters by co-authors in this session concentrate on analyzing CNO abundances, progenitor masses and nebular properties of the best-observed targets using photoionization modeling of the global emission-line measurements [Henry et al.] or detailed analyses of spatial variations in electron temperatures, densities, and abundances along the sub-arcsecond resolution slits [Miller et al. & Shaw et al.]. We gratefully acknowledge AURA/STScI for the GO 12600 program support, both observational and financial.

  14. Dispersion correction derived from first principles for density functional theory and Hartree-Fock theory.

    PubMed

    Guidez, Emilie B; Gordon, Mark S

    2015-03-12

    The modeling of dispersion interactions in density functional theory (DFT) is commonly performed using an energy correction that involves empirically fitted parameters for all atom pairs of the system investigated. In this study, the first-principles-derived dispersion energy from the effective fragment potential (EFP) method is implemented for the density functional theory (DFT-D(EFP)) and Hartree-Fock (HF-D(EFP)) energies. Overall, DFT-D(EFP) performs similarly to the semiempirical DFT-D corrections for the test cases investigated in this work. HF-D(EFP) tends to underestimate binding energies and overestimate intermolecular equilibrium distances, relative to coupled cluster theory, most likely due to incomplete accounting for electron correlation. Overall, this first-principles dispersion correction yields results that are in good agreement with coupled-cluster calculations at a low computational cost.

  15. Communications in public health emergency preparedness: a systematic review of the literature.

    PubMed

    Savoia, Elena; Lin, Leesa; Viswanath, Kasisomayajula

    2013-09-01

    During a public health crisis, public health agencies engage in a variety of public communication efforts to inform the population, encourage the adoption of preventive behaviors, and limit the impact of adverse events. Given the importance of communication to the public in public health emergency preparedness, it is critical to examine the extent to which this field of study has received attention from the scientific community. We conducted a systematic literature review to describe current research in the area of communication to the public in public health emergency preparedness, focusing on the association between sociodemographic and behavioral factors and communication as well as preparedness outcomes. Articles were searched in PubMed and Embase and reviewed by 2 independent reviewers. A total of 131 articles were included for final review. Fifty-three percent of the articles were empirical, of which 74% were population-based studies, and 26% used information environment analysis techniques. None had an experimental study design. Population-based studies were rarely supported by theoretical models and mostly relied on a cross-sectional study design. Consistent results were reported on the association between population socioeconomic factors and public health emergency preparedness communication and preparedness outcomes. Our findings show the need for empirical research to determine what type of communication messages can be effective in achieving preparedness outcomes across various population groups. They suggest that a real-time analysis of the information environment is valuable in knowing what is being communicated to the public and could be used for course correction of public health messages during a crisis.

  16. Communications in Public Health Emergency Preparedness: A Systematic Review of the Literature

    PubMed Central

    Savoia, Elena; Viswanath, Kasisomayajula

    2013-01-01

    During a public health crisis, public health agencies engage in a variety of public communication efforts to inform the population, encourage the adoption of preventive behaviors, and limit the impact of adverse events. Given the importance of communication to the public in public health emergency preparedness, it is critical to examine the extent to which this field of study has received attention from the scientific community. We conducted a systematic literature review to describe current research in the area of communication to the public in public health emergency preparedness, focusing on the association between sociodemographic and behavioral factors and communication as well as preparedness outcomes. Articles were searched in PubMed and Embase and reviewed by 2 independent reviewers. A total of 131 articles were included for final review. Fifty-three percent of the articles were empirical, of which 74% were population-based studies, and 26% used information environment analysis techniques. None had an experimental study design. Population-based studies were rarely supported by theoretical models and mostly relied on a cross-sectional study design. Consistent results were reported on the association between population socioeconomic factors and public health emergency preparedness communication and preparedness outcomes. Our findings show the need for empirical research to determine what type of communication messages can be effective in achieving preparedness outcomes across various population groups. They suggest that a real-time analysis of the information environment is valuable in knowing what is being communicated to the public and could be used for course correction of public health messages during a crisis. PMID:24041193

  17. A new twist on an old regression: transfer of chemicals to beef and milk in human and ecological risk assessment.

    PubMed

    Hendriks, A Jan; Smítková, Hana; Huijbregts, Mark A J

    2007-11-01

    Exposure of humans to chemicals in beef or milk is part of almost all risk evaluation procedures carried out to reduce emissions or to remediate sites. Concentrations of substances in these livestock products are often estimated using log-log regressions that relate the biotransfer factor BTF to the octanol-water partition ratio Kow. However, the correctness of these empirical correlations has been questioned. Here, we compare them to the mechanistic model OMEGA that describes the distribution of substances in organisms by integrating theory on chemical fugacity and biological allometry. OMEGA has been calibrated and validated on thousands of laboratory and field data, reflecting many chemical substances and biological species. Overall fluxes of water, food, tissue (growth), milk and stable substances calculated by OMEGA are within a factor of two from independent data obtained in experiments. Rate constants measured for elimination of individual compounds of a recalcitrant nature vary around the level expected from the model for output to faeces and milk. Both data and model suggest that biotransfer BTF of stable substances to beef and milk is independent of the octanol-water partition ratio Kow in the range of 10^3-10^6. This contradicts empirical regressions including stable and labile compounds. As expected, levels of labile substances vary widely around a tentative indication derived from the model. Transformation and accumulation of labile substances remains highly specific for the chemical and organism concerned but depends weakly on the octanol-water partition ratio Kow. Several possibilities for additional refinement are identified.
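    To make the conventional approach concrete, the following sketch fits the log-log biotransfer regression log10(BTF) = a + b·log10(Kow) to hypothetical beef data; the data points and the resulting coefficients are illustrative only and are not taken from the paper.

    import numpy as np

    log_kow = np.array([3.0, 4.0, 5.0, 6.0])      # hypothetical chemicals
    log_btf = np.array([-5.8, -5.0, -4.1, -3.4])  # hypothetical log10 BTF values (d/kg)

    b, a = np.polyfit(log_kow, log_btf, 1)        # slope, intercept
    print(f"log10(BTF) = {a:.2f} + {b:.2f} * log10(Kow)")

    # The OMEGA-based result quoted above implies that, for stable substances,
    # BTF is roughly independent of Kow between 10^3 and 10^6, i.e. the fitted
    # slope should be near zero over that range.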

  18. Identification of Unexpressed Premises and Argumentation Schemes by Students in Secondary School.

    ERIC Educational Resources Information Center

    van Eemeren, Frans H.; And Others

    1995-01-01

    Reports on exploratory empirical investigations on the performances of Dutch secondary education students in identifying unexpressed premises and argumentation schemes. Finds that, in the absence of any disambiguating contextual information, unexpressed major premises and non-syllogistic premises are more often correctly identified than…

  19. Exploring the limit of accuracy for density functionals based on the generalized gradient approximation: Local, global hybrid, and range-separated hybrid functionals with and without dispersion corrections

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2014-03-25

    The limit of accuracy for semi-empirical generalized gradient approximation (GGA) density functionals is explored in this paper by parameterizing a variety of local, global hybrid, and range-separated hybrid functionals. The training methodology employed differs from conventional approaches in 2 main ways: (1) Instead of uniformly truncating the exchange, same-spin correlation, and opposite-spin correlation functional inhomogeneity correction factors, all possible fits up to fourth order are considered, and (2) Instead of selecting the optimal functionals based solely on their training set performance, the fits are validated on an independent test set and ranked based on their overall performance on the training and test sets. The 3 different methods of accounting for exchange are trained both with and without dispersion corrections (DFT-D2 and VV10), resulting in a total of 491 508 candidate functionals. For each of the 9 functional classes considered, the results illustrate the trade-off between improved training set performance and diminished transferability. Since all 491 508 functionals are uniformly trained and tested, this methodology allows the relative strengths of each type of functional to be consistently compared and contrasted. Finally, the range-separated hybrid GGA functional paired with the VV10 nonlocal correlation functional emerges as the most accurate form for the present training and test sets, which span thermochemical energy differences, reaction barriers, and intermolecular interactions involving lighter main group elements.

  20. More efficient parameter estimates for factor analysis of ordinal variables by ridge generalized least squares.

    PubMed

    Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying

    2017-11-01

    Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.

  1. Ambient dose equivalent and effective dose from scattered x-ray spectra in mammography for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations.

    PubMed

    Künzel, R; Herdade, S B; Costa, P R; Terini, R A; Levenhagen, R S

    2006-04-21

    In this study, scattered x-ray distributions were produced by irradiating a tissue equivalent phantom under clinical mammographic conditions by using Mo/Mo, Mo/Rh and W/Rh anode/filter combinations, for 25 and 30 kV tube voltages. Energy spectra of the scattered x-rays have been measured with a Cd0.9Zn0.1Te (CZT) detector for scattering angles between 30° and 165°. Measurement and correction processes have been evaluated through the comparison between the values of the half-value layer (HVL) and air kerma calculated from the corrected spectra and measured with an ionization chamber in a nonclinical x-ray system with a W/Mo anode/filter combination. The shape of the corrected x-ray spectra measured in the nonclinical system was also compared with those calculated using semi-empirical models published in the literature. Scattered x-ray spectra measured in the clinical x-ray system have been characterized through the calculation of HVL and mean photon energy. Values of the air kerma, ambient dose equivalent and effective dose have been evaluated through the corrected x-ray spectra. Mean conversion coefficients relating the air kerma to the ambient dose equivalent and to the effective dose from the scattered beams for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations were also evaluated. Results show that for the scattered radiation beams the ambient dose equivalent provides an overestimate of the effective dose by a factor of about 5 in the mammography energy range. These results can be used in the control of the dose limits around a clinical unit and in the calculation of more realistic protective shielding barriers in mammography.

  2. Distress Tolerance and Psychopathological Symptoms and Disorders: A Review of the Empirical Literature among Adults

    PubMed Central

    Leyro, Teresa M.; Zvolensky, Michael J.; Bernstein, Amit

    2010-01-01

    In the present paper, we review theory and empirical study of distress tolerance, an emerging risk factor candidate for various forms of psychopathology. Despite the long-standing interest in, and promise of work on, distress tolerance for understanding adult psychopathology, there has not been a comprehensive review of the extant empirical literature focused on the construct. As a result, a comprehensive synthesis of theoretical and empirical scholarship on distress tolerance including integration of extant research on the relations between distress tolerance and psychopathology is lacking. Inspection of the scientific literature indicates that there are a number of promising ways to conceptualize and measure distress tolerance, as well as documented relations between distress tolerance factor(s) and psychopathological symptoms and disorders. Although promising, there also is notable conceptual and operational heterogeneity across the distress tolerance literature(s). Moreover, a number of basic questions remain unanswered regarding the associations between distress tolerance and other risk and protective factors and processes, as well as its putative role(s) in vulnerability for, and resilience to, psychopathology. Thus, the current paper provides a comprehensive review of past and contemporary theory and research and proposes key areas for future empirical study of this construct. PMID:20565169

  3. "Falsifiability is not optional": Correction to LeBel et al. (2017).

    PubMed

    2017-11-01

    Reports an error in "Falsifiability is not optional" by Etienne P. LeBel, Derek Berger, Lorne Campbell and Timothy J. Loving (Journal of Personality and Social Psychology, 2017[Aug], Vol 113[2], 254-261). In the reply, there were two errors in the References list. The publishing year for the 14th and 21st articles was cited incorrectly as 2016. The in-text acronym associated with these citations should read instead as FER2017 and LCL2017. The correct References list citations should read as follows, respectively: Finkel, E. J., Eastwick, P. W., & Reis, H. T. (2017). Replicability and other features of a high-quality science: Toward a balanced and empirical approach. Journal of Personality and Social Psychology, 113, 244-253. http://dx.doi.org/10.1037/pspi0000075 LeBel, E. P., Campbell, L., & Loving, T. J. (2017). Benefits of open and high-powered research outweigh costs. Journal of Personality and Social Psychology, 113, 230-243. http://dx.doi.org/10.1037/pspi0000049. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2017-30567-003.) Finkel, Eastwick, and Reis (2016; FER2016) argued the post-2011 methodological reform movement has focused narrowly on replicability, neglecting other essential goals of research. We agree multiple scientific goals are essential, but argue, however, a more fine-grained language, conceptualization, and approach to replication is needed to accomplish these goals. Replication is the general empirical mechanism for testing and falsifying theory. Sufficiently methodologically similar replications, also known as direct replications, test the basic existence of phenomena and ensure cumulative progress is possible a priori. In contrast, increasingly methodologically dissimilar replications, also known as conceptual replications, test the relevance of auxiliary hypotheses (e.g., manipulation and measurement issues, contextual factors) required to productively investigate validity and generalizability. Without prioritizing replicability, a field is not empirically falsifiable. We also disagree with FER2016's position that "bigger samples are generally better, but . . . that very large samples could have the downside of commandeering resources that would have been better invested in other studies" (abstract). We identify problematic assumptions involved in FER2016's modifications of our original research-economic model, and present an improved model that quantifies when (and whether) it is reasonable to worry that increasing statistical power will engender potential trade-offs. Sufficiently powering studies (i.e., >80%) maximizes both research efficiency and confidence in the literature (research quality). Given that we are in agreement with FER2016 on all key open science points, we are eager to start seeing the accelerated rate of cumulative knowledge development of social psychological phenomena such a sufficiently transparent, powered, and falsifiable approach will generate. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Essays on the Role of Noncognitive Skills in Decision-Making

    ERIC Educational Resources Information Center

    McGee, Andrew Dunstan

    2010-01-01

    While "ability" has long featured prominently in economic models and empirical studies of labor markets, economists have only recently begun to consider how personality and attitudes--noncognitive factors--influence behavior both from a theoretical and empirical standpoint. This dissertation incorporates noncognitive factors into…

  5. Photon Recollision Probability: a Useful Concept for Cross Scale Consistency Check between Leaf Area Index and Foliage Clumping Products

    NASA Astrophysics Data System (ADS)

    Pisek, J.

    2017-12-01

    Clumping index (CI) is the measure of foliage aggregation relative to a random distribution of leaves in space. CI is an important factor for the correct quantification of true leaf area index (LAI). Global and regional scale CI maps have been generated from various multi-angle sensors based on an empirical relationship with the normalized difference between hotspot and darkspot (NDHD) index (Chen et al., 2005). Ryu et al. (2011) suggested that accurate calculation of radiative transfer in a canopy, important for controlling gross primary productivity (GPP) and evapotranspiration (ET) (Baldocchi and Harley, 1995), should be possible by integrating CI with incoming solar irradiance and LAI from MODIS land and atmosphere products. It should be noted that MODIS LAI/FPAR product uses internal non-empirical, stochastic equations for parameterization of foliage clumping. This raises a question if integration of the MODIS LAI product with empirically-based CI maps does not introduce any inconsistencies. Here, the consistency is examined independently through the `recollision probability theory' or `p-theory' (Knyazikhin et al., 1998) along with raw LAI-2000/2200 Plant Canopy Analyzer (PCA) data from > 30 sites, surveyed across a range of vegetation types. The theory predicts that the amount of radiation scattered by a canopy should depend only on the wavelength and the spectrally invariant canopy structural parameter p. The parameter p is linked to the foliage clumping (Stenberg et al., 2016). Results indicate that integration of the MODIS LAI product with empirically-based CI maps is feasible. Importantly, for the first time it is shown that it is possible to obtain p values for any location solely from Earth Observation data. This is very relevant for future applications of photon recollision probability concept for global and local monitoring of vegetation using Earth Observation data.

  6. Sensitivity Analysis of Empirical Results on Civil War Onset

    ERIC Educational Resources Information Center

    Hegre, Havard; Sambanis, Nicholas

    2006-01-01

    In the literature on civil war onset, several empirical results are not robust or replicable across studies. Studies use different definitions of civil war and analyze different time periods, so readers cannot easily determine if differences in empirical results are due to those factors or if most empirical results are just not robust. The authors…

  7. Why Sex Based Language Differences are Elusive.

    ERIC Educational Resources Information Center

    Tyler, Mary

    Paradoxically, linguists' speculations about sex differences in language use are highly plausible and yet have received little empirical support from well controlled studies. An experiment was designed to correct a flaw in earlier methodologies by sampling precisely the kinds of situations in which predicted differences (e.g., swearing,…

  8. Insurance within the Firm

    ERIC Educational Resources Information Center

    Guiso, Luigi; Pistaferri, Luigi; Schivardi, Fabiano

    2005-01-01

    We evaluate the allocation of risk between firms and their workers using matched employer-employee panel data. Unlike previous contributions, this paper focuses on idiosyncratic shocks to the firm, which are the correct empirical counterpart of the theoretical notion of diversifiable risk. We allow for both temporary and permanent shocks to output…

  9. Resistivity of liquid metals on Veljkovic-Slavic pseudopotential

    NASA Astrophysics Data System (ADS)

    Abdel-Azez, Khalef

    1996-04-01

    An empirical form of screened model pseudopotential, proposed by Veljkovic and Slavic, is exploited for the calculation of resistivity of seven liquid metals through the correct re-determination of its parameters. The model derives qualitative support from the close agreement obtained between the computed results and the experiment.

  10. Identification of metastable states in peptide's dynamics

    NASA Astrophysics Data System (ADS)

    Ruzhytska, Svitlana; Jacobi, Martin Nilsson; Jensen, Christian H.; Nerukh, Dmitry

    2010-10-01

    A recently developed spectral method for identifying metastable states in Markov chains is used to analyze the conformational dynamics of a four-residue peptide valine-proline-alanine-leucine. We compare our results to empirically defined conformational states and show that the found metastable states correctly reproduce the conformational dynamics of the system.
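    A toy sketch of the spectral idea (not the authors' implementation or system): for a Markov transition matrix with two weakly coupled blocks, the eigenvalues close to 1 and the sign structure of the corresponding slow eigenvector identify the metastable sets. The 4-state chain below is purely illustrative.

    import numpy as np

    # Two weakly coupled 2-state blocks (each row sums to 1)
    T = np.array([[0.90, 0.09, 0.01, 0.00],
                  [0.09, 0.90, 0.01, 0.00],
                  [0.00, 0.01, 0.90, 0.09],
                  [0.00, 0.01, 0.09, 0.90]])

    vals, vecs = np.linalg.eig(T.T)           # left eigenvectors of T
    order = np.argsort(-vals.real)
    second = vecs[:, order[1]].real           # slowest non-stationary mode
    labels = (second > 0).astype(int)         # sign split -> metastable sets
    print("leading eigenvalues:", np.round(vals.real[order][:3], 3))
    print("metastable assignment:", labels)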

  11. Reasoning in Young Children: Fantasy and Information Retrieval.

    ERIC Educational Resources Information Center

    Markovits, Henry; And Others

    1996-01-01

    A model of conditional reasoning predicted that children under 12 would respond correctly to questions of uncertain logical form if premises and context enabled them to access counterexamples from memory, and that children's performance with uncertain logical forms would decrease when empirically true premises are presented in a fantasy context.…

  12. Geometry-dependent penetration fields of superconducting Bi2Sr2CaCu2O8+δ platelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curran, P. J.; Clem, J. R.; Bending, S. J.

    Magneto-optical imaging has been used to study vortex penetration into regular polygon-shaped Bi2Sr2CaCu2O8+δ platelets with various geometries (disks, pentagons, squares, and triangles) but known fixed areas. In all cases we observe an exponential dependence of the field of first penetration, Hp, on temperature, consistent with a dominant Bean-Livingston barrier for pancake vortices at our measurement temperatures (45-80 K). However, the penetration field consistently decreases with decreasing degree of sample symmetry, in stark contrast to conventional estimates of demagnetization factors using equivalent ellipsoids based on inscribed circles, which predict the reverse trend. Surprisingly, this observation does not appear to have been reported in the literature before. We demonstrate empirically that estimates using equivalent ellipsoids based on circumscribed circles predict the correct qualitative experimental trend in Hp. Our work has important implications for the estimation of appropriate effective demagnetization factors for flux penetration into arbitrarily shaped superconducting bodies.

  13. Geometry-dependent penetration fields in superconducting Bi2Sr2CaCu2O8+δ platelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curran, P. J.; Clem, J. R.; Bending, S. J.

    Magneto-optical imaging has been used to study vortex penetration into regular polygon-shaped Bi2Sr2CaCu2O8+δ platelets with various geometries (disks, pentagons, squares, and triangles) but known fixed areas. In all cases we observe an exponential dependence of the field of first penetration, Hp, on temperature, consistent with a dominant Bean-Livingston barrier for pancake vortices at our measurement temperatures (45-80 K). However, the penetration field consistently decreases with decreasing degree of sample symmetry, in stark contrast to conventional estimates of demagnetization factors using equivalent ellipsoids based on inscribed circles, which predict the reverse trend. Surprisingly, this observation does not appear to have been reported in the literature before. We demonstrate empirically that estimates using equivalent ellipsoids based on circumscribed circles predict the correct qualitative experimental trend in Hp. Our work has important implications for the estimation of appropriate effective demagnetization factors for flux penetration into arbitrarily shaped superconducting bodies.

  14. An Empirical Analysis of the Default Rate of Informal Lending—Evidence from Yiwu, China

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Yu, Xiaobo; Du, Juan; Ji, Feng

    This study empirically analyzes the underlying factors contributing to the default rate of informal lending. The paper uses snowball-sampling interviews to collect data and a logistic regression model to explore the specific factors. The results of these analyses support the explanation of how informal lending differs from commercial loans. Factors that contribute to the default rate have particular attributes, while sharing some similarities with commercial bank or FICO credit-scoring indices. Finally, our concluding remarks draw some inferences from the empirical analysis and speculate as to what this may imply for the roles of the formal and informal financial sectors.
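    As a purely illustrative sketch of the modelling approach (not the study's data, variables, or fitted model), the code below fits a logistic regression of default status on a few hypothetical borrower features.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([
        rng.normal(12, 3, n),       # hypothetical loan amount
        rng.integers(0, 2, n),      # hypothetical prior lending relationship (0/1)
        rng.normal(0.20, 0.05, n),  # hypothetical monthly interest rate
    ])
    # Synthetic default outcomes generated from an assumed relationship
    logit = -2.0 + 0.10 * X[:, 0] - 1.0 * X[:, 1] + 6.0 * X[:, 2]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    model = LogisticRegression().fit(X, y)
    print("coefficients:", model.coef_.round(2), "intercept:", model.intercept_.round(2))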

  15. Development of Alabama traffic factors for use in mechanistic-empirical pavement design.

    DOT National Transportation Integrated Search

    2015-02-01

    The pavement engineering community is moving toward design practices that use mechanistic-empirical (M-E) approaches to the design and analysis of pavement structures. This effort is embodied in the Mechanistic-Empirical Pavement Design Guide (MEPD...

  16. Oxygen isotope corrections for online δ34S analysis

    USGS Publications Warehouse

    Fry, B.; Silva, S.R.; Kendall, C.; Anderson, R.K.

    2002-01-01

    Elemental analyzers have been successfully coupled to stable-isotope-ratio mass spectrometers for online measurements of the δ34S isotopic composition of plants, animals and soils. We found that the online technology for automated δ34S isotopic determinations did not yield reproducible oxygen isotopic compositions in the SO2 produced, and as a result calculated δ34S values were often 1–3‰ too high versus their correct values, particularly for plant and animal samples with high C/S ratio. Here we provide empirical and analytical methods for correcting the S isotope values for oxygen isotope variations, and further detail a new SO2-SiO2 buffering method that minimizes detrimental oxygen isotope variations in SO2.

  17. Correcting C IV-based virial black hole masses

    NASA Astrophysics Data System (ADS)

    Coatman, Liam; Hewett, Paul C.; Banerji, Manda; Richards, Gordon T.; Hennawi, Joseph F.; Prochaska, J. Xavier

    2017-02-01

    The C IV λλ1548,1550 broad emission line is visible in optical spectra to redshifts exceeding z ~ 5. C IV has long been known to exhibit significant displacements to the blue and these 'blueshifts' almost certainly signal the presence of strong outflows. As a consequence, single-epoch virial black hole (BH) mass estimates derived from C IV velocity widths are known to be systematically biased compared to masses from the hydrogen Balmer lines. Using a large sample of 230 high-luminosity (LBol = 10^45.5-10^48 erg s^-1), redshift 1.5 < z < 4.0 quasars with both C IV and Balmer line spectra, we have quantified the bias in C IV BH masses as a function of the C IV blueshift. C IV BH masses are shown to be a factor of 5 larger than the corresponding Balmer-line masses at C IV blueshifts of 3000 km s^-1 and are overestimated by almost an order of magnitude at the most extreme blueshifts, ≳5000 km s^-1. Using the monotonically increasing relationship between the C IV blueshift and the mass ratio BH(C IV)/BH(Hα), we derive an empirical correction to all C IV BH masses. The scatter between the corrected C IV masses and the Balmer masses is 0.24 dex at low C IV blueshifts (~0 km s^-1) and just 0.10 dex at high blueshifts (~3000 km s^-1), compared to 0.40 dex before the correction. The correction depends only on the C IV line properties - i.e. full width at half-maximum and blueshift - and can therefore be applied to all quasars where C IV emission line properties have been measured, enabling the derivation of unbiased virial BH-mass estimates for the majority of high-luminosity, high-redshift, spectroscopically confirmed quasars in the literature.
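    A hedged sketch of how such a blueshift-dependent correction could be applied in practice: the anchor values below restate the magnitudes quoted above (bias ~5 at 3000 km/s and ~10 at the most extreme blueshifts), assume the bias factor approaches unity at zero blueshift (an assumption), and use a simple interpolation that is illustrative rather than the published fitting function.

    import numpy as np

    blueshift_anchor = np.array([0.0, 3000.0, 5000.0])   # km/s
    bias_anchor = np.array([1.0, 5.0, 10.0])              # M_BH(C IV) / M_BH(Halpha)

    def corrected_civ_logmass(log_m_civ, blueshift_kms):
        """Divide a C IV virial mass by the blueshift-dependent bias factor."""
        bias = np.interp(blueshift_kms, blueshift_anchor, bias_anchor)
        return log_m_civ - np.log10(bias)

    print(corrected_civ_logmass(9.8, 3000.0))   # 9.8 - log10(5) ~ 9.1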

  18. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and degrade the piecewise-constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and is updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Unlike existing algorithms, it incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is therefore practically attractive for low-dose CBCT imaging in the clinic.

  19. Apparent multifractality of self-similar Lévy processes

    NASA Astrophysics Data System (ADS)

    Zamparo, Marco

    2017-07-01

    Scaling properties of time series are usually studied in terms of the scaling laws of empirical moments, which are the time-average estimates of moments of the dynamic variable. Nonlinearities in the scaling function of empirical moments are generally regarded as a sign of multifractality in the data. We show that, except for the Brownian motion, this method fails to disclose the correct monofractal nature of self-similar Lévy processes. We prove that for this class of processes it produces apparent multifractality characterised by a piecewise-linear scaling function with two different regimes, which match at the stability index of the considered process. This result is motivated by previous numerical evidence. It is obtained by introducing an appropriate stochastic normalisation which is able to cure empirical moments, without hiding their dependence on time, when the moments they aim to estimate do not exist.

  20. AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjib; Bland-Hawthorn, Joss

    2013-08-20

    An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

  1. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) (uncorrected average of readings) − 3 dB(A) (distance correction factor) + 2 dB(A) (ground surface correction factor) = 87 dB(A) (corrected reading) ...
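    The worked example in this excerpt is simple signed addition of the correction factors to the uncorrected average; a minimal sketch of that arithmetic, with the values taken directly from the excerpt above, is:

    uncorrected_avg_dba = 88.0
    distance_correction_dba = -3.0          # 60 ft microphone distance in the example
    ground_surface_correction_dba = +2.0

    corrected_dba = (uncorrected_avg_dba
                     + distance_correction_dba
                     + ground_surface_correction_dba)
    print(f"Corrected reading: {corrected_dba:.0f} dB(A)")   # 87 dB(A)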

  2. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) (uncorrected average of readings) − 3 dB(A) (distance correction factor) + 2 dB(A) (ground surface correction factor) = 87 dB(A) (corrected reading) ...

  3. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) (uncorrected average of readings) − 3 dB(A) (distance correction factor) + 2 dB(A) (ground surface correction factor) = 87 dB(A) (corrected reading) ...

  4. Semi-empirical and phenomenological instrument functions for the scanning tunneling microscope

    NASA Astrophysics Data System (ADS)

    Feuchtwang, T. E.; Cutler, P. H.; Notea, A.

    1988-08-01

    Recent progress in the development of a convenient algorithm for the determination of a quantitative local density of states (LDOS) of the sample, from data measured in the STM, is reviewed. It is argued that the sample LDOS strikes a good balance between the information content of a surface characteristic and the effort required to obtain it experimentally. Hence, procedures to determine the sample LDOS as directly, and as independently of the tip model, as possible are emphasized. The solution of the STM's "inverse" problem in terms of novel versions of the instrument (or Green) function technique is considered in preference to the well-known, more direct solutions. Two types of instrument functions are considered: approximations of the basic tip-instrument function obtained from the transfer Hamiltonian theory of the STM-STS, and phenomenological instrument functions devised as a systematic scheme for semi-empirical first-order corrections of "ideal" models. The instrument function, in this case, describes the corrections as the response of an independent component of the measuring apparatus inserted between the "ideal" instrument and the measured data. This linear response theory of measurement is reviewed and applied. A procedure for the estimation of the consistency of the model and the systematic errors due to the use of an approximate instrument function is presented. The independence of the instrument function techniques from explicit microscopic models of the tip is noted. The need for semi-empirical, as opposed to strictly empirical or analytical, determination of the instrument function is discussed. The extension of the theory to the scanning tunneling spectrometer is noted, as well as its use in a theory of resolution.

  5. Floristic composition and across-track reflectance gradient in Landsat images over Amazonian forests

    NASA Astrophysics Data System (ADS)

    Muro, Javier; doninck, Jasper Van; Tuomisto, Hanna; Higgins, Mark A.; Moulatlet, Gabriel M.; Ruokolainen, Kalle

    2016-09-01

    Remotely sensed image interpretation or classification of tropical forests can be severely hampered by the effects of the bidirectional reflection distribution function (BRDF). Even for narrow swath sensors like Landsat TM/ETM+, the influence of reflectance anisotropy can be sufficiently strong to introduce a cross-track reflectance gradient. If the BRDF could be assumed to be linear for the limited swath of Landsat, it would be possible to remove this gradient during image preprocessing using a simple empirical method. However, the existence of natural gradients in reflectance caused by spatial variation in floristic composition of the forest can restrict the applicability of such simple corrections. Here we use floristic information over Peruvian and Brazilian Amazonia acquired through field surveys, complemented with information from geological maps, to investigate the interaction of real floristic gradients and the effect of reflectance anisotropy on the observed reflectances in Landsat data. In addition, we test the assumption of linearity of the BRDF for a limited swath width, and whether different primary non-inundated forest types are characterized by different magnitudes of the directional reflectance gradient. Our results show that a linear function is adequate to empirically correct for view angle effects, and that the magnitude of the across-track reflectance gradient is independent of floristic composition in the non-inundated forests we studied. This makes a routine correction of view angle effects possible. However, floristic variation complicates the issue, because different forest types have different mean reflectances. This must be taken into account when deriving the correction function in order to avoid eliminating natural gradients.
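    A minimal sketch of the simple empirical correction that the abstract argues is adequate, assuming reflectance is regressed linearly against across-track pixel column (a proxy for view angle) and the fitted trend is removed while preserving the scene mean; the function and variable names are illustrative, not the authors' code.

    import numpy as np

    def remove_across_track_gradient(reflectance, column):
        """reflectance, column: 1-D arrays of pixel reflectances and column indices."""
        slope, intercept = np.polyfit(column, reflectance, 1)
        trend = slope * column + intercept
        return reflectance - trend + reflectance.mean()

    # Illustrative use on synthetic data with an imposed across-track gradient
    cols = np.arange(0, 6000, dtype=float)
    refl = 0.03 + 5e-5 * cols + np.random.default_rng(1).normal(0, 0.002, cols.size)
    print(remove_across_track_gradient(refl, cols).std())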

  6. Universal relations for range corrections to Efimov features

    DOE PAGES

    Ji, Chen; Braaten, Eric; Phillips, Daniel R.; ...

    2015-09-09

    In a three-body system of identical bosons interacting through a large S-wave scattering length a, there are several sets of features related to the Efimov effect that are characterized by discrete scale invariance. Effective field theory was recently used to derive universal relations between these Efimov features that include the first-order correction due to a nonzero effective range r_s. We reveal a simple pattern in these range corrections that had not been previously identified. The pattern is explained by the renormalization group for the effective field theory, which implies that the Efimov three-body parameter runs logarithmically with the momentum scale at a rate proportional to r_s/a. The running Efimov parameter also explains the empirical observation that range corrections can be largely taken into account by shifting the Efimov parameter by an adjustable parameter divided by a. Furthermore, the accuracy of universal relations that include first-order range corrections is verified by comparing them with various theoretical calculations using models with nonzero range.

  7. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices

    PubMed Central

    Westgate, Philip M.

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator. PMID:27818539

  8. Isotopic fractionation studies of uranium and plutonium using porous ion emitters as thermal ionization mass spectrometry sources

    DOE PAGES

    Baruzzini, Matthew L.; Hall, Howard L.; Spencer, Khalil J.; ...

    2018-04-22

    Investigations of the isotope fractionation behaviors of plutonium and uranium reference standards were conducted employing platinum and rhenium (Pt/Re) porous ion emitter (PIE) sources, a relatively new thermal ionization mass spectrometry (TIMS) ion source strategy. The suitability of commonly employed, empirically developed mass bias correction laws (i.e., the Linear, Power, and Russell's laws) for correcting such isotope ratio data was also determined. Corrected plutonium isotope ratio data, regardless of mass bias correction strategy, were statistically identical to the certificate values; however, the isotope fractionation behavior of plutonium under the adopted experimental conditions was determined to be best described by the Power law. Finally, the fractionation behavior of uranium, using the analytical conditions described herein, is also most suitably modeled using the Power law, though Russell's and the Linear law for mass bias correction rendered results that were identical, within uncertainty, to the certificate value.
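    For orientation, the three correction laws named above are commonly written in the forms sketched below (a generic textbook parameterization, not necessarily the exact convention used in this study); the example masses and ratios are hypothetical placeholders.

    import math

    def linear_law(r_meas, m_num, m_den, eps):
        return r_meas * (1.0 + eps * (m_num - m_den))

    def power_law(r_meas, m_num, m_den, eps):
        return r_meas * (1.0 + eps) ** (m_num - m_den)

    def russell_law(r_meas, m_num, m_den, beta):
        return r_meas * (m_num / m_den) ** beta

    # Example: derive beta from a certified ratio measured in the same run,
    # then apply it to another measured ratio (all values hypothetical).
    r_cert, r_meas_cal = 0.0072623, 0.0072500                # e.g. a 235U/238U pair
    beta = math.log(r_cert / r_meas_cal) / math.log(235.043928 / 238.050788)
    print(russell_law(0.0072550, 235.043928, 238.050788, beta))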

  9. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices.

    PubMed

    Westgate, Philip M

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator.

  10. Isotopic fractionation studies of uranium and plutonium using porous ion emitters as thermal ionization mass spectrometry sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baruzzini, Matthew L.; Hall, Howard L.; Spencer, Khalil J.

    Investigations of the isotope fractionation behaviors of plutonium and uranium reference standards were conducted employing platinum and rhenium (Pt/Re) porous ion emitter (PIE) sources, a relatively new thermal ionization mass spectrometry (TIMS) ion source strategy. The suitability of commonly employed, empirically developed mass bias correction laws (i.e., the Linear, Power, and Russell's laws) for correcting such isotope ratio data was also determined. Corrected plutonium isotope ratio data, regardless of mass bias correction strategy, were statistically identical to the certificate values; however, the isotope fractionation behavior of plutonium under the adopted experimental conditions was determined to be best described by the Power law. Finally, the fractionation behavior of uranium, using the analytical conditions described herein, is also most suitably modeled using the Power law, though Russell's and the Linear law for mass bias correction rendered results that were identical, within uncertainty, to the certificate value.

  11. Nonlinear bulging factor based on R-curve data

    NASA Technical Reports Server (NTRS)

    Jeong, David Y.; Tong, Pin

    1994-01-01

    In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.

  12. Uncertainties in scaling factors for ab initio vibrational zero-point energies

    NASA Astrophysics Data System (ADS)

    Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger

    2009-03-01

    Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.
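    A minimal sketch of the propagation implied here, assuming the scaled ZPE is simply c × ZPE_harm so that the standard uncertainty contributed by the scaling factor is ZPE_harm × u(c); the harmonic ZPE value is a hypothetical input, while the factor and its uncertainty are those quoted above for B3LYP/6-31G(d).

    c, u_c = 0.9757, 0.0224           # scaling factor and its standard uncertainty
    zpe_harm_kj_mol = 120.0           # hypothetical computed harmonic ZPE

    zpe_scaled = c * zpe_harm_kj_mol
    u_zpe = zpe_harm_kj_mol * u_c     # neglects uncertainty in the harmonic ZPE itself
    print(f"ZPE = {zpe_scaled:.1f} +/- {u_zpe:.1f} kJ/mol")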

  13. Fast Computation of Solvation Free Energies with Molecular Density Functional Theory: Thermodynamic-Ensemble Partial Molar Volume Corrections.

    PubMed

    Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2014-06-05

    Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecular solvation free energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality as those obtained from molecular dynamics free-energy perturbation simulations, with a computer cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial molar volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.

  14. On the Sophistication of Naïve Empirical Reasoning: Factors Influencing Mathematicians' Persuasion Ratings of Empirical Arguments

    ERIC Educational Resources Information Center

    Weber, Keith

    2013-01-01

    This paper presents the results of an experiment in which mathematicians were asked to rate how persuasive they found two empirical arguments. There were three key results from this study: (a) Participants judged an empirical argument as more persuasive if it verified that integers possessed an infrequent property than if it verified that integers…

  15. Empirical Bases for a Prekindergarten Curriculum for Disadvantaged Children.

    ERIC Educational Resources Information Center

    Di Lorenzo, Louis T.; And Others

    This project was undertaken to establish a basis for a compensatory curriculum for disadvantaged preschool children by using existing empirical data to identify factors that predict success in reading comprehension and that differentiate the disadvantaged from the nondisadvantaged. The project focused on factors related to success in learning to…

  16. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    PubMed

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  17. Asymptotics of empirical eigenstructure for high dimensional spiked covariance

    PubMed Central

    Wang, Weichen

    2017-01-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies. PMID:28835726

  18. Effects of miso- and mesoscale obstructions on PAM winds obtained during project NIMROD. [Portable Automated Mesonet

    NASA Technical Reports Server (NTRS)

    Fujita, T. T.; Wakimoto, R. M.

    1982-01-01

    Data from 27 PAM (Portable Automated Mesonet) stations, operational as a phase of project NIMROD (Northern Illinois Meteorological Research on Downburst), are presented. It was found that PAM-measured winds are influenced by the mesoscale obstruction of the Chicago metropolitan area, as well as by the misoscale obstruction of identified trees and buildings. The mesoscale obstruction was estimated within the range of near zero to 50%, increasing toward the city limits, while the misoscale obstruction was estimated as being as large as 58% near obstructing trees, which were empirically found to cause a wind speed deficit out to 50-80 times their height. Despite a statistical analysis based on one million PAM winds, wind speed and stability transmission factors could not be accurately calculated; thus, in order to calculate the airflow free from obstacles, PAM-measured winds must be corrected.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyewon; Cheong, S.W.; Kim, Bog G., E-mail: boggikim@pusan.ac.kr

    We have studied the properties of SnO₆ octahedra-containing perovskites and their derived structures using ab initio calculations with different density functionals. In order to predict the correct band gap of the materials, we have used the B3LYP hybrid density functional, and the B3LYP results were compared with those obtained using the local density approximation and the generalized gradient approximation. The calculations have been conducted for the orthorhombic ground state of the SnO₆-containing perovskites. We also have extended the hybrid density functional calculations to the ASnO₃/A'SnO₃ system with different cation orderings. We propose an empirical relationship between the tolerance factor and the band gap of SnO₆-containing oxide materials based on first-principles calculations. - Graphical abstract: (a) Structure of ASnO₃ in the orthorhombic ground state. The green ball is the A (Ba, Sr, Ca) cation and the small (red) ball on the edge is oxygen. SnO₆ octahedra are plotted as polyhedra. (b) Band gap of ASnO₃ as a function of the tolerance factor for different density functionals. The experimental values of the band gap are marked as green pentagons. (c) ASnO₃/A'SnO₃ superlattices with two types of cation arrangement: [001] layered structure and [111] rocksalt structure, respectively. (d) B3LYP hybrid functional band gaps of ASnO₃, [001] ordered superlattices, and [111] ordered superlattices of ASnO₃/A'SnO₃ as a function of the effective tolerance factor. Note the empirical linear relationship between the band gap and the effective tolerance factor. - Highlights: • We report hybrid functional band gap calculations of ASnO₃ and ASnO₃/A'SnO₃. • The band gap of ASnO₃ using the B3LYP functional reproduces the experimental value. • We propose a linear relationship between the tolerance factor and the band gap.
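    The empirical relationship above is built on the Goldschmidt tolerance factor, t = (r_A + r_O) / (sqrt(2) * (r_B + r_O)). The sketch below computes t for the three A-site cations and fits a straight line of band gap versus t; the ionic radii are approximate Shannon values and the band gaps are placeholders, not numbers from the paper.

```python
# Sketch: Goldschmidt tolerance factor and a linear fit of band gap vs. t,
# illustrating the kind of empirical relation proposed in the record.
import numpy as np

R_O, R_SN = 1.40, 0.69                            # O2- (VI), Sn4+ (VI), approximate Shannon radii (Angstrom)
radii_A = {"Ba": 1.61, "Sr": 1.44, "Ca": 1.34}    # 12-coordinate A-site cations, approximate

def tolerance_factor(r_A, r_B=R_SN, r_O=R_O):
    return (r_A + r_O) / (np.sqrt(2.0) * (r_B + r_O))

t = np.array([tolerance_factor(radii_A[a]) for a in ("Ca", "Sr", "Ba")])
gaps = np.array([4.4, 4.1, 3.1])                  # placeholder band gaps (eV), illustrative only

slope, intercept = np.polyfit(t, gaps, 1)         # linear relation: gap ~ slope * t + intercept
print(f"t = {np.round(t, 3)}, fitted gap = {slope:.2f} * t + {intercept:.2f} eV")
```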

  20. Panel data models with spatial correlation: Estimation theory and an empirical investigation of the United States wholesale gasoline industry

    NASA Astrophysics Data System (ADS)

    Kapoor, Mudit

    The first part of my dissertation considers the estimation of a panel data model with error components that are both spatially and time-wise correlated. The dissertation combines widely used model for spatial correlation (Cliff and Ord (1973, 1981)) with the classical error component panel data model. I introduce generalizations of the generalized moments (GM) procedure suggested in Kelejian and Prucha (1999) for estimating the spatial autoregressive parameter in case of a single cross section. I then use those estimators to define feasible generalized least squares (GLS) procedures for the regression parameters. I give formal large sample results concerning the consistency of the proposed GM procedures, as well as the consistency and asymptotic normality of the proposed feasible GLS procedures. The new estimators remain computationally feasible even in large samples. The second part of my dissertation employs a Cliff-Ord-type model to empirically estimate the nature and extent of price competition in the US wholesale gasoline industry. I use data on average weekly wholesale gasoline price for 289 terminals (distribution facilities) in the US. Data on demand factors, cost factors and market structure that affect price are also used. I consider two time periods, a high demand period (August 1999) and a low demand period (January 2000). I find a high level of competition in prices between neighboring terminals. In particular, price in one terminal is significantly and positively correlated to the price of its neighboring terminal. Moreover, I find this to be much higher during the low demand period, as compared to the high demand period. In contrast to previous work, I include for each terminal the characteristics of the marginal customer by controlling for demand factors in the neighboring location. I find these demand factors to be important during period of high demand and insignificant during the low demand period. Furthermore, I have also considered spatial correlation in unobserved factors that affect price. I find it to be high and significant only during the low demand period. Not correcting for it leads to incorrect inferences regarding exogenous explanatory variables.

  1. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Tanny, S; Parsai, E

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. The percent change of the correction factors for the chambers is presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6 FFF and 6 MV beams. The A16 chamber demonstrates 5% and 3% differences in 6 FFF and 6 MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific corrections may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference conditions.
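    The comparison described above reduces to forming per-field factors as the ratio of each detector's reading to the PSD reading and then taking percent changes between machines. A minimal sketch follows; the readings are illustrative placeholders, not the measured data.

```python
# Sketch of the machine-to-machine comparison: per-field factors defined (as in the
# record) as detector response divided by PSD response, then percent change of
# those factors relative to the primary machine. Numbers are placeholders.
import numpy as np

fields_cm = [3.0, 1.4, 0.6]                        # jaw-defined field side lengths (cm)
psd     = {"m1": np.array([1.000, 0.940, 0.690]),
           "m2": np.array([1.000, 0.938, 0.695])}
chamber = {"m1": np.array([1.000, 0.930, 0.610]),
           "m2": np.array([1.000, 0.929, 0.655])}

def field_factors(detector, reference):
    """Per-field factor: detector reading / PSD reading."""
    return detector / reference

k_m1 = field_factors(chamber["m1"], psd["m1"])
k_m2 = field_factors(chamber["m2"], psd["m2"])
percent_change = 100.0 * (k_m2 - k_m1) / k_m1      # relative to the primary machine

for f, dk in zip(fields_cm, percent_change):
    print(f"{f:.1f} x {f:.1f} cm^2 field: {dk:+.1f}% change in correction factor")
```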

  2. New CNO Elemental Abundances in Planetary Nebulae from Spatially Resolved UV/Optical Emission Lines

    NASA Astrophysics Data System (ADS)

    Shaw, Richard A.; Kwitter, Karen B.; Henry, Richard B. C.; Dufour, Reginald J.; Balick, Bruce; Corradi, Romano

    2015-01-01

    We obtained HST/STIS long-slit spectra spanning 0.11 to 1.1 μm of co-spatial regions in 10 Galactic planetary nebulae (Dufour et al., this conference), of which six present substantial changes in ionization with position. Under the assumption that elemental abundances are constant within these nebulae (but exterior to the wind of the central star), these spectra present a unique opportunity to examine the applicability of common ionization correction factors (ICFs) for deriving abundances. ICFs are the most common direct method in abundance analysis for accounting for unobserved or undetected ionization stages in nebulae, yet most ICF recipes have not been rigorously examined through modeling or empirically tested through observation. In this preliminary study, we focused on the astrophysically important abundances of C and N, for which strong ionic transitions are scarce in the optical band but plentiful in the satellite UV. We derived physical diagnostics (extinction, Te, Ne) and ionic abundances for the species of interest at various positions along the slit for each PN. We compared the elemental abundances derived from direct summation of the ionic abundances in the UV and optical to those derived using only optical emission but corrected using standard ICFs. We found that the abundances were usually in good agreement, but there were significant exceptions. We also found that setting upper limits on emission from undetected ions was sometimes helpful in constraining the correction factors. Work is underway to construct photoionization models of these nebulae (see Miller, et al., this conference) to address the question of why ICFs are sometimes inaccurate, and to explore other ICF recipes for those cases. Support for Program number GO-12600 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.

  3. Factors associated with suitability of empiric antibiotic therapy in hospitalized patients with bloodstream infections.

    PubMed

    Grossman, Chagai; Keller, Nathan; Bornstein, Gil; Ben-Zvi, Ilan; Koren-Morag, Nira; Rahav, Galia

    2017-06-01

    Bacteremia is associated with high morbidity and mortality rates. Initiation of inadequate empiric antibiotic therapy is associated with a worse outcome. The aim of this study was to establish the prevalence of, and the factors associated with, inappropriate empiric antibiotic therapy in patients hospitalized with bacteremia. A cross-sectional study was conducted during January 2010-December 2011 at the medical wards of the Chaim Sheba Medical Center, Israel. The records of all patients with bacteremia were reviewed. Clinical and laboratory characteristics, bacteremic pathogens and antimicrobial agents were retrieved from the medical records. Factors associated with appropriateness of empiric antibiotic therapy were assessed. A total of 681 eligible adults were included in the study. Antibiotic therapy was found to be inappropriate in 138 (20.2%) patients (95% C.I. 17.2-23.2). The rate of appropriateness was not related to the type of antibiotic regimen or the type of bacteria. Patients with healthcare-associated infections were more likely to be administered inappropriate antibiotic therapy. Patients with primary bloodstream infections were also more likely to be administered inappropriate antibiotic therapy. Empiric combination therapy was more likely to be appropriate than monotherapy, except for aminoglycoside-based combinations. Combination empiric antibiotic therapy should be considered in patients with healthcare-associated infections and in those with primary bloodstream infections.

  4. Amended Results for Hard X-Ray Emission by Non-thermal Thick Target Recombination in Solar Flares

    NASA Astrophysics Data System (ADS)

    Reep, J. W.; Brown, J. C.

    2016-06-01

    Brown & Mallik and the corresponding corrigendum Brown et al. presented expressions for non-thermal recombination (NTR) in the collisionally thin- and thick-target regimes, claiming that the process could account for a substantial part of the hard X-ray continuum in solar flares usually attributed entirely to thermal and non-thermal bremsstrahlung (NTB). However, we have found the thick-target expression to become unphysical for low cut-offs in the injected electron energy spectrum. We trace this to an error in the derivation, derive a corrected version that is real-valued and continuous for all photon energies and cut-offs, and show that, for thick targets, Brown et al. overestimated NTR emission at small photon energies. The regime of small cut-offs and large spectral indices involves large (reducing) correction factors, but in some other thick-target parameter regimes NTR/NTB can still be of the order of unity. We comment on the importance of these results to flare and microflare modeling and spectral fitting. An empirical fit to our results shows that the peak NTR contribution comprises over half of the hard X-ray signal if δ ≳ 6 (E_0c / 4 keV)^0.4.

  5. Brain correlates of the intrinsic subjective cost of effort in sedentary volunteers.

    PubMed

    Bernacer, J; Martinez-Valbuena, I; Martinez, M; Pujol, N; Luis, E; Ramirez-Castillo, D; Pastor, M A

    2016-01-01

    One key aspect of motivation is the ability of agents to overcome excessive weighting of intrinsic subjective costs. This contribution aims to analyze the subjective cost of effort and assess its neural correlates in sedentary volunteers. We recruited a sample of 57 subjects who underwent a decision-making task using a prospective, moderate, and sustained physical effort as devaluating factor. Effort discounting followed a hyperbolic function, and individual discounting constants correlated with an indicator of sedentary lifestyle (global physical activity questionnaire; R=-0.302, P=0.033). A subsample of 24 sedentary volunteers received a functional magnetic resonance imaging scan while performing a similar effort-discounting task. BOLD signal of a cluster located in the dorsomedial prefrontal cortex correlated with the subjective value of the pair of options under consideration (Z>2.3, P<0.05; cluster corrected for multiple comparisons for the whole brain). Furthermore, effort-related discounting of reward correlated with the signal of a cluster in the ventrolateral prefrontal cortex (Z>2.3, P<0.05; small volume cluster corrected for a region of interest including the ventral prefrontal cortex and striatum). This study offers empirical data about the intrinsic subjective cost of effort and its neural correlates in sedentary individuals. © 2016 Elsevier B.V. All rights reserved.

  6. Highly Integrated Quality Assurance – An Empirical Case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drake Kirkham; Amy Powell; Lucas Rich

    2011-02-01

    Highly Integrated Quality Assurance – An Empirical Case Drake Kirkham1, Amy Powell2, Lucas Rich3 1Quality Manager, Radioisotope Power Systems (RPS) Program, Idaho National Laboratory, P.O. Box 1625 M/S 6122, Idaho Falls, ID 83415-6122 2Quality Engineer, RPS Program, Idaho National Laboratory 3Quality Engineer, RPS Program, Idaho National Laboratory Contact: Voice: (208) 533-7550 Email: Drake.Kirkham@inl.gov Abstract. The Radioisotope Power Systems Program of the Idaho National Laboratory makes an empirical case for a highly integrated Quality Assurance function pertaining to the preparation, assembly, testing, storage and transportation of 238Pu fueled radioisotope thermoelectric generators. Case data represents multiple campaigns including the Pluto/New Horizons mission, the Mars Science Laboratory mission in progress, and other related projects. Traditional Quality Assurance models would attempt to reduce cost by minimizing the role of dedicated Quality Assurance personnel in favor of either functional tasking or peer-based implementations. Highly integrated Quality Assurance adds value by placing trained quality inspectors on the production floor side-by-side with nuclear facility operators to enhance team dynamics, reduce inspection wait time, and provide for immediate, independent feedback. Value is also added by maintaining dedicated Quality Engineers to provide for rapid identification and resolution of corrective action, enhanced and expedited supply chain interfaces, improved bonded storage capabilities, and technical resources for requirements management including data package development and Certificates of Inspection. A broad examination of cost-benefit indicates highly integrated Quality Assurance can reduce cost through the mitigation of risk and reducing administrative burden thereby allowing engineers to be engineers, nuclear operators to be nuclear operators, and the cross-functional team to operate more efficiently. Applicability of this case extends to any high-value, long-term project where traceability and accountability are determining factors.

  7. Frequency of alcohol consumption in humans; the role of metabotropic glutamate receptors and downstream signaling pathways.

    PubMed

    Meyers, J L; Salling, M C; Almli, L M; Ratanatharathorn, A; Uddin, M; Galea, S; Wildman, D E; Aiello, A E; Bradley, B; Ressler, K; Koenen, K C

    2015-06-23

    Rodent models implicate metabotropic glutamate receptors (mGluRs) and downstream signaling pathways in addictive behaviors through metaplasticity. One way mGluRs can influence synaptic plasticity is by regulating the local translation of AMPA receptor trafficking proteins via eukaryotic elongation factor 2 (eEF2). However, genetic variation in this pathway has not been examined with human alcohol use phenotypes. Among a sample of adults living in Detroit, Michigan (Detroit Neighborhood Health Study; n = 788; 83% African American), 206 genetic variants across the mGluR-eEF2-AMPAR pathway (including GRM1, GRM5, HOMER1, HOMER2, EEF2K, MTOR, EIF4E, EEF2, CAMK2A, ARC, GRIA1 and GRIA4) were found to predict number of drinking days per month (corrected P-value < 0.01) when considered as a set (set-based linear regression conducted in PLINK). In addition, a CpG site located in the 3'-untranslated region on the north shore of EEF2 (cg12255298) was hypermethylated in those who drank more frequently (P < 0.05). Importantly, the association between several genetic variants within the mGluR-eEF2-AMPAR pathway and alcohol use behavior (i.e., consumption and alcohol-related problems) replicated in the Grady Trauma Project (GTP), an independent sample of adults living in Atlanta, Georgia (n = 1034; 95% African American), including individual variants in GRM1, GRM5, EEF2, MTOR, GRIA1, GRIA4 and HOMER2 (P < 0.05). Gene-based analyses conducted in the GTP indicated that GRM1 (empirical P < 0.05) and EEF2 (empirical P < 0.01) withstood multiple test corrections and predicted increased alcohol consumption and related problems. In conclusion, insights from rodent studies enabled the identification of novel human alcohol candidate genes within the mGluR-eEF2-AMPAR pathway.

  8. Mad cows, terrorism and junk food: should public policy reflect perceived or objective risks?

    PubMed

    Johansson-Stenman, Olof

    2008-03-01

    Empirical evidence suggests that people's risk-perceptions are often systematically biased. This paper develops a simple framework to analyse public policy when this is the case. Expected utility (well-being) is shown to depend on both objective and perceived risks (beliefs). The latter are important because of the fear associated with the risk and as a basis for corrective taxation and second-best adjustments. Optimality rules for public provision of risk-reducing investments, "internality-correcting" taxation (e.g. fat taxes) and provision of costly information to reduce people's risk-perception bias are presented.

  9. Revisiting Synchronous Computer-Mediated Communication: Learner Perception and the Meaning of Corrective Feedback

    ERIC Educational Resources Information Center

    Kim, Hye Yeong

    2014-01-01

    Effectively exploring the efficacy of synchronous computer-mediated communication (SCMC) for pedagogical purposes can be achieved through the careful investigation of potentially beneficial, inherent attributes of SCMC. This study provides empirical evidence for the capacity of task-based SCMC to draw learner attention to linguistic forms by…

  10. Anisotropy and temperature dependence of structural, thermodynamic, and elastic properties of crystalline cellulose Iβ: a first-principles investigation

    Treesearch

    ShunLi Shang; Louis G. Hector Jr.; Paul Saxe; Zi-Kui Liu; Robert J. Moon; Pablo D. Zavattieri

    2014-01-01

    Anisotropy and temperature dependence of structural, thermodynamic and elastic properties of crystalline cellulose Iβ were computed with first-principles density functional theory (DFT) and a semi-empirical correction for van der Waals interactions. Specifically, we report the computed temperature variation (up to 500...

  11. Judgments of Widely Held Beliefs about Psychological Phenomena among South African Postgraduate Psychology Students

    ERIC Educational Resources Information Center

    Kagee, A.; Harper, M.; Spies, G.

    2008-01-01

    Lay understandings of human cognition, affect, and behaviour often diverge from the findings of scientific investigations. The present study examined South African fourth year psychology students' judgments about the factual correctness of statements of psychological phenomena that have been demonstrated to be incorrect by empirical research.…

  12. Bootstrap Estimation of Sample Statistic Bias in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Fan, Xitao

    This study empirically investigated bootstrap bias estimation in the area of structural equation modeling (SEM). Three correctly specified SEM models were used under four different sample size conditions. Monte Carlo experiments were carried out to generate the criteria against which bootstrap bias estimation should be judged. For SEM fit indices,…

  13. Inmate Sexual Aggression: Some Evolving Propositions, Empirical Findings, and Mitigating Counter-Forces.

    ERIC Educational Resources Information Center

    Nacci, Peter L.; Kane, Thomas R.

    1984-01-01

    Updates the US Bureau of Prisons' investigation of inmate sexual aggression, and contrasts findings from the federal study with other reports. Discusses the Federal Bureau of Prisons' policy on homosexual activity and family visitation programs and describes some processes in corrections which will make prisons generally safer. (JAC)

  14. Strange Bedfellows? Reaffirming Rehabilitation and Prison Privatization

    ERIC Educational Resources Information Center

    Wright, Kevin A.

    2010-01-01

    Private prisons are here to stay irrespective of empirical findings for or against their existence in the corrections industry. It is necessary, therefore, to step back and consider them on a broader level to assess how they can benefit current penological practice. It will be argued that prison privatization creates an opportunity to reassess the…

  15. Inmate Recidivism as a Measure of Private Prison Performance

    ERIC Educational Resources Information Center

    Spivak, Andrew L.; Sharp, Susan F.

    2008-01-01

    The growth of the private corrections industry has elicited interest in the comparative performance of state and private prisons. One way to measure the service quality of private prisons is to examine inmates' postrelease performance. Current empirical evidence is limited to four studies, all conducted in Florida. This analysis replicates and…

  16. Publication Bias in Research Synthesis: Sensitivity Analysis Using A Priori Weight Functions

    ERIC Educational Resources Information Center

    Vevea, Jack L.; Woods, Carol M.

    2005-01-01

    Publication bias, sometimes known as the "file-drawer problem" or "funnel-plot asymmetry," is common in empirical research. The authors review the implications of publication bias for quantitative research synthesis (meta-analysis) and describe existing techniques for detecting and correcting it. A new approach is proposed that is suitable for…

  17. Assessing Static and Dynamic Influences on Inmate Violence Levels

    ERIC Educational Resources Information Center

    Steiner, Benjamin

    2009-01-01

    Inmate misconduct creates problems for other inmates as well as correctional staff. Most empirical assessments of the correlates of inmate misconduct have been conducted at the individual level; however, a facility's level of misconduct may be of equal importance to prison management and state officials because these numbers can reflect order, or…

  18. Estimation of an Occupational Choice Model when Occupations Are Misclassified

    ERIC Educational Resources Information Center

    Sullivan, Paul

    2009-01-01

    This paper develops an empirical occupational choice model that corrects for misclassification in occupational choices and measurement error in occupation-specific work experience. The model is used to estimate the extent of measurement error in occupation data and quantify the bias that results from ignoring measurement error in occupation codes…

  19. Designing EEG Neurofeedback Procedures to Enhance Open-Ended versus Closed-Ended Creative Potentials

    ERIC Educational Resources Information Center

    Lin, Wei-Lun; Shih, Yi-Ling

    2016-01-01

    Recent empirical evidence demonstrated that open-ended creativity (which refers to creativity measures that require various and numerous responses, such as divergent thinking) correlated with alpha brain wave activation, whereas closed-ended creativity (which refers to creativity measures that ask for one final correct answer, such as insight…

  20. Mindfulness as an organizational capability: Evidence from wildland firefighting

    Treesearch

    Michelle Barton; Kathleen Sutcliffe

    2008-01-01

    Mindful organizing has been proposed as an adaptive form for unpredictable, unknowable environments. Mindfulness induces a rich awareness of details and facilitates the discovery and correction of ill-structured contingencies so that adaptations can be made as action unfolds. Although these ideas are appealing, empirical studies examining mindfulness and its effects...

  1. Artistic Praxis and the Neoliberalization of the Educational Space

    ERIC Educational Resources Information Center

    Gielen, Pascal

    2013-01-01

    Referring to the work of Richard Sennett, this article puts forth the proposition that art production is possible only when there is a correct relation between theory and artistic practice. An effective artistic praxis can only be realized by incorporating theory in artistic practices. Based on empirical research, the author elaborates on the…

  2. Some Empirical Evidence for Latent Trait Model Selection.

    ERIC Educational Resources Information Center

    Hutten, Leah R.

    The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…

  3. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors (kQclin,Qmsr) have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of kQclin,Qmsr to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work present important conclusions for the use of detector-specific beam correction factors (kQclin,Qmsr) in a treatment planning system. The use of kQclin,Qmsr for total scatter factors has an important impact on monitor unit calculation. On the contrary, the use of kQclin,Qmsr for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, this technique can be applied to other treatment planning systems, detectors, and correction factors.

  4. An alternative empirical likelihood method in missing response problems and causal inference.

    PubMed

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are a common problem in medical, social, and economic studies. When responses are missing at random, a complete-case data analysis may result in biases. A popular debiasing method is inverse probability weighting, proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of average treatment effects in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
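    For context, the augmented inverse probability weighting (AIPW) estimator that the record uses as its reference point can be written in a few lines. The sketch below estimates the mean of a partially missing response and is doubly robust (consistent if either the propensity model or the outcome regression is right); it is not the paper's empirical-likelihood estimator, and the models chosen here (logistic and linear regression) are illustrative assumptions.

```python
# Sketch of the AIPW estimator of a response mean under data missing at random.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_mean(X, y, observed):
    """X: covariates (n, d); y: outcomes with np.nan where missing; observed: 0/1 array."""
    # Propensity model: P(observed = 1 | X)
    ps = LogisticRegression().fit(X, observed).predict_proba(X)[:, 1]
    # Outcome regression fitted on complete cases, predicted for everyone
    reg = LinearRegression().fit(X[observed == 1], y[observed == 1])
    m_hat = reg.predict(X)
    y_filled = np.where(observed == 1, y, 0.0)
    # AIPW estimating equation for the mean of y
    return np.mean(observed * y_filled / ps - (observed - ps) / ps * m_hat)

# Toy usage with simulated missingness
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(size=500)
observed = (rng.uniform(size=500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
print(aipw_mean(X, np.where(observed == 1, y, np.nan), observed))
```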

  5. Empirical Corrections for MISR Calibration Temporal Trends, Point-Spread Function, Flat-Fielding, and Ghosting

    NASA Astrophysics Data System (ADS)

    Limbacher, J.; Kahn, R. A.

    2015-12-01

    MISR aerosol optical depth retrievals are fairly robust to small radiometric calibration artifacts, due to the multi-angle observations. However, even small errors in the MISR calibration, especially structured artifacts in the imagery, have a disproportionate effect on the retrieval of aerosol properties from these data. Using MODIS, POLDER-3, AERONET, MAN, and MISR lunar images, we diagnose and correct various calibration and radiometric artifacts found in the MISR radiance (Level 1) data, using empirical image analysis. The calibration artifacts include temporal trends in MISR top-of-atmosphere reflectance at relatively stable desert sites and flat-fielding artifacts detected by comparison to MODIS over bright, low-contrast scenes. The radiometric artifacts include ghosting (as compared to MODIS, POLDER-3, and forward model results) and point-spread function mischaracterization (using the MISR lunar data and MODIS). We minimize the artifacts to the extent possible by parametrically modeling the artifacts and then removing them from the radiance (reflectance) data. Validation is performed using non-training scenes (reflectance comparison), and also by using the MISR Research Aerosol retrieval algorithm results compared to MAN and AERONET.

  6. Reliable prediction of three-body intermolecular interactions using dispersion-corrected second-order Møller-Plesset perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yuanhang; Beran, Gregory J. O., E-mail: gregory.beran@ucr.edu

    2015-07-28

    Three-body and higher intermolecular interactions can play an important role in molecular condensed phases. Recent benchmark calculations found problematic behavior for many widely used density functional approximations in treating 3-body intermolecular interactions. Here, we demonstrate that the combination of second-order Møller-Plesset (MP2) perturbation theory plus short-range damped Axilrod-Teller-Muto (ATM) dispersion accurately describes 3-body interactions with reasonable computational cost. The empirical damping function used in the ATM dispersion term compensates both for the absence of higher-order dispersion contributions beyond the triple-dipole ATM term and for non-additive short-range exchange terms which arise in third-order perturbation theory and beyond. Empirical damping enables this simple model to outperform a non-expanded coupled Kohn-Sham dispersion correction for 3-body intermolecular dispersion. The MP2 plus ATM dispersion model approaches the accuracy of O(N⁶) methods like MP2.5 or even spin-component-scaled coupled cluster models for 3-body intermolecular interactions with only O(N⁵) computational cost.
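    The triple-dipole ATM term mentioned above has a simple closed form, E = C9 (1 + 3 cos a cos b cos c) / (r_ab r_bc r_ca)^3, with a, b, c the interior angles of the triangle of interacting sites. The sketch below evaluates it with a generic Fermi-type damping switch; the specific short-range damping function and C9 coefficients used in the paper are not reproduced, so the parameters here are assumptions.

```python
# Sketch of a damped Axilrod-Teller-Muto (ATM) triple-dipole dispersion term for one
# site triple. The Fermi-type damping and its parameters are illustrative assumptions.
import numpy as np

def atm_energy(r_a, r_b, r_c, c9, r_damp=4.0, steepness=6.0):
    """Damped ATM three-body dispersion energy for sites at positions r_a, r_b, r_c.

    c9        : triple-dipole dispersion coefficient for this triple
    r_damp    : distance scale of the damping (same units as the coordinates); assumed
    steepness : dimensionless steepness of the Fermi-type damping; assumed
    """
    rab = np.linalg.norm(r_b - r_a)
    rbc = np.linalg.norm(r_c - r_b)
    rca = np.linalg.norm(r_a - r_c)
    # Interior angles of the triangle via the law of cosines
    cos_a = (rab**2 + rca**2 - rbc**2) / (2 * rab * rca)
    cos_b = (rab**2 + rbc**2 - rca**2) / (2 * rab * rbc)
    cos_c = (rbc**2 + rca**2 - rab**2) / (2 * rbc * rca)
    e_atm = c9 * (1.0 + 3.0 * cos_a * cos_b * cos_c) / (rab * rbc * rca) ** 3
    # Generic Fermi-type damping based on the geometric-mean separation
    r_mean = (rab * rbc * rca) ** (1.0 / 3.0)
    damping = 1.0 / (1.0 + np.exp(-steepness * (r_mean / r_damp - 1.0)))
    return damping * e_atm

# Toy example: an equilateral triple with 4-unit sides and an arbitrary C9
print(atm_energy(np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0]),
                 np.array([2.0, 2.0 * np.sqrt(3.0), 0.0]), c9=100.0))
```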

  7. Study of the total reaction cross section via QMD

    NASA Astrophysics Data System (ADS)

    Yang, Lin-Meng; Guo, Wen-Jun; Zhang, Fan; Ni, Sheng

    2013-10-01

    This paper presents a new empirical formula to calculate the average nucleon-nucleon (N-N) collision number for the total reaction cross sections (σR). Based on the initial average N-N collision number calculated by quantum molecular dynamics (QMD), quantum and Coulomb corrections are taken into account within it. The average N-N collision number is calculated by this empirical formula, and the total reaction cross sections are obtained within the framework of the Glauber theory. σR of 23Al+12C, 24Al+12C, 25Al+12C, 26Al+12C and 27Al+12C are calculated in the low-energy range. We also calculate σR of 27Al+12C at different incident energies. The calculated σR are compared with the experimental data and with the results of Glauber theory, including the σR of both spherical and deformed nuclei. It is seen that the calculated σR are larger than the σR of spherical nuclei and smaller than those of deformed nuclei, while the results agree well with the experimental data in the low-energy range.

  8. Species delimitation using Bayes factors: simulations and application to the Sceloporus scalaris species group (Squamata: Phrynosomatidae).

    PubMed

    Grummer, Jared A; Bryson, Robert W; Reeder, Tod W

    2014-03-01

    Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
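    Once the marginal likelihoods of competing delimitation models have been estimated (by path sampling or stepping-stone analysis, as above), the Bayes factor comparison itself is straightforward. A minimal sketch follows; the log marginal likelihoods and model names are hypothetical placeholders, and the support thresholds follow the commonly cited Kass and Raftery scale for 2 ln BF.

```python
# Minimal sketch: comparing two species-delimitation models with a Bayes factor
# computed from log marginal likelihoods (e.g., stepping-stone estimates).
log_marginal = {                      # hypothetical stepping-stone log marginal likelihoods
    "lump_two_lineages":   -10234.7,
    "split_three_lineages": -10221.3,
}

def two_ln_bf(model_a, model_b):
    """2 * ln Bayes factor in favour of model_a over model_b."""
    return 2.0 * (log_marginal[model_a] - log_marginal[model_b])

score = two_ln_bf("split_three_lineages", "lump_two_lineages")
if score > 10:
    strength = "very strong"
elif score > 6:
    strength = "strong"
elif score > 2:
    strength = "positive"
else:
    strength = "weak / inconclusive"
print(f"2 ln BF = {score:.1f} ({strength} support for the split model)")
```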

  9. Confirmatory factor analysis of the Child Oral Health Impact Profile (Korean version).

    PubMed

    Cho, Young Il; Lee, Soonmook; Patton, Lauren L; Kim, Hae-Young

    2016-04-01

    Empirical support for the factor structure of the Child Oral Health Impact Profile (COHIP) has not been fully established. The purposes of this study were to evaluate the factor structure of the Korean version of the COHIP (COHIP-K) empirically using confirmatory factor analysis (CFA) based on the theoretical framework and then to assess whether any of the factors in the structure could be grouped into a simpler single second-order factor. Data were collected through self-reported COHIP-K responses from a representative community sample of 2,236 Korean children, 8-15 yr of age. Because a large inter-factor correlation of 0.92 was estimated in the original five-factor structure, the two strongly correlated factors were combined into one factor, resulting in a four-factor structure. The revised four-factor model showed a reasonable fit with appropriate inter-factor correlations. Additionally, the second-order model with four sub-factors was reasonable with sufficient fit and showed equal fit to the revised four-factor model. A cross-validation procedure confirmed the appropriateness of the findings. Our analysis empirically supported a four-factor structure of COHIP-K, a summarized second-order model, and the use of an integrated summary COHIP score. © 2016 Eur J Oral Sci.

  10. Beck, Asia and second modernity.

    PubMed

    Calhoun, Craig

    2010-09-01

    The work of Ulrich Beck has been important in bringing sociological attention to the ways issues of risk are embedded in contemporary globalization, in developing a theory of 'reflexive modernization', and in calling for social science to transcend 'methodological nationalism'. In recent studies, he and his colleagues help to correct for the Western bias of many accounts of cosmopolitanism and reflexive modernization, and seek to distinguish normative goals from empirical analysis. In this paper I argue that further clarification of this latter distinction is needed but hard to reach within a framework that still embeds the normative account in the idea that empirical change has a clear direction. Similar issues beset the presentation of diverse patterns in recent history as all variants of 'second modernity'. Lastly, I note that ironically, given the declared 'methodological cosmopolitanism' of the authors, the empirical studies here all focus on national cases. © London School of Economics and Political Science 2010.

  11. A new parameterization of an empirical model for wind/ocean scatterometry

    NASA Technical Reports Server (NTRS)

    Woiceshyn, P. M.; Wurtele, M. G.; Boggs, D. H.; Mcgoldrick, L. F.; Peteherych, S.

    1984-01-01

    The power law form of the SEASAT A Scatterometer System (SASS) empirical backscatter-to-wind model function does not uniformly meet the instrument performance over the range 4 to 24 m/s. Analysis indicates that the horizontal polarization (H-Pol) and vertical polarization (V-Pol) components of the benchmark SASS1 model function yield self-consistent results only for a small mid-range of speeds at larger incidence angles, and for a somewhat larger range of speeds at smaller incidence angles. Comparison of SASS1 to in situ data over the Gulf of Alaska region further underscores the shortcomings of the power law form. Finally, a physically based empirical SASS model is proposed which corrects some of the deficiencies of power law models like SASS1. The new model allows the mutual determination of sea surface wind stress and wind speed in a consistent manner from SASS backscatter measurements.
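    The power-law model function criticized above has the generic form sigma0 = a(theta, pol) * U^gamma(theta, pol), which is trivially inverted to retrieve wind speed from a backscatter measurement. The sketch below shows that form and its inversion; the coefficients are hypothetical placeholders, not SASS1 table values.

```python
# Sketch of the power-law form of an empirical scatterometer model function and
# its inversion to estimate wind speed from backscatter. Coefficients are assumed.
import numpy as np

def sigma0_power_law(u_wind, a_coef, gamma_exp):
    """Backscatter (linear units) predicted from neutral wind speed u_wind (m/s)."""
    return a_coef * u_wind ** gamma_exp

def wind_from_sigma0(sigma0, a_coef, gamma_exp):
    """Invert the power law for wind speed; valid only where the power law holds."""
    return (sigma0 / a_coef) ** (1.0 / gamma_exp)

# Hypothetical coefficients for one incidence angle / polarization cell
a_coef, gamma_exp = 2.0e-3, 1.8
u_true = np.array([4.0, 10.0, 24.0])
sigma0 = sigma0_power_law(u_true, a_coef, gamma_exp)
print(wind_from_sigma0(sigma0, a_coef, gamma_exp))   # recovers [4., 10., 24.]
```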

  12. Arbitrary-order corrections for finite-time drift and diffusion coefficients

    NASA Astrophysics Data System (ADS)

    Anteneodo, C.; Riera, R.

    2009-09-01

    We address a standard class of diffusion processes with linear drift and quadratic diffusion coefficients. These contributions to the dynamic equations can be drawn directly from data time series. However, real data are constrained to finite sampling rates, and therefore it is crucial to establish a suitable mathematical description of the required finite-time corrections. Based on Itô-Taylor expansions, we present the exact corrections to the finite-time drift and diffusion coefficients. These results allow one to reconstruct the real hidden coefficients from the empirical estimates. We also derive higher-order finite-time expressions for the third and fourth conditional moments that furnish extra theoretical checks for this class of diffusion models. The analytical predictions are compared with the numerical outcomes of representative artificial time series.
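    The finite-sampling bias discussed above is easy to see in the simplest member of this class, linear drift with constant diffusion (an Ornstein-Uhlenbeck process), where the finite-time conditional moments are known exactly and can be inverted. The sketch below compares naive conditional-moment estimates with that exact inversion; it does not reproduce the paper's general arbitrary-order corrections for state-dependent coefficients.

```python
# Sketch: finite-time (Kramers-Moyal-style) drift/diffusion estimates from a sampled
# series, and the exact finite-time inversion for the linear-drift, constant-diffusion
# (Ornstein-Uhlenbeck) special case.
import numpy as np

# --- Simulate an OU process dX = -gamma*X dt + sigma dW, sampled at interval dt ---
rng = np.random.default_rng(1)
gamma_true, sigma_true, dt, n = 2.0, 0.5, 0.05, 100_000
a = np.exp(-gamma_true * dt)
noise_sd = sigma_true * np.sqrt((1 - a**2) / (2 * gamma_true))   # exact OU transition noise
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + noise_sd * rng.normal()

# --- Naive finite-time estimates from conditional moments of the increments ---
dx = np.diff(x)
coef = np.polyfit(x[:-1], dx, 1)              # linear fit of E[dx | x]; coef[0] ~ a - 1
gamma_naive = -coef[0] / dt                   # biased low by a factor ~ (1 - gamma*dt/2)
resid = dx - np.polyval(coef, x[:-1])
d2_naive = np.var(resid) / (2 * dt)           # naive diffusion estimate ~ sigma^2 / 2

# --- Exact finite-time inversion for the linear/constant case ---
a_hat = 1.0 + coef[0]                         # estimate of e^{-gamma*dt}
gamma_hat = -np.log(a_hat) / dt
sigma2_hat = 2 * gamma_hat * np.var(resid) / (1 - a_hat**2)

print(f"naive:     gamma ~ {gamma_naive:.3f}, sigma^2/2 ~ {d2_naive:.4f}")
print(f"corrected: gamma ~ {gamma_hat:.3f}, sigma^2 ~ {sigma2_hat:.4f} (true {sigma_true**2:.4f})")
```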

  13. Childhood urinary tract infection caused by extended-spectrum β-lactamase-producing bacteria: Risk factors and empiric therapy.

    PubMed

    Uyar Aksu, Nihal; Ekinci, Zelal; Dündar, Devrim; Baydemir, Canan

    2017-02-01

    This study investigated risk factors of childhood urinary tract infection (UTI) associated with extended-spectrum β-lactamase (ESBL)-producing bacteria (ESBL-positive UTI) and evaluated antimicrobial resistance as well as empiric treatment of childhood UTI. The records of children with positive urine culture between 1 January 2008 and 31 December 2012 were evaluated. Patients with positive urine culture for ESBL-producing bacteria were defined as the ESBL-positive group, whereas patients of the same gender and similar age with positive urine culture for non-ESBL-producing bacteria were defined as the ESBL-negative group. Each ESBL-positive patient was matched with two ESBL-negative patients. The ESBL-positive and negative groups consisted of 154 and 308 patients, respectively. Potential risk factors for ESBL-positive UTI were identified as presence of underlying disease, clean intermittent catheterization (CIC), hospitalization, use of any antibiotic and history of infection in the last 3 months (P < 0.05). On logistic regression analysis, CIC, hospitalization and history of infection in the last 3 months were identified as independent risk factors. In the present study, 324 of 462 patients had empiric therapy. Empiric therapy was inappropriate in 90.3% of the ESBL-positive group and in 4.5% of the ESBL-negative group. Resistance to nitrofurantoin was similar between groups (5.1% vs 1.2%, P = 0.072); resistance to amikacin was low in the ESBL-positive group (2.6%) and there was no resistance in the ESBL-negative group. Clean intermittent catheterization, hospitalization and history of infection in the last 3 months should be considered as risk factors for ESBL-positive UTI. The combination of ampicillin plus amikacin should be taken into consideration for empiric therapy in patients with acute pyelonephritis who have the risk factors for ESBL-positive UTI. Nitrofurantoin seems to be a logical choice for the empiric therapy of cystitis. © 2016 Japan Pediatric Society.

  14. Two-dimensional simulation of eccentric photorefraction images for ametropes: factors influencing the measurement.

    PubMed

    Wu, Yifei; Thibos, Larry N; Candy, T Rowan

    2018-05-07

    Eccentric photorefraction and Purkinje image tracking are used to estimate refractive state and eye position simultaneously. Beyond vision screening, they provide insight into typical and atypical visual development. Systematic analysis of the effect of refractive error and spectacles on photorefraction data is needed to gauge the accuracy and precision of the technique. Simulation of two-dimensional, double-pass eccentric photorefraction was performed (Zemax). The inward pass included appropriate light sources, lenses and a single surface pupil plane eye model to create an extended retinal image that served as the source for the outward pass. Refractive state, as computed from the luminance gradient in the image of the pupil captured by the model's camera, was evaluated for a range of refractive errors (-15D to +15D), pupil sizes (3 mm to 7 mm) and two sets of higher-order monochromatic aberrations. Instrument calibration was simulated using -8D to +8D trial lenses at the spectacle plane for: (1) vertex distances from 3 mm to 23 mm, (2) uncorrected and corrected hyperopic refractive errors of +4D and +7D, and (3) uncorrected and corrected astigmatism of 4D at four different axes. Empirical calibration of a commercial photorefractor was also compared with a wavefront aberrometer for human eyes. The pupil luminance gradient varied linearly with refractive state for defocus less than approximately 4D (5 mm pupil). For larger errors, the gradient magnitude saturated and then reduced, leading to under-estimation of refractive state. Additional inaccuracy (up to 1D for 8D of defocus) resulted from spectacle magnification in the pupil image, which would reduce precision in situations where vertex distance is variable. The empirical calibration revealed a constant offset between the two clinical instruments. Computational modelling demonstrates the principles and limitations of photorefraction to help users avoid potential measurement errors. Factors that could cause clinically significant errors in photorefraction estimates include high refractive error, vertex distance and magnification effects of a spectacle lens, increased higher-order monochromatic aberrations, and changes in primary spherical aberration with accommodation. The impact of these errors increases with increasing defocus. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.

  15. Empirical determination of collimator scatter data for use in Radcalc commercial monitor unit calculation software: Implication for prostate volumetric modulated-arc therapy calculations.

    PubMed

    Richmond, Neil; Tulip, Rachael; Walker, Chris

    2016-01-01

    The aim of this work was to determine, by measurement and independent monitor unit (MU) check, the optimum method for determining collimator scatter for an Elekta Synergy linac with an Agility multileaf collimator (MLC) within Radcalc, a commercial MU calculation software package. The collimator scatter factors were measured for 13 field shapes defined by an Elekta Agility MLC on a Synergy linac with 6-MV photons. The value of the collimator scatter associated with each field was also calculated according to the equation Sc = Sc(mlc) + Sc(corr)(Sc(open) - Sc(mlc)), with Sc(corr) varied between 0 and 1, where Sc(open) is the value of collimator scatter calculated from the rectangular collimator-defined field and Sc(mlc) the value using only the MLC-defined field shape by applying sector integration. From this, the optimum value of the correction was determined as that which gives the minimum difference between measured and calculated Sc. Single-arc (simple fluence modulation) and dual-arc (complex fluence modulation) treatment plans were generated on the Monaco system for prostate volumetric modulated-arc therapy (VMAT) delivery. The planned MUs were verified by absolute dose measurement in phantom and by an independent MU calculation. The MU calculations were repeated with values of Sc(corr) between 0 and 1. The values of the correction yielding the minimum MU difference between treatment planning system (TPS) and check MU were established. The empirically derived value of Sc(corr) giving the best fit to the measured collimator scatter factors was 0.49. This figure, however, was not found to be optimal for either the single- or dual-arc prostate VMAT plans, which required 0.80 and 0.34, respectively, to minimize the differences between the TPS and independent-check MU. Point dose measurement of the VMAT plans demonstrated that the TPS MUs were appropriate for the delivered dose. Although the value of Sc(corr) may be obtained by direct comparison of calculation with measurement, the efficacy of the value determined for VMAT-MU calculations is very much dependent on the complexity of the MLC delivery.
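    The interpolation above, Sc = Sc(mlc) + Sc(corr)·(Sc(open) − Sc(mlc)), and the search for the Sc(corr) that best matches measurement are simple to express directly. The sketch below scans Sc(corr) over a grid and picks the value with minimum RMS deviation; the scatter factors are illustrative placeholders, not the measured data.

```python
# Minimal sketch of the collimator-scatter interpolation and a brute-force scan for
# the Sc_corr value that minimizes the mismatch with measured Sc. Placeholder data.
import numpy as np

sc_open = np.array([0.985, 0.970, 0.955, 0.940])    # from rectangular jaw-defined fields
sc_mlc  = np.array([0.975, 0.950, 0.925, 0.900])    # from MLC-defined field shapes only
sc_meas = np.array([0.980, 0.960, 0.940, 0.920])    # measured collimator scatter factors

def model_sc(sc_corr):
    return sc_mlc + sc_corr * (sc_open - sc_mlc)

grid = np.linspace(0.0, 1.0, 101)
rms = [np.sqrt(np.mean((model_sc(c) - sc_meas) ** 2)) for c in grid]
best = grid[int(np.argmin(rms))]
print(f"optimum Sc_corr = {best:.2f}, RMS deviation = {min(rms):.4f}")
```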

  16. Evaluating simplified methods for liquefaction assessment for loss estimation

    NASA Astrophysics Data System (ADS)

    Kongar, Indranil; Rossetto, Tiziana; Giovinazzi, Sonia

    2017-06-01

    Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at optimal thresholds. This paper also considers two models (HAZUS and EPOLLS) for estimation of the scale of liquefaction in terms of permanent ground deformation but finds that both models perform poorly, with correlations between observations and forecasts lower than 0.4 in all cases. Therefore these models potentially provide negligible additional value to loss estimation analysis outside of the regions for which they have been developed.
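    The model comparison above reduces to scoring a thresholded LPI forecast against observed liquefaction occurrence using binary classification rates. The sketch below computes the two rates quoted in the record (fraction of liquefied sites correctly forecast, and fraction of non-liquefied sites correctly forecast) for a few thresholds; the LPI values and outcomes are random placeholders, not the study data.

```python
# Sketch of the binary-classification scoring described above: forecast liquefaction
# wherever LPI exceeds a threshold, then score against observed occurrence.
import numpy as np

def score_threshold(lpi, observed, threshold):
    forecast = lpi >= threshold
    tpr = np.mean(forecast[observed])        # fraction of liquefied sites correctly forecast
    tnr = np.mean(~forecast[~observed])      # fraction of non-liquefied sites correctly forecast
    return tpr, tnr

rng = np.random.default_rng(0)
lpi = rng.gamma(shape=2.0, scale=3.0, size=1000)                      # placeholder LPI values
observed = rng.uniform(size=1000) < 1 / (1 + np.exp(-(lpi - 7.0)))    # placeholder outcomes

for threshold in (4, 7, 10):
    tpr, tnr = score_threshold(lpi, observed, threshold)
    print(f"LPI >= {threshold}: hits {tpr:.0%}, correct negatives {tnr:.0%}")
```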

  17. An empirical and model study on automobile market in Taiwan

    NASA Astrophysics Data System (ADS)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

    We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the possession rates of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model, each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). The companies then play games in order to obtain a larger possession rate in the market under certain rules. Numerical simulations based on the model display a developing competition process, which agrees qualitatively and quantitatively with our empirical investigation results.

  18. Soil Erosion as a stochastic process

    NASA Astrophysics Data System (ADS)

    Casper, Markus C.

    2015-04-01

    The main tools for estimating the risk and amount of erosion are different types of soil erosion models: on the one hand, there are empirically based model concepts; on the other hand, there are more physically based or process-based models. However, both types of models have substantial weak points. All empirical model concepts are only capable of providing rough estimates over larger temporal and spatial scales, and they do not account for many driving factors that are in the scope of scenario-related analysis. In addition, the physically based models contain important empirical parts, and hence the demand for universality and transferability is not met. As a common feature, we find that all models rely on parameters and input variables which are, to a certain extent, spatially and temporally averaged. A central question is whether the apparent heterogeneity of soil properties or the random nature of driving forces needs to be better considered in our modelling concepts. Traditionally, researchers have attempted to remove spatial and temporal variability through homogenization. However, homogenization has been achieved through physical manipulation of the system, or by statistical averaging procedures. The price for obtaining these homogenized (average) model concepts of soils and soil-related processes has often been a failure to recognize the profound importance of heterogeneity in many of the properties and processes that we study. Soil infiltrability and erosion resistance (also called "critical shear stress" or "critical stream power") are in particular the most important empirical factors of physically based erosion models. The erosion resistance is theoretically a substrate-specific parameter, but in reality the threshold where soil erosion begins is determined experimentally. The soil infiltrability is often calculated with empirical relationships (e.g. based on grain size distribution); consequently, to better fit reality, this value needs to be corrected experimentally. To overcome this disadvantage of our current models, soil erosion models are needed that are able to use stochastic variables and parameter distributions directly. There are only some minor approaches in this direction. The most advanced is the model "STOSEM" proposed by Sidorchuk in 2005. In this model, only a small part of the soil erosion processes is described, namely aggregate detachment and aggregate transport by flowing water. The concept is highly simplified; for example, many parameters are temporally invariant. Nevertheless, the main problem is that our existing measurements and experiments are not geared to provide stochastic parameters (e.g. as probability density functions); in the best case they deliver a statistical validation of the mean values. Again, we get effective parameters, spatially and temporally averaged. There is an urgent need for laboratory and field experiments on overland flow structure, raindrop effects and erosion rate which deliver information on the spatial and temporal structure of soil and surface properties and processes.

  19. Selecting informative subsets of sparse supermatrices increases the chance to find correct trees.

    PubMed

    Misof, Bernhard; Meyer, Benjamin; von Reumont, Björn Marcus; Kück, Patrick; Misof, Katharina; Meusemann, Karen

    2013-12-03

    Character matrices with extensive missing data are frequently used in phylogenomics, with potentially detrimental effects on the accuracy and robustness of tree inference. Therefore, many investigators select taxa and genes with high data coverage. A drawback of these selections is their exclusive reliance on data coverage without consideration of the actual signal in the data, so they may not deliver optimal data matrices in terms of potential phylogenetic signal. In order to circumvent this problem, we have developed a heuristic, implemented in the software mare, which (1) assesses the information content of genes in supermatrices using a measure of potential signal combined with data coverage and (2) reduces supermatrices to submatrices with high total information content using a simple hill-climbing procedure. We conducted simulation studies using matrices of 50 taxa × 50 genes with heterogeneous phylogenetic signal among genes and data coverage between 10% and 30%. With these matrices, Maximum Likelihood (ML) tree reconstructions failed to recover correct trees. Selecting a data subset with the approach proposed here increased the chance of recovering correct partial trees more than 10-fold. The selection of data subsets with the proposed hill-climbing procedure performed well whether it considered the information content of genes or just simple presence/absence information. We also applied our approach to an empirical data set addressing questions of vertebrate systematics. For this empirical dataset, selecting a data subset with high information content that supported a tree with high average bootstrap support was most successful when the information content of genes was considered. Our analyses of simulated and empirical data demonstrate that sparse supermatrices can be reduced on a formal basis, outperforming the commonly used simple selections of taxa and genes with high data coverage.
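
    The sketch below illustrates the general idea of a hill-climbing reduction of a sparse supermatrix. The presence matrix, the information-content score, and the stopping rule are hypothetical stand-ins and are not the actual measure implemented in mare.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy sparse supermatrix: 1 = gene present for taxon, 0 = missing (about 20% coverage)
presence = (rng.random((50, 50)) < 0.2).astype(float)
# hypothetical per-cell "potential signal" scores in [0, 1]
signal = presence * rng.random((50, 50))

def info_content(taxa, genes):
    """Toy objective: mean signal per cell of the submatrix, weighted by its data coverage."""
    sub = signal[np.ix_(taxa, genes)]
    cov = presence[np.ix_(taxa, genes)]
    if cov.sum() == 0:
        return 0.0
    return (sub.sum() / cov.size) * (cov.sum() / cov.size)

taxa = list(range(presence.shape[0]))
genes = list(range(presence.shape[1]))
best = info_content(taxa, genes)

improved = True
while improved and len(taxa) > 3 and len(genes) > 1:
    improved = False
    # try dropping the single taxon or gene whose removal most increases the score
    candidates = [("taxon", t) for t in taxa] + [("gene", g) for g in genes]
    scores = []
    for kind, idx in candidates:
        t = [x for x in taxa if not (kind == "taxon" and x == idx)]
        g = [x for x in genes if not (kind == "gene" and x == idx)]
        scores.append((info_content(t, g), kind, idx))
    top = max(scores)
    if top[0] > best:
        best, kind, idx = top
        (taxa if kind == "taxon" else genes).remove(idx)
        improved = True

print(len(taxa), "taxa and", len(genes), "genes retained, score", round(best, 4))
```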

  20. Airborne electromagnetic bathymetry investigations in Port Lincoln, South Australia - comparison with an equivalent floating transient electromagnetic system

    NASA Astrophysics Data System (ADS)

    Vrbancich, Julian

    2011-09-01

    Helicopter time-domain airborne electromagnetic (AEM) methodology is being investigated as a reconnaissance technique for bathymetric mapping in shallow coastal waters, especially in areas affected by water turbidity where light detection and ranging (LIDAR) and hyperspectral techniques may be limited. Previous studies in Port Lincoln, South Australia, used a floating AEM time-domain system to provide an upper limit to the expected bathymetric accuracy based on current technology for AEM systems. The survey lines traced by the towed floating system were also flown with an airborne system using the same transmitter and receiver electronic instrumentation, on two separate occasions. On the second occasion, significant improvements had been made to the instrumentation to reduce the system self-response at early times. A comparison of the interpreted water depths obtained from the airborne and floating systems is presented, showing the degradation in bathymetric accuracy obtained from the airborne data. An empirical data correction method based on modelled and observed EM responses over deep seawater (i.e. a quasi-half-space response) at varying survey altitudes, combined with the seawater conductivity measured during the survey, can lead to significant improvements in interpreted water depths and serves as a useful method for checking system calibration. Another empirical data correction method, based on observed and modelled EM responses in shallow water, was shown to lead to similar improvements in interpreted water depths; however, this procedure is notably inferior to the quasi-half-space approach because more parameters need to be assumed in order to compute the modelled EM response. A comparison between the results of the two airborne surveys in Port Lincoln shows that uncorrected data obtained from the second airborne survey give good agreement with known water depths without the need to apply any empirical corrections. This result significantly reduces the data-processing time, thereby enabling the AEM method to serve as a rapid reconnaissance technique for bathymetric mapping.
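
    As a rough illustration of the quasi-half-space calibration idea, the sketch below scales observed responses by an altitude-dependent factor derived from modelled versus observed responses over deep seawater. All numerical values and the interpolation scheme are hypothetical.

```python
import numpy as np

# hypothetical calibration data collected over deep seawater (quasi half-space):
# observed and modelled EM responses (arbitrary units) at several survey altitudes
alt_cal = np.array([30.0, 40.0, 50.0, 60.0])    # altitude, m
obs_cal = np.array([105.0, 82.0, 64.0, 51.0])   # observed half-space responses
mod_cal = np.array([100.0, 80.0, 63.0, 50.5])   # modelled from measured seawater conductivity

gain = mod_cal / obs_cal                        # altitude-dependent correction factor

def correct(response, altitude):
    """Scale an observed response by the correction factor interpolated in altitude."""
    return response * np.interp(altitude, alt_cal, gain)

# apply to a survey sounding before inverting it for water depth
print(correct(73.2, 44.0))
```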

  1. Impact of quality of evidence on the strength of recommendations: an empirical study

    PubMed Central

    Djulbegovic, Benjamin; Trikalinos, Thomas A; Roback, John; Chen, Ren; Guyatt, Gordon

    2009-01-01

    Background Evidence is necessary but not sufficient for decision-making, such as making recommendations by clinical practice guideline panels. However, the fundamental premise of evidence-based medicine (EBM) rests on the assumed link between the quality of evidence and "truth" and/or correctness in making guideline recommendations. If this assumption is accurate, then the quality of evidence ought to play a key role in making guideline recommendations. Surprisingly, and despite the widespread penetration of EBM in health care, there has been no empirical research to date investigating the impact of quality of evidence on the strength of recommendations made by guideline panels. Methods The American Association of Blood Banking (AABB) recently convened a 12-member panel to develop clinical practice guidelines (CPG) for the use of fresh-frozen plasma (FFP) for 6 different clinical indications. The panel was instructed that 4 factors should play a role in making recommendations: quality of evidence, uncertainty about the balance between desirable (benefits) and undesirable effects (harms), uncertainty or variability in values and preferences, and uncertainty about whether the intervention represents a wise use of resources (costs). Each member of the panel was asked to make his/her final judgments on the strength of recommendation and the overall quality of the body of evidence. "Voting" was anonymous and was based on the use of the GRADE (Grading quality of evidence and strength of recommendations) system, which clearly distinguishes between quality of evidence and strength of recommendations. Results Despite the fact that many factors play a role in formulating CPG recommendations, we show that when the quality of evidence is higher, the probability of making a strong recommendation for or against an intervention dramatically increases. The probability of making a strong recommendation was 62% when the evidence was "moderate", but only 23% and 13% when the evidence was "low" or "very low", respectively. Conclusion We report the first empirical evaluation of the relationship between quality of evidence pertinent to a clinical question and the strength of the corresponding guideline recommendations. Understanding the relationship between quality of evidence and the probability of making a (strong) recommendation has profound implications for the science of quality measurement in health care. PMID:19622148

  2. Genotype imputation in a coalescent model with infinitely-many-sites mutation

    PubMed Central

    Huang, Lucy; Buzbas, Erkan O.; Rosenberg, Noah A.

    2012-01-01

    Empirical studies have identified population-genetic factors as important determinants of the properties of genotype-imputation accuracy in imputation-based disease association studies. Here, we develop a simple coalescent model of three sequences that we use to explore the theoretical basis for the influence of these factors on genotype-imputation accuracy, under the assumption of infinitely-many-sites mutation. Employing a demographic model in which two populations diverged at a given time in the past, we derive the approximate expectation and variance of imputation accuracy in a study sequence sampled from one of the two populations, choosing between two reference sequences, one sampled from the same population as the study sequence and the other sampled from the other population. We show that under this model, imputation accuracy—as measured by the proportion of polymorphic sites that are imputed correctly in the study sequence—increases in expectation with the mutation rate, the proportion of the markers in a chromosomal region that are genotyped, and the time to divergence between the study and reference populations. Each of these effects derives largely from an increase in information available for determining the reference sequence that is genetically most similar to the sequence targeted for imputation. We analyze as a function of divergence time the expected gain in imputation accuracy in the target using a reference sequence from the same population as the target rather than from the other population. Together with a growing body of empirical investigations of genotype imputation in diverse human populations, our modeling framework lays a foundation for extending imputation techniques to novel populations that have not yet been extensively examined. PMID:23079542

  3. HIV-related sexual risk behavior among African American adolescent girls.

    PubMed

    Danielson, Carla Kmett; Walsh, Kate; McCauley, Jenna; Ruggiero, Kenneth J; Brown, Jennifer L; Sales, Jessica M; Rose, Eve; Wingood, Gina M; Diclemente, Ralph J

    2014-05-01

    Latent class analysis (LCA) is a useful statistical tool that can be used to enhance understanding of how various patterns of combined sexual behavior risk factors may confer differential levels of HIV infection risk and to identify subtypes among African American adolescent girls. Data for this analysis were derived from baseline assessments completed prior to randomization in an HIV prevention trial. Participants were African American girls (n=701) aged 14-20 years presenting to sexual health clinics. Girls completed an audio computer-assisted self-interview, which assessed a range of variables regarding sexual history and current and past sexual behavior. Two latent classes were identified, with class-membership probabilities for the two groups in this model of 0.89 and 0.88, respectively. In the final multivariate model, class 1 (the "higher risk" group; n=331) was distinguished by a higher likelihood of >5 lifetime sexual partners, having sex while high on alcohol/drugs, less frequent condom use, and a history of sexually transmitted diseases (STDs), when compared with class 2 (the "lower risk" group; n=370). The derived model correctly classified 85.3% of participants into the two groups and accounted for 71% of the variance in the latent HIV-related sexual behavior risk variable. The higher risk class also had worse scores on all hypothesized correlates (e.g., self-esteem, history of sexual assault or physical abuse) relative to the lower risk class. Sexual health clinics represent a unique point of access for delivering HIV-related sexual risk behavior interventions by capitalizing on contact with adolescent girls when they present for services. Four empirically supported risk factors differentiated higher versus lower HIV risk. Replication of these findings is warranted and may offer an empirical basis for parsimonious screening recommendations for girls presenting for sexual healthcare services.

  4. A Multicenter Evaluation of Prolonged Empiric Antibiotic Therapy in Adult ICUs in the United States.

    PubMed

    Thomas, Zachariah; Bandali, Farooq; Sankaranarayanan, Jayashri; Reardon, Tom; Olsen, Keith M

    2015-12-01

    The purpose of this study is to determine the rate of prolonged empiric antibiotic therapy in adult ICUs in the United States. Our secondary objective is to examine the relationship between the prolonged empiric antibiotic therapy rate and certain ICU characteristics. Multicenter, prospective, observational, 72-hour snapshot study. Sixty-seven ICUs from 32 hospitals in the United States. Nine hundred ninety-eight patients admitted to the ICU between midnight on June 20, 2011, and June 21, 2011, were included in the study. None. Antibiotic orders were categorized as prophylactic, definitive, empiric, or prolonged empiric antibiotic therapy. Prolonged empiric antibiotic therapy was defined as empiric antibiotics continued for at least 72 hours in the absence of adjudicated infection. Standard definitions from the Centers for Disease Control and Prevention were used to determine infection. The prolonged empiric antibiotic therapy rate was determined as the total number of empiric antibiotics continued for at least 72 hours divided by the total number of empiric antibiotics. Univariate analysis of factors associated with the ICU prolonged empiric antibiotic therapy rate was conducted using Student's t test. A total of 660 unique antibiotics were prescribed as empiric therapy to 364 patients. Of the empiric antibiotics, 333 of 660 (50%) were continued for at least 72 hours in instances where Centers for Disease Control and Prevention infection criteria were not met. Suspected pneumonia accounted for approximately 60% of empiric antibiotic use. The most frequently prescribed empiric antibiotics were vancomycin and piperacillin/tazobactam. ICUs that utilized invasive techniques for the diagnosis of ventilator-associated pneumonia had lower rates of prolonged empiric antibiotic therapy than those that did not, 45.1% versus 59.5% (p = 0.03). No other institutional factor was significantly associated with the prolonged empiric antibiotic therapy rate. Half of all empiric antibiotics ordered in critically ill patients are continued for at least 72 hours in the absence of adjudicated infection. Additional studies are needed to confirm these findings and determine the risks and benefits of prolonged empiric therapy in the critically ill.
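
    The study's rate definition is a simple ratio; a minimal sketch using the numbers quoted in the abstract (333 of 660 empiric antibiotics) is shown below.

```python
def peat_rate(continued_72h, total_empiric):
    """Prolonged empiric antibiotic therapy rate, as defined in the study."""
    return continued_72h / total_empiric

# numbers reported in the abstract
print(f"{peat_rate(333, 660):.1%}")   # approximately 50%
```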

  5. Study on the leakage flow through a clearance gap between two stationary walls

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Billdal, J. T.; Nielsen, T. K.; Brekke, H.

    2012-11-01

    In the present paper, the leakage flow in the clearance gap between stationary walls was studied experimentally, theoretically and numerically by computational fluid dynamics (CFD) in order to find the relationship between leakage flow, pressure difference and clearance gap. The experimental set-up of the clearance gap between two stationary walls is a simplification of the gap between the guide vane faces and facing plates in Francis turbines. This model was built in the Waterpower Laboratory at the Norwegian University of Science and Technology (NTNU). An empirical formula for calculating the leakage flow rate between the two stationary walls was derived from the experimental study. The experimental model was also simulated with computational fluid dynamics, employing the ANSYS CFX commercial software, in order to study the flow structure. Both the numerical simulation results and the empirical formula results are in good agreement with the experimental results. The correctness of the empirical formula is verified by the experimental data, and the formula has proven very useful for quickly predicting the leakage flow rate past the guide vanes of hydraulic turbines.

  6. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.
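
    A minimal sketch of how such a correction factor enters a four-probe calculation is given below. It assumes the textbook infinite-sheet result for a square probe array, R_s = (2π/ln 2)·V/I, and a purely multiplicative F; the actual derivation and values of F in this series of papers are more detailed.

```python
import math

def sheet_resistance_square_array(voltage, current, F=1.0):
    """
    Sheet resistance from a square four-probe measurement.
    Ideal infinite-sheet geometry gives R_s = (2*pi / ln 2) * V / I;
    F is the finite-size resistivity correction factor.
    """
    return F * (2.0 * math.pi / math.log(2.0)) * voltage / current

def resistivity(voltage, current, thickness_m, F=1.0):
    """Bulk resistivity of a thin rectangular sample: rho = R_s * t."""
    return sheet_resistance_square_array(voltage, current, F) * thickness_m

# illustrative numbers: 1.2 mV at 1 mA on a 0.5 mm thick sample with F = 0.85
print(resistivity(1.2e-3, 1.0e-3, 0.5e-3, F=0.85), "ohm m")
```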

  7. Intensity-Value Corrections for Integrating Sphere Measurements of Solid Samples Measured Behind Glass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Timothy J.; Bernacki, Bruce E.; Redding, Rebecca L.

    2014-11-01

    Accurate and calibrated directional-hemispherical reflectance spectra of solids are important for both in situ and remote sensing. Many solids are in the form of powders or granules, and to measure their diffuse reflectance spectra in the laboratory it is often necessary to place the samples behind a transparent medium such as glass for the ultraviolet (UV), visible, or near-infrared spectral regions. Using both experimental methods and a simple optical model, we demonstrate that glass (fused quartz in our case) leads to artifacts in the reflectance values. We report our observations that the measured reflectance values, for both hemispherical and diffuse reflectance, are distorted by the additional reflections arising at the air–quartz and sample–quartz interfaces. The values are dependent on the sample reflectance and are offset in intensity in the hemispherical case, leading to measured values up to ~6% too high for a 2% reflectance surface, ~3.8% too high for 10% reflecting surfaces, approximately correct for 40–60% diffuse-reflecting surfaces, and ~1.5% too low for 99% reflecting Spectralon® surfaces. For the case of diffuse-only reflectance, the measured values are uniformly too low due to the polished glass, with differences of nearly 6% for a 99% reflecting matte surface. The deviations arise from the added reflections from the quartz surfaces, as verified by both theory and experiment, and depend on sphere design. Finally, empirical correction factors were implemented into post-processing software to redress the artifact for hemispherical and diffuse reflectance data across the 300–2300 nm range.
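
    The sketch below illustrates one way such a post-processing correction could be applied, interpolating between the hemispherical offsets quoted in the abstract. It assumes those percentages are absolute reflectance offsets, which is an interpretation; the actual correction factors in the paper's software may take a different form.

```python
import numpy as np

# approximate hemispherical offsets quoted in the abstract
# (measured value minus true value), interpreted as absolute reflectance units
true_refl = np.array([0.02, 0.10, 0.40, 0.60, 0.99])
offset    = np.array([+0.060, +0.038, 0.000, 0.000, -0.015])
measured  = true_refl + offset            # monotonically increasing, safe for interpolation

def correct_hemispherical(measured_value):
    """Invert the offset by interpolating measured -> true reflectance."""
    return np.interp(measured_value, measured, true_refl)

# a ~10% reflector measured at ~13.8% behind quartz maps back to ~0.10
print(correct_hemispherical(0.138))
```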

  8. Can small field diode correction factors be applied universally?

    PubMed

    Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R

    2014-09-01

    Diode detectors are commonly used in dosimetry but have been reported to over-respond in small fields. Diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be applied universally over a range of irradiation conditions, including beams of different qualities. A mathematical relation for diode over-response as a function of field size was developed using previously published experimental data in which diodes were compared to an air-core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found not to depend strongly on the type of linac, the method of collimation or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.

  9. An empirical approach to improving tidal predictions using recent real-time tide gauge data

    NASA Astrophysics Data System (ADS)

    Hibbert, Angela; Royston, Samantha; Horsburgh, Kevin J.; Leach, Harry

    2014-05-01

    Classical harmonic methods of tidal prediction are often problematic in estuarine environments due to the distortion of tidal fluctuations in shallow water, which results in a disparity between predicted and observed sea levels. This is of particular concern in the Bristol Channel, where the error associated with tidal predictions is potentially greater due to an unusually large tidal range of around 12 m. As such predictions are fundamental to the short-term forecasting of High Water (HW) extremes, it is vital that alternative solutions are found. In a pilot study, using a year-long observational sea level record from the Port of Avonmouth in the Bristol Channel, the UK National Tidal and Sea Level Facility (NTSLF) tested the potential for reducing tidal prediction errors, using three alternatives to the Harmonic Method of tidal prediction. The three methods evaluated were (1) the use of Artificial Neural Network (ANN) models, (2) the Species Concordance technique and (3) a simple empirical procedure for correcting Harmonic Method High Water predictions based upon a few recent observations (referred to as the Empirical Correction Method). This latter method was then successfully applied to sea level records from an additional 42 of the 45 tide gauges that comprise the UK Tide Gauge Network. Consequently, it is to be incorporated into the operational systems of the UK Coastal Monitoring and Forecasting Partnership in order to improve short-term sea level predictions for the UK and in particular, the accurate estimation of HW extremes.
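
    The abstract does not give the exact form of the Empirical Correction Method, so the sketch below shows only the general idea: shifting a harmonic high-water prediction by the mean error of a few recent observed high waters. All numbers are illustrative.

```python
import numpy as np

def corrected_hw_prediction(predicted_hw, recent_obs_hw, recent_pred_hw, n_recent=5):
    """
    Adjust a harmonic high-water prediction using the mean error of the most
    recent observed high waters (a simple persistence-type correction).
    """
    obs = np.asarray(recent_obs_hw, dtype=float)
    pred = np.asarray(recent_pred_hw, dtype=float)
    residuals = obs[-n_recent:] - pred[-n_recent:]
    return predicted_hw + residuals.mean()

# example: last five HW levels (m above datum) versus their harmonic predictions
obs  = [6.81, 6.95, 7.02, 6.88, 6.90]
pred = [6.70, 6.85, 6.93, 6.80, 6.83]
print(corrected_hw_prediction(7.10, obs, pred))   # next prediction, adjusted upward
```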

  10. Evaluation of rock classifications at B. C. Rail tumbler ridge tunnels

    NASA Astrophysics Data System (ADS)

    Kaiser, Peter K.; Mackay, C.; Gale, A. D.

    1986-10-01

    Construction of four single track railway tunnels through sedimentary rocks in central British Columbia, Canada, provided an excellent opportunity to compare various rock mass classification systems and to evaluate their applicability to the local geology. The tunnels were excavated by conventional drilling and blasting techniques and supported primarily with rock bolts and shotcrete, and with steel sets in some sections. After a brief project description including tunnel construction techniques, local geology and groundwater conditions, the data collection and field mapping procedure is reviewed. Four rock mass classification systems (RQD, RSR, RMR, Q) for empirical tunnel design are reviewed, and relevant factors for the data interpretation are discussed. In comparing and evaluating the performance of these classification systems, three aspects received special attention. The tunnel support predicted by the various systems was compared to the support installed; a unique correlation between the two most useful and most frequently applied classifications, the RMR and Q systems, was established and assessed; and finally, the non-support limit and size effect were evaluated. It is concluded that the Q-system best predicted the required tunnel support and that the RMR was only adequate after adjustment for the influence of opening size. Correction equations for opening-size effects are presented for the RMR system. The RSR and RQD systems are not recommended for empirical tunnel design.
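
    For context, the sketch below uses the widely cited Bieniawski (1976) correlation RMR ≈ 9 ln Q + 44 between the two systems; the site-specific correlation established for the Tumbler Ridge tunnels may have different constants.

```python
import math

def rmr_from_q(q_value):
    """
    Commonly cited empirical correlation between the Q-system and RMR
    (Bieniawski, 1976): RMR ~ 9 * ln(Q) + 44. Site-specific correlations,
    such as the one established in this study, may differ.
    """
    return 9.0 * math.log(q_value) + 44.0

for q in (0.1, 1.0, 10.0, 40.0):
    print(f"Q = {q:5.1f}  ->  RMR ~ {rmr_from_q(q):5.1f}")
```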

  11. Optimal rates for phylogenetic inference and experimental design in the era of genome-scale datasets.

    PubMed

    Dornburg, Alex; Su, Zhuo; Townsend, Jeffrey P

    2018-06-25

    With the rise of genome-scale datasets, there has been a call for increased data scrutiny and careful selection of loci appropriate for attempting the resolution of a phylogenetic problem. Such loci are desired to maximize phylogenetic information content while minimizing the risk of homoplasy. Theory posits the existence of characters that evolve under such an optimum rate, and efforts to determine optimal rates of inference have been a cornerstone of phylogenetic experimental design for over two decades. However, both theoretical and empirical investigations of optimal rates have varied dramatically in their conclusions, ranging from no relationship to a tight relationship between the rate of change and phylogenetic utility. Here we synthesize these apparently contradictory views, demonstrating both the empirical and theoretical conditions under which each is correct. We find that optimal rates of characters, not genes, are generally robust to most experimental design decisions. Moreover, consideration of site rate heterogeneity within a given locus is critical to accurate predictions of utility. Factors such as taxon sampling or the targeted number of characters providing support for a topology are additionally critical to predictions of phylogenetic utility based on the rate of character change. Further, optimality of rates and predictions of phylogenetic utility are not equivalent, demonstrating the need for further development of a comprehensive theory of phylogenetic experimental design.

  12. Endohedral gallide cluster superconductors and superconductivity in ReGa5

    PubMed Central

    Xie, Weiwei; Luo, Huixia; Phelan, Brendan F.; Klimczuk, Tomasz; Cevallos, Francois Alexandre; Cava, Robert Joseph

    2015-01-01

    We present transition metal-embedded (T@Gan) endohedral Ga-clusters as a favorable structural motif for superconductivity and develop empirical, molecule-based, electron counting rules that govern the hierarchical architectures that the clusters assume in binary phases. Among the binary T@Gan endohedral cluster systems, Mo8Ga41, Mo6Ga31, Rh2Ga9, and Ir2Ga9 are all previously known superconductors. The well-known exotic superconductor PuCoGa5 and related phases are also members of this endohedral gallide cluster family. We show that electron-deficient compounds like Mo8Ga41 prefer architectures with vertex-sharing gallium clusters, whereas electron-rich compounds, like PdGa5, prefer edge-sharing cluster architectures. The superconducting transition temperatures are highest for the electron-poor, corner-sharing architectures. Based on this analysis, the previously unknown endohedral cluster compound ReGa5 is postulated to exist at an intermediate electron count and a mix of corner sharing and edge sharing cluster architectures. The empirical prediction is shown to be correct and leads to the discovery of superconductivity in ReGa5. The Fermi levels for endohedral gallide cluster compounds are located in deep pseudogaps in the electronic densities of states, an important factor in determining their chemical stability, while at the same time limiting their superconducting transition temperatures. PMID:26644566

  13. Uncertainty of rotating shadowband irradiometers and Si-pyranometers including the spectral irradiance error

    NASA Astrophysics Data System (ADS)

    Wilbert, Stefan; Kleindiek, Stefan; Nouri, Bijan; Geuder, Norbert; Habte, Aron; Schwandt, Marko; Vignola, Frank

    2016-05-01

    Concentrating solar power projects require accurate direct normal irradiance (DNI) data, including uncertainty specifications, for plant layout and cost calculations. Ground measured data are necessary to obtain the required level of accuracy and are often obtained with Rotating Shadowband Irradiometers (RSI) that use photodiode pyranometers and correction functions to account for systematic effects. The uncertainty of Si-pyranometers has been investigated, but so far mostly empirical studies have been published, or decisive uncertainty influences had to be estimated from experience in analytical studies. One of the most crucial estimated influences is the spectral irradiance error, because Si-photodiode pyranometers only detect visible and near-infrared radiation and have a spectral response that varies strongly within this wavelength interval. Furthermore, analytical studies did not discuss the role of correction functions and the uncertainty introduced by imperfect shading. In order to further improve the bankability of RSI and Si-pyranometer data, a detailed uncertainty analysis following the Guide to the Expression of Uncertainty in Measurement (GUM) has been carried out. The study defines a method for the derivation of the spectral error and spectral uncertainties and presents quantitative values of the spectral and overall uncertainties. Data from the PSA station in southern Spain were selected for the analysis. Average standard uncertainties for corrected 10 min data of 2 % for global horizontal irradiance (GHI) and 2.9 % for DNI (for GHI and DNI over 300 W/m²) were found for the 2012 yearly dataset when separate GHI and DHI calibration constants were used. The uncertainty at 1 min resolution was also analyzed. The effect of correction functions is significant. The uncertainties found in this study are consistent with the results of previous empirical studies.
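
    A GUM-style combination of independent uncertainty contributions reduces to a root-sum-of-squares of sensitivity-weighted components, as sketched below. The listed contributions and their magnitudes are purely illustrative and are not the values derived in this study.

```python
import math

def combined_standard_uncertainty(components):
    """
    GUM-style combination of independent uncertainty contributions:
    u_c = sqrt(sum((c_i * u_i)**2)), with c_i the sensitivity coefficients.
    """
    return math.sqrt(sum((c * u) ** 2 for c, u in components))

# illustrative (hypothetical) relative contributions for a corrected DNI value, in %
components = [
    (1.0, 1.5),   # calibration of the photodiode pyranometer
    (1.0, 1.8),   # residual spectral irradiance error after correction functions
    (1.0, 1.2),   # correction functions, imperfect shading, alignment, soiling
]
u_c = combined_standard_uncertainty(components)
print(f"combined standard uncertainty ~ {u_c:.1f} %  (expanded, k=2: {2 * u_c:.1f} %)")
```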

  14. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.

    2008-12-01

    Empirically, explosions have been discriminated from natural earthquakes using regional amplitude ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining whether there is any relationship between the observed P/S and the point source variability revealed by longer period full waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea-Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.

  15. Uncertainty of Rotating Shadowband Irradiometers and Si-Pyranometers Including the Spectral Irradiance Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilbert, Stefan; Kleindiek, Stefan; Nouri, Bijan

    2016-05-31

    Concentrating solar power projects require accurate direct normal irradiance (DNI) data, including uncertainty specifications, for plant layout and cost calculations. Ground measured data are necessary to obtain the required level of accuracy and are often obtained with Rotating Shadowband Irradiometers (RSI) that use photodiode pyranometers and correction functions to account for systematic effects. The uncertainty of Si-pyranometers has been investigated, but so far mostly empirical studies have been published, or decisive uncertainty influences had to be estimated from experience in analytical studies. One of the most crucial estimated influences is the spectral irradiance error, because Si-photodiode pyranometers only detect visible and near-infrared radiation and have a spectral response that varies strongly within this wavelength interval. Furthermore, analytical studies did not discuss the role of correction functions and the uncertainty introduced by imperfect shading. In order to further improve the bankability of RSI and Si-pyranometer data, a detailed uncertainty analysis following the Guide to the Expression of Uncertainty in Measurement (GUM) has been carried out. The study defines a method for the derivation of the spectral error and spectral uncertainties and presents quantitative values of the spectral and overall uncertainties. Data from the PSA station in southern Spain were selected for the analysis. Average standard uncertainties for corrected 10 min data of 2% for global horizontal irradiance (GHI) and 2.9% for DNI (for GHI and DNI over 300 W/m²) were found for the 2012 yearly dataset when separate GHI and DHI calibration constants were used. The uncertainty at 1 min resolution was also analyzed. The effect of correction functions is significant. The uncertainties found in this study are consistent with the results of previous empirical studies.

  16. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons.

    PubMed

    Mertz, Marcel; Sofaer, Neema; Strech, Daniel

    2014-09-27

    The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations ("reason mentions") that were identified by the review to represent a reason in a given author's publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. We received 19 responses, of which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct; for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved?

  17. Review of Empirical Studies on Impact of Religion, Religiosity and Spirituality as Protective Factors

    ERIC Educational Resources Information Center

    Salgado, Ana C.

    2014-01-01

    The purpose of this article is to review the empirical research supporting the positive impact of religion, religiosity and spirituality as protective factors in various areas of human life. An analysis of each variable is performed individually and collectively. Among the conclusions of this work, the research shows that they help people to have…

  18. Empirical Study of the Role of Government Support and Success Factors in Industry-University-Institute Cooperation

    ERIC Educational Resources Information Center

    Zhimin, Guan; Zhongpeng, Cao; Jin, Tao

    2016-01-01

    Empirical research methods were used to study the state of industry-university-institute collaboration in China and the factors influencing the results of cooperation between members of technological innovation alliances, from the dual perspectives of enterprises and universities/research institutes. On the basis of questionnaire surveys of 100…

  19. Are stock market returns related to the weather effects? Empirical evidence from Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, Tsangyao; Nieh, Chien-Chung; Yang, Ming Jing; Yang, Tse-Yu

    2006-05-01

    In this study, we employ a recently developed econometric technique of the threshold model with the GJR-GARCH process on error terms to investigate the relationships between weather factors and stock market returns in Taiwan using daily data for the period of 1 July 1997-22 October 2003. The major weather factors studied include temperature, humidity, and cloud cover. Our empirical evidence shows that temperature and cloud cover are two important weather factors that affect the stock returns in Taiwan. Our empirical findings further support the previous arguments that advocate the inclusion of economically neutral behavioral variables in asset pricing models. These results also have significant implications for individual investors and financial institutions planning to invest in the Taiwan stock market.
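
    A minimal sketch of a threshold mean equation with GJR-GARCH errors is shown below, using the Python arch package. The threshold value, the regressors and the synthetic data are placeholders and do not reproduce the authors' specification.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(2)
n = 1500
returns = pd.Series(rng.normal(0.0, 1.2, n))       # daily stock returns (%), synthetic
temperature = pd.Series(rng.normal(25.0, 5.0, n))  # daily temperature (deg C), synthetic

threshold = 28.0                                   # hypothetical regime threshold
hot = (temperature > threshold).astype(float)

# exogenous regressors in the mean equation: the weather factor in each regime
x = pd.DataFrame({"temp_hot": temperature * hot,
                  "temp_cool": temperature * (1.0 - hot)})

# mean="LS" regresses returns on x; o=1 adds the asymmetric (GJR) term to the GARCH variance
model = arch_model(returns, x=x, mean="LS", vol="GARCH", p=1, o=1, q=1)
result = model.fit(disp="off")
print(result.params)
```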

  20. Social and Interpersonal Factors Relating to Adolescent Suicidality: A Review of the Literature

    PubMed Central

    King, Cheryl A.; Merchant, Christopher R.

    2009-01-01

    This article reviews the empirical literature concerning social and interpersonal variables as risk factors for adolescent suicidality (suicidal ideation, suicidal behavior, and death by suicide). It also describes major social constructs in theories of suicide and the extent to which studies support their importance to adolescent suicidality. PsychINFO and PubMed searches were conducted for empirical studies focused on family and friend support, social isolation, peer victimization, physical/sexual abuse, or emotional neglect as these relate to adolescent suicidality. Empirical findings converge in documenting the importance of multiple social and interpersonal factors to adolescent suicidality. Research support for the social constructs in several major theories of suicide is summarized, and research challenges are discussed. PMID:18576200

  1. Autonomous Vehicles Require Socio-Political Acceptance—An Empirical and Philosophical Perspective on the Problem of Moral Decision Making

    PubMed Central

    Bergmann, Lasse T.; Schlicht, Larissa; Meixner, Carmen; König, Peter; Pipa, Gordon; Boshammer, Susanne; Stephan, Achim

    2018-01-01

    Autonomous vehicles, though having enormous potential, face a number of challenges. As computer systems interacting with society on a large scale, and with human beings in particular, they will encounter situations that require moral assessment. What will count as right behavior in such situations depends on which factors are considered to be both morally justified and socially acceptable. In an empirical study we investigated which factors people recognize as relevant in driving situations. The study put subjects in several “dilemma” situations designed to isolate different and potentially relevant factors. Subjects showed a surprisingly high willingness to sacrifice themselves to save others, took the age of potential victims in a crash into consideration, and were willing to swerve onto a sidewalk if this saved more lives. The empirical insights are intended to provide a starting point for a discussion that ultimately yields societal agreement, in which the empirical insights are balanced with philosophical considerations. PMID:29541023

  2. Autonomous Vehicles Require Socio-Political Acceptance-An Empirical and Philosophical Perspective on the Problem of Moral Decision Making.

    PubMed

    Bergmann, Lasse T; Schlicht, Larissa; Meixner, Carmen; König, Peter; Pipa, Gordon; Boshammer, Susanne; Stephan, Achim

    2018-01-01

    Autonomous vehicles, though having enormous potential, face a number of challenges. As computer systems interacting with society on a large scale, and with human beings in particular, they will encounter situations that require moral assessment. What will count as right behavior in such situations depends on which factors are considered to be both morally justified and socially acceptable. In an empirical study we investigated which factors people recognize as relevant in driving situations. The study put subjects in several "dilemma" situations designed to isolate different and potentially relevant factors. Subjects showed a surprisingly high willingness to sacrifice themselves to save others, took the age of potential victims in a crash into consideration, and were willing to swerve onto a sidewalk if this saved more lives. The empirical insights are intended to provide a starting point for a discussion that ultimately yields societal agreement, in which the empirical insights are balanced with philosophical considerations.

  3. Aperture-free star formation rate of SDSS star-forming galaxies

    NASA Astrophysics Data System (ADS)

    Duarte Puertas, S.; Vilchez, J. M.; Iglesias-Páramo, J.; Kehrig, C.; Pérez-Montero, E.; Rosales-Ortega, F. F.

    2017-03-01

    Large area surveys with a high number of galaxies observed have undoubtedly marked a milestone in the understanding of several properties of galaxies, such as star-formation history, morphology, and metallicity. However, in many cases, these surveys provide fluxes from fixed small apertures (e.g. fibre), which cover a scant fraction of the galaxy, compelling us to use aperture corrections to study the global properties of galaxies. In this work, we derive the current total star formation rate (SFR) of Sloan Digital Sky Survey (SDSS) star-forming galaxies, using an empirically based aperture correction of the measured Hα flux for the first time, thus minimising the uncertainties associated with reduced apertures. All the Hα fluxes have been extinction-corrected using the Hα/ Hβ ratio free from aperture effects. The total SFR for 210 000 SDSS star-forming galaxies has been derived applying pure empirical Hα and Hα/ Hβ aperture corrections based on the Calar Alto Legacy Integral Field Area (CALIFA) survey. We find that, on average, the aperture-corrected SFR is 0.65 dex higher than the SDSS fibre-based SFR. The relation between the SFR and stellar mass for SDSS star-forming galaxies (SFR-M⋆) has been obtained, together with its dependence on extinction and Hα equivalent width. We compare our results with those obtained in previous works and examine the behaviour of the derived SFR in six redshift bins, over the redshift range 0.005 ≤ z ≤ 0.22. The SFR-M⋆ sequence derived here is in agreement with selected observational studies based on integral field spectroscopy of individual galaxies as well as with the predictions of recent theoretical models of disc galaxies. A table of the aperture-corrected fluxes and SFR for 210 000 SDSS star-forming galaxies and related relevant data is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/599/A71
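
    The sketch below shows the generic chain of corrections involved: a Balmer-decrement extinction correction, an aperture factor, and a Kennicutt (1998) Hα calibration. The single multiplicative aperture factor of 0.65 dex is only a crude stand-in for the paper's empirical, galaxy-dependent CALIFA-based corrections, and the extinction coefficients assume a standard Cardelli-type curve; all fluxes and the distance are illustrative.

```python
import numpy as np

def halpha_extinction_correction(f_ha, f_hb, k_ha=2.53, k_hb=3.61, intrinsic_ratio=2.86):
    """
    Correct an Halpha flux for dust using the Balmer decrement
    (Cardelli-type extinction coefficients; Case B intrinsic ratio 2.86).
    """
    ebv = 2.5 / (k_hb - k_ha) * np.log10((f_ha / f_hb) / intrinsic_ratio)
    ebv = max(ebv, 0.0)                      # no negative extinction
    return f_ha * 10 ** (0.4 * k_ha * ebv)

def sfr_from_halpha(f_ha_corr, distance_mpc):
    """Kennicutt (1998) calibration: SFR [Msun/yr] = 7.9e-42 * L(Halpha) [erg/s]."""
    d_cm = distance_mpc * 3.086e24
    luminosity = 4.0 * np.pi * d_cm ** 2 * f_ha_corr
    return 7.9e-42 * luminosity

# illustrative fibre fluxes in erg/s/cm^2, then a mean 0.65 dex aperture factor
f_ha_fib, f_hb_fib = 1.2e-14, 3.5e-15
aperture_factor = 10 ** 0.65
f_ha_total = halpha_extinction_correction(f_ha_fib, f_hb_fib) * aperture_factor
print(sfr_from_halpha(f_ha_total, distance_mpc=150.0), "Msun/yr")
```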

  4. Efficacy of a Computer-Based Program on Acquisition of Reading Skills of Incarcerated Youth

    ERIC Educational Resources Information Center

    Shippen, Margaret E.; Morton, Rhonda Collins; Flynt, Samuel W.; Houchins, David E.; Smitherman, Tracy

    2012-01-01

    Despite the importance of literacy skill training for incarcerated youth, a very limited number of empirically based research studies have examined reading instruction in correctional facilities. The purpose of this study was to determine whether the Fast ForWord computer-assisted reading program improved the reading and spelling abilities of…

  5. Comparing State- Versus Facility-Level Effects on Crowding in U.S. Correctional Facilities

    ERIC Educational Resources Information Center

    Steiner, Benjamin; Wooldredge, John

    2008-01-01

    The literature on prison crowding underscores the potential importance of both state- and facility-level effects on crowding, although empirical research has not assessed these relative effects because of the sole focus on states as units of analysis. This article describes findings from bi-level analyses of crowding across 459 state-operated…

  6. Focus on Form and Corrective Feedback Research at the University of Victoria, Canada

    ERIC Educational Resources Information Center

    Chen, Sibo; Nassaji, Hossein

    2018-01-01

    The Department of Linguistics at University of Victoria (UVic) in Canada has a long-standing tradition of empirical approaches to the study of theoretical and applied linguistics. As part of the Faculty of Humanities, the department caters to students with a wide range of backgrounds and interests, and provides crucial language teaching support in…

  7. Chlorophyll-a concentration estimation with three bio-optical algorithms: correction for the low concentration range for the Yiam Reservoir, Korea

    USDA-ARS?s Scientific Manuscript database

    Bio-optical algorithms have been applied to monitor water quality in surface water systems. Empirical algorithms, such as Ritchie (2008), Gons (2008), and Gilerson (2010), have been applied to estimate the chlorophyll-a (chl-a) concentrations. However, the performance of each algorithm severely degr...

  8. Comments on John Willinsky's Learning to Divide the World: Education at Empire's End

    ERIC Educational Resources Information Center

    Wang, Tsung Juang

    2006-01-01

    John Willinsky's view that imperialism and its legacy remain the driving force that divides the world into "superior" and "inferior" cultures fails to take into account other forces that also encourage peoples of different cultures to emphasize the differences between themselves. He is correct in noting that imperialism led to much injustice and…

  9. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  10. Effect of shell corrections on the beta decay isobaric mass parabolas

    NASA Astrophysics Data System (ADS)

    Kaur, Sarbjeet; Kaur, Manpreet; Singh, Bir Bikram

    2018-05-01

    The beta decay isobaric mass parabolas have been studied for isobaric families in different mass regions. The mass parabolas have been studied using the semi-empirical mass formula of Seeger to find the most stable isobar for a particular isobaric family. In addition to the liquid-drop part VLDM, the shell-correction part δU, defined within the Strutinsky renormalization procedure, has been used, giving the binding energy B.E. = VLDM + δU. To elucidate the role of shell effects on the shape of the mass parabola, we have compared the δU = 0 and δU ≠ 0 cases. For a particular mass value of an isobaric family, the results show that with the inclusion of shell corrections, i.e. δU ≠ 0, the minimum for the most stable isobar is more strongly pronounced than in the case without shell corrections. In other words, shell corrections significantly enhance the stability of the stable isobar. The study reveals that the role of shell effects on the mass minima is more pronounced in the heavy mass region than in the light and intermediate mass regions.
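
    The sketch below reproduces the liquid-drop (δU = 0) isobaric mass parabola using standard Bethe-Weizsaecker coefficients rather than Seeger's formula, which is enough to locate the parabola minimum; the Strutinsky shell correction discussed in the abstract is not included.

```python
import numpy as np

M_H, M_N = 938.7830, 939.5654        # atomic hydrogen and neutron masses, MeV/c^2

def binding_energy_ldm(Z, A, a_v=15.75, a_s=17.8, a_c=0.711, a_a=23.7, a_p=11.18):
    """Liquid-drop (Bethe-Weizsaecker) binding energy in MeV, standard coefficients."""
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        delta = a_p / np.sqrt(A)     # even-even
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -a_p / np.sqrt(A)    # odd-odd
    else:
        delta = 0.0                  # odd-A
    return (a_v * A - a_s * A ** (2 / 3)
            - a_c * Z * (Z - 1) / A ** (1 / 3)
            - a_a * (A - 2 * Z) ** 2 / A + delta)

def atomic_mass(Z, A):
    """Atomic mass in MeV; the isobaric 'mass parabola' is this quantity versus Z at fixed A."""
    return Z * M_H + (A - Z) * M_N - binding_energy_ldm(Z, A)

def most_stable_isobar(A):
    """Z minimising the mass along the isobaric chain (no shell correction, delta_U = 0)."""
    return min(range(1, A), key=lambda Z: atomic_mass(Z, A))

for A in (101, 135, 203):
    print(A, most_stable_isobar(A))
```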

  11. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    PubMed

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.

  12. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    PubMed Central

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351

  13. Efficient management of marine resources in conflict: an empirical study of marine sand mining, Korea.

    PubMed

    Kim, Tae-Goun

    2009-10-01

    This article develops a dynamic model of efficient use of exhaustible marine sand resources in the context of marine mining externalities. The classical Hotelling extraction model is applied to sand mining in Ongjin, Korea and extended to include the estimated marginal external costs that mining imposes on marine fisheries. The socially efficient sand extraction plan is compared with the extraction paths suggested by scientific research. If marginal environmental costs are correctly estimated, the developed efficient extraction plan considering the resource rent may increase the social welfare and reduce the conflicts among the marine sand resource users. The empirical results are interpreted with an emphasis on guidelines for coastal resource management policy.

  14. Reconstruction of an infrared band of meteorological satellite imagery with abductive networks

    NASA Technical Reports Server (NTRS)

    Singer, Harvey A.; Cockayne, John E.; Versteegen, Peter L.

    1995-01-01

    As the current fleet of meteorological satellites ages, the accuracy of the imagery sensed on a spectral channel of the image scanning system is continually and progressively degraded by noise. In time, the data may even become unusable. We describe a novel approach to the reconstruction of the noisy satellite imagery according to empirical functional relationships that tie the spectral channels together. Abductive networks are applied to automatically learn the empirical functional relationships between the data sensed on the other spectral channels and the data that should have been sensed on the corrupted channel, so that the latter can be calculated. Using imagery unaffected by noise, it is demonstrated that abductive networks correctly predict the noise-free observed data.

  15. Combining DSMC Simulations and ROSINA/COPS Data of Comet 67P/Churyumov-Gerasimenko to Develop a Realistic Empirical Coma Model and to Determine Accurate Production Rates

    NASA Astrophysics Data System (ADS)

    Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.

    2015-12-01

    We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. The model is a significant improvement over more simple empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean state, empirical model is the ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the ROSETTA spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
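
    For orientation, the sketch below shows the simplest spherically symmetric (Haser-type, destruction-free) coma relation n(r) = Q/(4πvr²) and its inversion from an in situ density to a production rate. The gas speed and the measured density are placeholders, and the paper's empirical model adds dependence on heliocentric distance, local time and declination.

```python
import numpy as np

def coma_density(r_m, Q_s, v_ms=700.0):
    """
    Spherically symmetric, free-expansion coma density n(r) = Q / (4 pi v r^2),
    a Haser-type model without photodestruction.
    """
    return Q_s / (4.0 * np.pi * v_ms * r_m ** 2)

def production_rate_from_density(n_m3, r_m, v_ms=700.0):
    """Invert the same relation to estimate Q from an in situ COPS-style density."""
    return n_m3 * 4.0 * np.pi * v_ms * r_m ** 2

# e.g. a number density of 1e13 m^-3 measured 30 km from the nucleus
print(f"Q ~ {production_rate_from_density(1e13, 30e3):.2e} molecules/s")
```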

  16. [Is a bioethics based on experimental evidence possible?].

    PubMed

    Pastor, Luis Miguel

    2013-01-01

    For years there have been various criticisms of principlist bioethics. One alternative that has been proposed is to introduce empirical evidence into the bioethical discourse to make it less formal, less theoretical and closer to reality. In this paper we first analyze, in summary form, diverse alternative proposals for an empirical bioethics. Some of them are strongly naturalistic, while others aim to provide empirical data only to correct or improve bioethical work. Most of them do not favor maintaining a complete separation between facts and values, between what is and what ought to be. With different nuances, these proposals of moderate naturalism make ethical judgments depend on normative social opinion, resulting in a certain social naturalism. Against these proposals, we argue that to develop a bioethics that relates empirical facts to ethical duties, we must rediscover the empirical reality of human action. Only from it, and in particular from the discernment exercised by practical reason when it judges the object of an action, is it possible to integrate merely descriptive facts with prescriptive ethical judgments. In conclusion, we think that it is not possible to practice bioethics as a form of empirical science, as this would be contrary to natural reason and lead to a sort of scientific reductionism. At the same time, we believe that empirical data are important in the development of bioethics and in enhancing and improving the innate ability of human reason to discern the good. From this discernment, a bioethics could be developed from the perspective of the ethical agents themselves, avoiding the extremes of an excessive normative rationalism while accepting empirical data and not falling into simple pragmatism.

  17. The Interface between Research on Individual Difference Variables and Teaching Practice: The Case of Cognitive Factors and Personality

    ERIC Educational Resources Information Center

    Biedron, Adriana; Pawlak, Miroslaw

    2016-01-01

    While a substantial body of empirical evidence has been accrued about the role of individual differences in second language acquisition, relatively little is still known about how factors of this kind can mediate the effects of instructional practices as well as how empirically-derived insights can inform foreign language pedagogy, both with…

  18. Careers in a Non-Western Context: An Exploratory Empirical Investigation of Factors Related to the Career Success of Chinese Managers

    ERIC Educational Resources Information Center

    Tu, Howard S.; Forret, Monica L.; Sullivan, Sherry E.

    2006-01-01

    Purpose: The aim of this paper is to conduct an exploratory empirical examination to determine if factors (e.g. demographic, human capital, motivational, and organizational) associated with career success in Western countries are also related to the career outcomes of Chinese managers. Design/methodology/approach: Survey data were obtained from…

  19. Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy

    2016-01-01

    Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…

  20. Certified dual-corrected radiation patterns of phased antenna arrays by offline–online order reduction of finite-element models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sommer, A., E-mail: a.sommer@lte.uni-saarland.de; Farle, O., E-mail: o.farle@lte.uni-saarland.de; Dyczij-Edlinger, R., E-mail: edlinger@lte.uni-saarland.de

    2015-10-15

    This paper presents a fast numerical method for computing certified far-field patterns of phased antenna arrays over broad frequency bands as well as wide ranges of steering and look angles. The proposed scheme combines finite-element analysis, dual-corrected model-order reduction, and empirical interpolation. To assure the reliability of the results, improved a posteriori error bounds for the radiated power and directive gain are derived. Both the reduced-order model and the error-bounds algorithm feature offline–online decomposition. A real-world example is provided to demonstrate the efficiency and accuracy of the suggested approach.

  1. Absolute distance measurement with correction of air refractive index by using two-color dispersive interferometry.

    PubMed

    Wu, Hanzhong; Zhang, Fumin; Liu, Tingyang; Li, Jianshuang; Qu, Xinghua

    2016-10-17

    Two-color interferometry is a powerful technique for correcting the air refractive index, especially over long distances in turbulent air, since the empirical equations can introduce considerable measurement uncertainty if the environmental parameters cannot be measured with sufficient precision. In this paper, we demonstrate a method for absolute distance measurement with high-accuracy correction of the air refractive index using two-color dispersive interferometry. The distances corresponding to the two wavelengths can be measured via the spectrograms captured by a CCD camera pair in real time. In the long-term experiment on the correction of the air refractive index, the results show a standard deviation of 3.3 × 10⁻⁸ for 12 h of continuous measurement without precise knowledge of the environmental conditions, while the variation of the air refractive index is about 2 × 10⁻⁶. In the case of absolute distance measurement, the comparison with a fringe-counting interferometer shows agreement within 2.5 μm over a 12 m range.
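    As a point of reference, a minimal sketch of the classic two-color refractivity correction is shown below; it illustrates the general idea behind such methods and is not claimed to be the exact processing of this paper. With optical distances D1 and D2 measured at the two wavelengths, the geometric distance follows from D = D1 - A·(D2 - D1), where the coefficient A = (n1 - 1)/(n2 - n1) depends almost entirely on the wavelengths rather than on the environmental conditions. The A value and distances below are placeholders.

        # classic two-color distance correction (placeholder values)
        def two_color_distance(d1, d2, A):
            # d1, d2: optical distances at the two colors; A: dispersion coefficient
            return d1 - A * (d2 - d1)

        # example: ~12 m path, a few tens of micrometres of dispersion between colors
        print(two_color_distance(12.003240, 12.003300, A=54.0))   # -> ~12.000000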

  2. The polarized electron beam at ELSA

    NASA Astrophysics Data System (ADS)

    Hoffmann, M.; Drachenfels, W. V.; Frommberger, F.; Gowin, M.; Helbing, K.; Hillert, W.; Husmann, D.; Keil, J.; Michel, T.; Naumann, J.; Speckner, T.; Zeitler, G.

    2001-06-01

    The future medium-energy physics program at the electron stretcher accelerator ELSA of Bonn University mainly relies on experiments using polarized electrons in the energy range from 1 to 3.2 GeV. To provide a polarized beam with high polarization and sufficient intensity, a dedicated source has been developed and put into operation. To prevent depolarization during acceleration in the circular accelerators, several depolarizing resonances have to be corrected for. Intrinsic resonances are compensated using two pulsed betatron tune jump quadrupoles. The influence of imperfection resonances is successfully reduced by applying a dynamic closed orbit correction in combination with an empirical harmonic correction on the energy ramp. In order to minimize beam depolarization, both types of resonances and the correction techniques have been studied in detail. It turned out that the polarization in ELSA can be conserved up to 2.5 GeV, and partially up to 3.2 GeV, as demonstrated by measurements using a Møller polarimeter installed in the external GDH1 beamline.

  3. In search of a corrected prescription drug elasticity estimate: a meta-regression approach.

    PubMed

    Gemmill, Marin C; Costa-Font, Joan; McGuire, Alistair

    2007-06-01

    An understanding of the relationship between cost sharing and drug consumption depends on consistent and unbiased price elasticity estimates. However, there is wide heterogeneity among studies, which constrains the applicability of elasticity estimates for empirical purposes and policy simulation. This paper attempts to provide a corrected measure of the drug price elasticity by employing meta-regression analysis (MRA). The results indicate that the elasticity estimates are significantly different from zero, and the corrected elasticity is -0.209 when the results are made robust to heteroskedasticity and clustering of observations. Elasticity values are higher when the study was published in an economic journal, when the study employed a greater number of observations, and when the study used aggregate data. Elasticity estimates are lower when the institutional setting was a tax-based health insurance system.

  4. Remote Sensing of Vineyard FPAR, with Implications for Irrigation Scheduling

    NASA Technical Reports Server (NTRS)

    Johnson, Lee F.; Scholasch, Thibaut

    2004-01-01

    Normalized difference vegetation index (NDVI) data, acquired at two-meter resolution by an airborne ADAR System 5500, were compared with the fraction of photosynthetically active radiation (FPAR) absorbed by commercial vineyards in Napa Valley, California. An empirical line correction was used to transform image digital counts to surface reflectance. "Apparent" NDVI (generated from digital counts) and "corrected" NDVI (from reflectance) were both strongly related to FPAR over the range 0.14-0.50 (both r² = 0.97, P < 0.01). By suppressing noise, corrected NDVI should form a more spatially and temporally stable relationship with FPAR, reducing the need for repeated field support. Study results suggest the possibility of using optical remote sensing to monitor the transpiration crop coefficient, thus providing an enhanced spatial resolution component to crop water budget calculations and irrigation management.

  5. An alternative method for centrifugal compressor loading factor modelling

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    In classical design methods, the loading factor at the design point is calculated by one or another empirical formula; performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit is linear, independent of compressibility. The well-known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients, and a good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed defining the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. The equations are proposed with universal empirical coefficients. The calculation error lies within ±1.5%. The alternative loading factor model is included in new versions of the Universal Modelling Method.
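    A minimal sketch of the linear loading-factor form described above follows: two anchor points, the design point and the zero-flow value, fix the whole line of loading factor versus impeller-exit flow coefficient. The symbol names and numbers are illustrative and are not the coefficients of the Universal Modelling Method.

        # linear loading-factor model defined by two points (illustrative)
        def loading_factor(phi, phi_des, psi_des, psi_0):
            # psi_0: loading factor at zero flow; (phi_des, psi_des): design point
            slope = (psi_des - psi_0) / phi_des      # inclination of the line
            return psi_0 + slope * phi

        # example: design point (phi = 0.30, psi = 0.55), zero-flow loading factor 0.75
        print(loading_factor(0.25, 0.30, 0.55, 0.75))   # -> 0.583...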

  6. Why Psychology Cannot be an Empirical Science.

    PubMed

    Smedslund, Jan

    2016-06-01

    The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.

  7. Empirical factors and structure transference: Returning to the London account

    NASA Astrophysics Data System (ADS)

    Bueno, Otávio; French, Steven; Ladyman, James

    2012-05-01

    We offer a framework to represent the roles of empirical and theoretical factors in theory construction, and examine a case study to illustrate how the framework can be used to illuminate central features of scientific reasoning. The case study provides an extension of French and Ladyman's (1997) analysis of Fritz and Heinz London's model of superconductivity to accommodate the role of the analogy between superconductivity and diamagnetic phenomena in the development of the model between 1935 and 1937. We focus on this case since it allows us to separate the roles of empirical and theoretical factors, and so provides an example of the utility of the approach that we have adopted. We conclude the paper by drawing on the particular framework here developed to address a range of concerns.

  8. The Time Series Technique for Aerosol Retrievals over Land from MODIS: Algorithm MAIAC

    NASA Technical Reports Server (NTRS)

    Lyapustin, Alexei; Wang, Yujie

    2008-01-01

    Atmospheric aerosols interact with sunlight by scattering and absorbing radiation. By changing the irradiance of the Earth's surface, modifying cloud fractional cover and microphysical properties, and a number of other mechanisms, they affect the energy balance, hydrological cycle, and planetary climate [IPCC, 2007]. In many world regions there is a growing impact of aerosols on air quality and human health. The Earth Observing System [NASA, 1999] initiated high-quality global Earth observations and operational aerosol retrievals over land. With the wide swath (2300 km) of the MODIS instrument, the MODIS Dark Target algorithm [Kaufman et al., 1997; Remer et al., 2005; Levy et al., 2007], currently complemented with the Deep Blue method [Hsu et al., 2004], provides a daily global view of planetary atmospheric aerosol. The MISR algorithm [Martonchik et al., 1998; Diner et al., 2005] makes high-quality aerosol retrievals in 300 km swaths covering the globe in 8 days. While the MODIS aerosol program has been very successful, there are still several unresolved issues in the retrieval algorithms. The current processing is pixel-based and relies on single-orbit data. Such an approach produces a single measurement for every pixel characterized by two main unknowns, aerosol optical thickness (AOT) and surface reflectance (SR). This lack of information constitutes a fundamental problem of remote sensing which cannot be resolved without a priori information. For example, the MODIS Dark Target algorithm makes spectral assumptions about surface reflectance, whereas the Deep Blue method uses an ancillary global database of surface reflectance composed from minimum monthly measurements with Rayleigh correction. Both algorithms use a Lambertian surface model. The surface-related assumptions in the aerosol retrievals may affect the subsequent atmospheric correction in unintended ways. For example, the Dark Target algorithm uses an empirical relationship to predict SR in the Blue (B3) and Red (B1) bands from the 2.1 μm channel (B7) for the purpose of aerosol retrieval. Obviously, the subsequent atmospheric correction will produce the same SR in the red and blue bands as predicted, i.e., an empirical function of the 2.1 μm reflectance. In other words, the spectral, spatial, and temporal variability of surface reflectance in the Blue and Red bands appears borrowed from band B7. This may have certain implications for vegetation and global carbon analysis because the chlorophyll-sensing bands B1 and B3 are effectively substituted in terms of variability by band B7, which is sensitive to plant liquid water. This chapter describes a new, recently developed generic aerosol-surface retrieval algorithm for MODIS. The Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm simultaneously retrieves AOT and the surface bidirectional reflectance factor (BRF) using the time series of MODIS measurements.
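    For illustration only, the sketch below shows the kind of empirical band relationship the Dark Target approach relies on: visible surface reflectance predicted from the 2.1 μm (band 7) reflectance. The 0.50 and 0.25 ratios are the classic textbook values used here as placeholders; the operational MODIS algorithm uses refined, vegetation-dependent relations.

        # Dark Target style spectral relationship (placeholder coefficients)
        def predict_visible_sr(rho_2p1):
            rho_red  = 0.50 * rho_2p1   # band 1 (0.66 um) estimate
            rho_blue = 0.25 * rho_2p1   # band 3 (0.47 um) estimate
            return rho_red, rho_blue

        print(predict_visible_sr(0.12))   # -> (0.06, 0.03)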

  9. SU-F-T-23: Correspondence Factor Correction Coefficient for Commissioning of Leipzig and Valencia Applicators with the Standard Imaging IVB 1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaghue, J; Gajdos, S

    Purpose: To determine the correction factor of the correspondence factor for the Standard Imaging IVB 1000 well chamber for commissioning of Elekta's Leipzig and Valencia skin applicators. Methods: The Leipzig and Valencia applicators are designed to treat small skin lesions by collimating irradiation to the treatment area. Published output factors are used to calculate dose rates for clinical treatments. To validate onsite applicators, a correspondence factor (CFrev) is measured and compared to published values. The published CFrev is based on well chamber model SI HDR 1000 Plus. The CFrev is determined by correlating raw values of the source calibration setup (Rcal,raw) and values taken when each applicator is mounted on the same well chamber with an adapter (Rapp,raw). The CFrev is calculated by using the equation CFrev = Rapp,raw/Rcal,raw. The CFrev was measured for each applicator in both the SI HDR 1000 Plus and the SI IVB 1000. A correction factor, CFIVB, for the SI IVB 1000 was determined by finding the ratio of CFrev (SI IVB 1000) and CFrev (SI HDR 1000 Plus). Results: The average correction factors at dwell position 1121 were found to be 1.073, 1.039, 1.209, 1.091, and 1.058 for the Valencia V2, Valencia V3, Leipzig H1, Leipzig H2, and Leipzig H3, respectively. There were no significant variations in the correction factor for dwell positions 1119 through 1121. Conclusion: By using the appropriate correction factor, the correspondence factors for the Leipzig and Valencia surface applicators can be validated with the Standard Imaging IVB 1000. This allows users to correlate their measurements with the Standard Imaging IVB 1000 to the published data. The correction factor is included in the equation for the CFrev as follows: CFrev = Rapp,raw/(CFIVB × Rcal,raw). Each individual applicator has its own correction factor, so care must be taken that the appropriate factor is used.

  10. A novel method for structure-based prediction of ion channel conductance properties.

    PubMed Central

    Smart, O S; Breed, J; Smith, G R; Sansom, M S

    1997-01-01

    A rapid and easy-to-use method of predicting the conductance of an ion channel from its three-dimensional structure is presented. The method combines the pore dimensions of the channel, as measured in the HOLE program, with an Ohmic model of conductance. An empirically based correction factor is then applied. The method yielded good results for six experimental channel structures (none of which were included in the training set), with predictions accurate on average to within a factor of 1.62 of the true values. The predictive r² was equal to 0.90, which is indicative of a good predictive ability. The procedure is used to validate model structures of alamethicin and phospholamban. Two genuine predictions for the conductance of channels with known structure but without reported conductances are given. A modification of the procedure that calculates the expected effect of the addition of nonelectrolyte polymers on conductance is set out. Results for a cholera toxin B-subunit crystal structure agree well with the measured values. The difficulty in interpreting such studies is discussed, with the conclusion that measurements on channels of known structure are required. PMID:9138559
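    A minimal sketch of the Ohmic part of such a prediction is given below, assuming a HOLE-style pore radius profile sampled at equal steps: the pore is treated as a stack of cylindrical slabs whose resistances add in series, and an empirical correction factor is applied to the resulting conductance. The bulk conductivity, profile, and correction value are placeholders, not those of the cited paper.

        # Ohmic conductance from a pore radius profile (placeholder values)
        import math

        def ohmic_conductance(radii_nm, dz_nm, kappa_S_per_m, correction=1.0):
            # series resistance of cylindrical slabs: R = sum dz / (kappa * pi * r^2)
            dz = dz_nm * 1e-9
            resistance = sum(dz / (kappa_S_per_m * math.pi * (r * 1e-9) ** 2)
                             for r in radii_nm)
            return correction / resistance          # conductance in siemens

        # toy profile: 4 nm long pore narrowing from 0.5 nm to 0.3 nm radius
        profile = [0.5, 0.45, 0.4, 0.35, 0.3, 0.35, 0.4, 0.45]
        print(ohmic_conductance(profile, dz_nm=0.5, kappa_S_per_m=1.5))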

  11. Multi-step-ahead crude oil price forecasting using a hybrid grey wave model

    NASA Astrophysics Data System (ADS)

    Chen, Yanhui; Zhang, Chuan; He, Kaijian; Zheng, Aibing

    2018-07-01

    Crude oil is crucial to the operation and economic well-being of modern society, and large swings in the crude oil price cause panic in the global economy. Many factors influence the crude oil price, and its prediction remains a difficult research problem widely discussed among researchers. Based on research on the Heterogeneous Market Hypothesis and the relationships between the crude oil price and macroeconomic factors, the exchange market, and the stock market, this paper proposes a hybrid grey wave forecasting model, combined with Random Walk (RW)/ARMA, to forecast multi-step-ahead crude oil prices. More specifically, we use the grey wave forecasting model to capture the periodical characteristics of the crude oil price and ARMA/RW to simulate the daily random movements. The innovation also comes from using the information in the time series graph to forecast the crude oil price, since grey wave forecasting is a graphical prediction method. The empirical results demonstrate that, based on daily crude oil price data, the hybrid grey wave forecasting model performs well in 15- to 20-step-ahead prediction and consistently dominates ARMA and Random Walk in correct direction prediction.

  12. 75 FR 5536 - Pipeline Safety: Control Room Management/Human Factors, Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts...: Control Room Management/Human Factors, Correction AGENCY: Pipeline and Hazardous Materials Safety... following correcting amendments: PART 192--TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM...

  13. Evaluation of factors affecting accurate measurements of atmospheric CO2 and CH4 by wavelength-scanned cavity ring-down spectroscopy

    NASA Astrophysics Data System (ADS)

    Nara, H.; Tanimoto, H.; Tohjima, Y.; Mukai, H.; Nojiri, Y.; Katsumata, K.; Rella, C.

    2012-07-01

    We examined potential interferences from water vapor and atmospheric background gases (N2, O2, and Ar), and biases by isotopologues of target species, on accurate measurement of atmospheric CO2 and CH4 by means of wavelength-scanned cavity ring-down spectroscopy (WS-CRDS). Variations in the composition of the background gas substantially impacted the CO2 and CH4 measurements: the measured amounts of CO2 and CH4 decreased with increasing N2 mole fraction, but increased with increasing O2 and Ar, suggesting that the pressure-broadening effects (PBEs) increased as Ar < O2 < N2. Using these experimental results, we inferred PBEs for the measurement of synthetic standard gases. The PBEs were negligible (up to 0.05 ppm for CO2 and 0.01 ppb for CH4) for gas standards balanced with purified air, although the PBEs were substantial (up to 0.87 ppm for CO2 and 1.4 ppb for CH4) for standards balanced with synthetic air. For isotopic biases on CO2 measurements, we compared experimental results and theoretical calculations, which showed excellent agreement within their uncertainty. We derived empirical correction functions for water vapor for three WS-CRDS instruments (Picarro EnviroSense 3000i, G-1301, and G-2301). Although the transferability of the functions was not clear, no significant difference was found in the water vapor correction values among these instruments within the typical analytical precision at sufficiently low water concentrations (< 0.3%V for CO2 and < 0.4%V for CH4). For accurate measurements of CO2 and CH4 in ambient air, we concluded that WS-CRDS measurements should be performed under complete dehumidification of air samples, or moderate dehumidification followed by application of a water vapor correction function, along with calibration by natural air-based standard gases or purified air-balanced synthetic standard gases with isotopic correction.
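    A minimal sketch of the quadratic water-vapour correction form commonly used for such analysers is shown below: x_dry = x_wet / (1 + a·H + b·H²), with H the reported water mixing ratio in %v. The coefficients a and b are placeholders; instrument-specific values have to be derived empirically, as the study describes.

        # generic quadratic water-vapour correction (placeholder coefficients)
        def dry_mole_fraction(x_wet, h2o_percent, a=-0.012, b=-2.7e-4):
            return x_wet / (1.0 + a * h2o_percent + b * h2o_percent ** 2)

        # example: 400 ppm CO2 reported at 1.5 %v water vapour
        print(dry_mole_fraction(400.0, 1.5))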

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C

    Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2% ± 1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.

  15. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the reactivity measured at different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode differs slightly from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, which are corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections, which are not simulated in the calculational model; and (3) quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues, focusing on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel enrichments in the fast zone: high (90%) and medium (36%), medium (36%) only, or low (21%) enriched uranium fuel.

  16. You Learn by Your Mistakes. Effective Training Strategies Based on the Analysis of Video-Recorded Worked-Out Examples

    ERIC Educational Resources Information Center

    Cattaneo, Alberto A. P.; Boldrini, Elena

    2017-01-01

    This paper presents an empirical study on procedural learning from errors that was conducted within the field of vocational education. It examines whether, and to what extent, procedural learning can benefit more from the detection and written analysis of errors (experimental condition) than from the correct elements (control group). The study…

  17. Are Replication Studies Possible in Qualitative Second/Foreign Language Classroom Research? A Call for Comparative Re-Production Research

    ERIC Educational Resources Information Center

    Markee, Numa

    2017-01-01

    A widely accepted orthodoxy is that it is impossible to do replication studies within qualitative research paradigms. Ontologically and epistemologically speaking, such a view is largely correct. However, in this paper, I propose that what I call comparative re-production research--that is, the empirical study of qualitative phenomena that occur…

  18. Equine-Facilitated Prison-Based Programs within the Context of Prison-Based Animal Programs: State of the Science Review

    ERIC Educational Resources Information Center

    Bachi, Keren

    2013-01-01

    Equine-facilitated prison programs have become more prevalent and operate in correctional facilities in 13 states throughout the United States. However, there is a deficit of empirical knowledge to guide them. This article reviews 19 studies of prison-based animal programs and centers on patterns in the literature. It reveals how previous studies…

  19. Should social psychologists create a disciplinary affirmative action program for political conservatives?

    PubMed

    Shweder, Richard A

    2015-01-01

    Freely staying on the move between alternative points of view is the best antidote to dogmatism. Robert Merton's ideals for an epistemic community are sufficient to correct pseudo-empirical studies designed to confirm beliefs that liberals (or conservatives) think deserve to be true. Institutionalizing the self-proclaimed political identities of social psychologists may make things worse.

  20. A Systematic Assessment of "None of the Above" on Multiple Choice Tests in a First Year Psychology Classroom

    ERIC Educational Resources Information Center

    Pachai, Matthew V.; DiBattista, David; Kim, Joseph A.

    2015-01-01

    Multiple choice writing guidelines are decidedly split on the use of "none of the above" (NOTA), with some authors discouraging and others advocating its use. Moreover, empirical studies of NOTA have produced mixed results. Generally, these studies have utilized NOTA as either the correct response or a distractor and assessed its effect…

  1. Cooperative Factors, Cooperative Innovation Effect and Innovation Performance for Chinese Firms: an Empirical Study

    NASA Astrophysics Data System (ADS)

    Xie, Xuemei

    Based on a survey of 1206 Chinese firms, this paper empirically explores the factors impacting the cooperative innovation effect of firms and seeks to explore the relationship between the cooperative innovation effect (CIE) and innovation performance using the technique of Structural Equation Modeling (SEM). The study finds significant positive relationships between basic sustaining factors, factors of government and policy, factors of cooperation mechanism and social network, and the cooperative innovation effect. However, the results reveal that factors of government and policy have little impact on the CIE of firms compared with the other factors. It is hoped that these findings can pave the way for future studies on improving cooperative innovation capacity for firms in emerging countries.

  2. Comparison of the lifting-line free vortex wake method and the blade-element-momentum theory regarding the simulated loads of multi-MW wind turbines

    NASA Astrophysics Data System (ADS)

    Hauptmann, S.; Bülk, M.; Schön, L.; Erbslöh, S.; Boorsma, K.; Grasso, F.; Kühn, M.; Cheng, P. W.

    2014-12-01

    Design load simulations for wind turbines are traditionally based on the blade-element-momentum theory (BEM). The BEM approach is derived from a simplified representation of the rotor aerodynamics and several semi-empirical correction models. A more sophisticated approach to account for the complex flow phenomena on wind turbine rotors can be found in the lifting-line free vortex wake method. This approach is based on a more physics-based representation, especially for global flow effects. This theory relies on empirical correction models only for the local flow effects, which are associated with the boundary layer of the rotor blades. In this paper the lifting-line free vortex wake method is compared to a state-of-the-art BEM formulation with regard to aerodynamic and aeroelastic load simulations of the 5 MW UpWind reference wind turbine. Different aerodynamic load situations as well as standardised design load cases that are sensitive to the aeroelastic modelling are evaluated in detail. This benchmark makes use of the AeroModule developed by ECN, which has been coupled to the multibody simulation code SIMPACK.
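    As an example of the kind of semi-empirical correction model that BEM formulations rely on, the sketch below implements the standard Prandtl tip-loss factor, F = (2/π)·arccos(exp(-f)) with f = B(R - r)/(2r·sin φ). This is the textbook form and is not tied to the specific BEM code used in the paper; the rotor values are illustrative.

        # Prandtl tip-loss factor, a typical semi-empirical BEM correction
        import math

        def prandtl_tip_loss(B, R, r, phi):
            # B: blade count, R: rotor radius, r: local radius, phi: inflow angle (rad)
            f = B * (R - r) / (2.0 * r * math.sin(phi))
            return (2.0 / math.pi) * math.acos(math.exp(-f))

        # example: 3 blades, 63 m rotor, station at r = 60 m, inflow angle 6 degrees
        print(prandtl_tip_loss(3, 63.0, 60.0, math.radians(6.0)))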

  3. An efficient empirical Bayes method for genomewide association studies.

    PubMed

    Wang, Q; Wei, J; Pan, Y; Xu, S

    2016-08-01

    Linear mixed model (LMM) is one of the most popular methods for genomewide association studies (GWAS). Numerous forms of LMM have been developed; however, there are two major issues in GWAS that have not been fully addressed before. The two issues are (i) the genomic background noise and (ii) low statistical power after Bonferroni correction. We proposed an empirical Bayes (EB) method by assigning each marker effect a normal prior distribution, resulting in shrinkage estimates of marker effects. We found that such a shrinkage approach can selectively shrink marker effects and reduce the noise level to zero for majority of non-associated markers. In the meantime, the EB method allows us to use an 'effective number of tests' to perform Bonferroni correction for multiple tests. Simulation studies for both human and pig data showed that EB method can significantly increase statistical power compared with the widely used exact GWAS methods, such as GEMMA and FaST-LMM-Select. Real data analyses in human breast cancer identified improved detection signals for markers previously known to be associated with breast cancer. We therefore believe that EB method is a valuable tool for identifying the genetic basis of complex traits. © 2015 Blackwell Verlag GmbH.
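    The shrinkage idea can be illustrated with the generic normal-normal posterior mean, in which each estimated marker effect is pulled toward zero according to the ratio of the prior variance to the total variance. The sketch below shows only this textbook form; it is not claimed to be the exact estimator of the cited method, and the values are illustrative.

        # generic normal-prior shrinkage of marker effects (illustrative)
        import numpy as np

        def shrink_effects(beta_hat, se, tau2):
            # posterior mean under beta ~ N(0, tau2): beta_hat * tau2 / (tau2 + se^2)
            beta_hat = np.asarray(beta_hat, dtype=float)
            var = np.asarray(se, dtype=float) ** 2
            return beta_hat * tau2 / (tau2 + var)

        beta_hat = [0.8, 0.05, -0.02, 0.4]      # least-squares marker effects
        se       = [0.1, 0.2, 0.2, 0.15]        # their standard errors
        print(shrink_effects(beta_hat, se, tau2=0.01))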

  4. Comparison of Surface Elevation Changes of the Greenland and Antarctic Ice Sheets from Radar and Laser Altimetry

    NASA Technical Reports Server (NTRS)

    Zwally, H. Jay; Brenner, Anita C.; Barbieri, Kristine; DiMarzio, John P.; Li, Jun; Robbins, John; Saba, Jack L.; Yi, Donghui

    2012-01-01

    A primary purpose of satellite altimeter measurements is determination of the mass balances of the Greenland and Antarctic ice sheets and their changes with time by measurement of changes in the surface elevations. Since the early 1990s, important measurements for this purpose have been made by radar altimeters on ERS-1 and 2, Envisat, and CryoSat and a laser altimeter on ICESat. One principal factor limiting direct comparisons between radar and laser measurements is the variable penetration depth of the radar signal and the corresponding location of the effective depth of the radar-measured elevation beneath the surface, in contrast to the laser-measured surface elevation. Although the radar penetration depth varies significantly both spatially and temporally, empirical corrections have been developed to account for this effect. Another limiting factor in direct comparisons is caused by differences in the size of the laser and radar footprints and their respective horizontal locations on the surface. Nevertheless, derived changes in elevation, dH/dt, and time series of elevation, H(t), have been shown to be comparable. For comparisons at different times, corrections for elevation changes caused by variations in the rate of firn compaction have also been developed. Comparisons between the H(t) and the average dH/dt at some specific locations, such as the Vostok region of East Antarctica, show good agreement among results from ERS-1 and 2, Envisat, and ICESat. However, Greenland maps of dH/dt from Envisat and ICESat for the same time period (2003-2008) show some areas of significant differences as well as areas of good agreement. Possible causes of residual differences are investigated and described.

  5. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989

  6. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without magnetic field effects taken into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analysis were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found that the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded that the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  7. Inference With Difference-in-Differences With a Small Number of Groups: A Review, Simulation Study, and Empirical Application Using SHARE Data.

    PubMed

    Rokicki, Slawa; Cohen, Jessica; Fink, Günther; Salomon, Joshua A; Landrum, Mary Beth

    2018-01-01

    Difference-in-differences (DID) estimation has become increasingly popular as an approach to evaluate the effect of a group-level policy on individual-level outcomes. Several statistical methodologies have been proposed to correct for the within-group correlation of model errors resulting from the clustering of data. Little is known about how well these corrections perform with the often small number of groups observed in health research using longitudinal data. First, we review the most commonly used modeling solutions in DID estimation for panel data, including generalized estimating equations (GEE), permutation tests, clustered standard errors (CSE), wild cluster bootstrapping, and aggregation. Second, we compare the empirical coverage rates and power of these methods using a Monte Carlo simulation study in scenarios in which we vary the degree of error correlation, the group size balance, and the proportion of treated groups. Third, we provide an empirical example using the Survey of Health, Ageing, and Retirement in Europe. When the number of groups is small, CSE are systematically biased downwards in scenarios when data are unbalanced or when there is a low proportion of treated groups. This can result in over-rejection of the null even when data are composed of up to 50 groups. Aggregation, permutation tests, bias-adjusted GEE, and wild cluster bootstrap produce coverage rates close to the nominal rate for almost all scenarios, though GEE may suffer from low power. In DID estimation with a small number of groups, analysis using aggregation, permutation tests, wild cluster bootstrap, or bias-adjusted GEE is recommended.
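    Of the remedies recommended above, aggregation is the simplest to sketch: collapse the individual-level panel to group-by-period means and run the difference-in-differences regression on the aggregated cells, which sidesteps the within-group correlation of individual-level errors. The sketch below uses a synthetic panel and ordinary least squares; variable names and data are illustrative and are not drawn from the SHARE application.

        # DID via aggregation to group x period means (synthetic example)
        import numpy as np
        import pandas as pd

        def did_aggregated(df):
            cells = (df.groupby(["group", "post"], as_index=False)
                       .agg(y=("y", "mean"), treated=("treated", "max")))
            X = np.column_stack([
                np.ones(len(cells)),
                cells["treated"],
                cells["post"],
                cells["treated"] * cells["post"],   # the DID interaction
            ])
            beta, *_ = np.linalg.lstsq(X, cells["y"].to_numpy(), rcond=None)
            return beta[3]                          # estimated treatment effect

        # toy panel: 6 groups, 2 periods, 30 individuals per cell, true effect 0.4
        rng = np.random.default_rng(0)
        rows = []
        for g in range(6):
            treated = int(g < 3)
            for post in (0, 1):
                y = rng.normal(1.0 + 0.5 * treated + 0.3 * post
                               + 0.4 * treated * post, 1.0, size=30)
                rows += [{"group": g, "treated": treated, "post": post, "y": v}
                         for v in y]
        print(did_aggregated(pd.DataFrame(rows)))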

  8. Dry Bias and Variability in Vaisala RS80-H Radiosondes: The ARM Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, David D.; Lesht, B. M.; Clough, Shepard A.

    2003-01-02

    Thousands of comparisons between total precipitable water vapor (PWV) obtained from radiosonde (Vaisala RS80-H) profiles and PWV retrieved from a collocated microwave radiometer (MWR) were made at the Atmospheric Radiation Measurement (ARM) Program's Southern Great Plains Cloud and Radiation Testbed (SGP/CART) site in northern Oklahoma from 1994 to 2000. These comparisons show that the RS80-H radiosonde has an approximate 5% dry bias compared to the MWR. This observation is consistent with interpretations of Vaisala RS80 radiosonde data obtained during the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA/COARE). In addition to the dry bias, analysis of the PWV comparisons as well as of data obtained from dual-sonde soundings done at the SGP show that the calibration of the radiosonde humidity measurements varies considerably both when the radiosondes come from different calibration batches and when the radiosondes come from the same calibration batch. This variability can result in peak-to-peak differences between radiosondes of greater than 25% in PWV. Because accurate representation of the vertical profile of water vapor is critical for ARM's science objectives, we have developed an empirical method for correcting the radiosonde humidity profiles that is based on a constant scaling factor. By using an independent set of observations and radiative transfer models to test the correction, we show that the constant humidity scaling method appears both to improve the accuracy and reduce the uncertainty of the radiosonde data. We also used the ARM data to examine a different, physically-based, correction scheme that was developed recently by scientists from Vaisala and the National Center for Atmospheric Research (NCAR). This scheme, which addresses the dry bias problem as well as other calibration-related problems with the RS80-H sensor, results in excellent agreement between the PWV retrieved from the MWR and integrated from the corrected radiosonde. However, because the physically-based correction scheme does not address the apparently random calibration variations we observe, it does not reduce the variability either between radiosonde calibration batches or within individual calibration batches.

  9. The perturbation correction factors for cylindrical ionization chambers in high-energy photon beams.

    PubMed

    Yoshiyama, Fumiaki; Araki, Fujio; Ono, Takeshi

    2010-07-01

    In this study, we calculated perturbation correction factors for cylindrical ionization chambers in high-energy photon beams using Monte Carlo simulations. We modeled four Farmer-type cylindrical chambers with the EGSnrc/Cavity code and calculated the cavity or electron fluence correction factor, Pcav, the displacement correction factor, Pdis, the wall correction factor, Pwall, the stem correction factor, Pstem, the central electrode correction factor, Pcel, and the overall perturbation correction factor, PQ. The calculated Pdis values for PTW30010/30013 chambers were 0.9967 ± 0.0017, 0.9983 ± 0.0019, and 0.9980 ± 0.0019 for 60Co, 4 MV, and 10 MV photon beams, respectively. The value for a 60Co beam was about 1.0% higher than the 0.988 value recommended by the IAEA TRS-398 protocol. The Pdis values showed a substantial discrepancy compared to those of IAEA TRS-398 and AAPM TG-51 at all photon energies. The Pwall values ranged from 0.9994 ± 0.0020 to 1.0031 ± 0.0020 for PTW30010 and from 0.9961 ± 0.0018 to 0.9991 ± 0.0017 for PTW30011/30012 over the range from 60Co to 10 MV. The Pwall values for PTW30011/30012 were around 0.3% lower than those of IAEA TRS-398. Also, the chamber response with and without a 1 mm PMMA water-proofing sleeve agreed within the combined uncertainty. The calculated Pstem values ranged from 0.9945 ± 0.0014 to 0.9965 ± 0.0014, but stem corrections are not considered in current dosimetry protocols; the values showed no significant dependence on beam quality. Pcel for a 1 mm aluminum electrode agreed within 0.3% with that of IAEA TRS-398. The overall perturbation factors agreed within 0.4% with those of IAEA TRS-398.
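    For readers unfamiliar with the notation, the individual factors combine multiplicatively into the overall perturbation correction, PQ = Pcav · Pdis · Pwall · Pstem · Pcel, under the usual convention. The sketch below only shows this combination; the example numbers are placeholders of plausible magnitude, not the Monte Carlo results of the paper.

        # multiplicative combination of chamber perturbation factors (placeholders)
        from math import prod

        def overall_perturbation(p_cav, p_dis, p_wall, p_stem, p_cel):
            return prod([p_cav, p_dis, p_wall, p_stem, p_cel])

        print(overall_perturbation(0.9990, 0.9967, 1.0003, 0.9955, 0.9970))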

  10. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…

  11. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin/M_Qmsr^fmsr with Monte Carlo (MC) based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes of 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators of LGK Perfexion, respectively. Finally, the PTW microDiamond M_Qclin^fclin/M_Qmsr^fmsr for the linear accelerator differed from MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with a 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
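    A minimal sketch of the Alfonso-style calculation described above follows: the detector correction factor is the reference (MC-based) field output factor divided by the detector-measured reading ratio, k = Ω_Qclin,Qmsr^fclin,fmsr / (M_Qclin^fclin / M_Qmsr^fmsr). The numbers are illustrative, not the published values.

        # small-field detector correction factor following the Alfonso formalism
        def alfonso_k(omega_ref, m_clin, m_msr):
            # omega_ref: reference field output factor; m_clin, m_msr: detector readings
            return omega_ref / (m_clin / m_msr)

        # example: reference output factor 0.850, detector ratio 0.872
        print(alfonso_k(0.850, m_clin=0.872, m_msr=1.000))   # -> ~0.975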

  12. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soh, R; Lee, J; Harianto, F

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermoscientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick slabs of Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate, as it is able to simulate the actual phantom material compositions. AXB has a limited material library; therefore it only approximates the composition of the TLD, composite cork, and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies, where perturbation may be more pronounced.

  13. Factors Associated With Early Loss of Hallux Valgus Correction.

    PubMed

    Shibuya, Naohiro; Kyprios, Evangelos M; Panchani, Prakash N; Martin, Lanster R; Thorud, Jakob C; Jupiter, Daniel C

    Recurrence is common after hallux valgus corrective surgery. Although many investigators have studied the risk factors associated with a suboptimal hallux position at the end of long-term follow-up, few have evaluated the factors associated with actual early loss of correction. We conducted a retrospective cohort study to identify the predictors of lateral deviation of the hallux during the postoperative period. We evaluated the demographic data, preoperative severity of the hallux valgus, other angular measurements characterizing underlying deformities, amount of hallux valgus correction, and postoperative alignment of the corrected hallux valgus for associations with recurrence. After adjusting for the covariates, the only factor associated with recurrence was the postoperative tibial sesamoid position. The recurrence rate was ~50% and ~60% when the postoperative tibial sesamoid position was >4 and >5 on the 7-point scale, respectively. Published by Elsevier Inc.

  14. Detection and localization of change points in temporal networks with the aid of stochastic block models

    NASA Astrophysics Data System (ADS)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

    A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, including empirical and synthetic ones. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and the recall of the results of the change points, we find that the method based on a degree-corrected SBM has better recall properties than other dedicated methods, especially for sparse networks and smaller sliding time window widths.

  15. Homogenized total ozone data records from the European sensors GOME/ERS-2, SCIAMACHY/Envisat, and GOME-2/MetOp-A

    NASA Astrophysics Data System (ADS)

    Lerot, C.; Van Roozendael, M.; Spurr, R.; Loyola, D.; Coldewey-Egbers, M.; Kochenova, S.; van Gent, J.; Koukouli, M.; Balis, D.; Lambert, J.-C.; Granville, J.; Zehner, C.

    2014-02-01

    Within the European Space Agency's Climate Change Initiative, total ozone column records from GOME (Global Ozone Monitoring Experiment), SCIAMACHY (SCanning Imaging Absorption SpectroMeter for Atmospheric CartograpHY), and GOME-2 have been reprocessed with GODFIT version 3 (GOME-type Direct FITting). This algorithm is based on the direct fitting of reflectances simulated in the Huggins bands to the observations. We report on new developments in the algorithm from the version implemented in the operational GOME Data Processor v5. The a priori ozone profile database TOMSv8 is now combined with a recently compiled OMI/MLS tropospheric ozone climatology to improve the representativeness of a priori information. The Ring procedure that corrects simulated radiances for the rotational Raman inelastic scattering signature has been improved using a revised semi-empirical expression. Correction factors are also applied to the simulated spectra to account for atmospheric polarization. In addition, the computational performance has been significantly enhanced through the implementation of new radiative transfer tools based on principal component analysis of the optical properties. Furthermore, a soft-calibration scheme for measured reflectances and based on selected Brewer measurements has been developed in order to reduce the impact of level-1 errors. This soft-calibration corrects not only for possible biases in backscattered reflectances, but also for artificial spectral features interfering with the ozone signature. Intersensor comparisons and ground-based validation indicate that these ozone data sets are of unprecedented quality, with stability better than 1% per decade, a precision of 1.7%, and systematic uncertainties less than 3.6% over a wide range of atmospheric states.

  16. Unabated global surface temperature warming: evaluating the evidence

    NASA Astrophysics Data System (ADS)

    Karl, T. R.; Arguez, A.

    2015-12-01

    New insights related to time-dependent bias corrections in global surface temperatures have led to higher rates of warming over the past few decades than previously reported in the IPCC Fifth Assessment Report (2014). Record high global temperatures in the past few years have also contributed to larger trends. The combination of these factors and new analyses of the rate of temperature change show unabated global warming since at least the mid-twentieth century. The new time-dependent bias corrections account for: (1) differences in temperatures measured from ships and drifting buoys; (2) improved corrections to ship-measured temperatures; and (3) the larger rates of warming in polar regions (particularly the Arctic). Since 1951, the period over which IPCC (2014) attributes over half of the observed global warming to human causes, there has been a remarkably robust and sustained warming, punctuated by inter-annual and decadal variability. This finding is confirmed through simple trend analysis and Empirical Mode Decomposition (EMD). Trend analysis, however, especially for decadal trends, is sensitive to selection bias in the choice of beginning and ending dates; EMD has no such selection bias. Additionally, EMD can highlight both short- and long-term processes affecting the global temperature time series, since it addresses both non-linear and non-stationary processes. For the new NOAA global temperature data set, our analyses do not support the notion of a hiatus or slowing of long-term global warming. However, sub-decadal periods of little (or no) warming and of rapid warming can also be found, clearly showing the impact of inter-annual and decadal variability that has previously been attributed to both natural and human-induced non-greenhouse forcings.

  17. High-resolution subgrid models: background, grid generation, and implementation

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids, allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented, including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals, including a few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment; based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with a comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC-like hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.

  18. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different methods of bias correction. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall as compared to using monthly factors. Methods such as local intensity scaling, modified power transformation, and distribution mapping, which adjusted the wet-day frequencies, performed better than the other methods, which did not adjust wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data with NSE values above 0.81 over most parts of India. Hydrological simulations forced using the bias corrected rainfall (distribution mapping and modified power transformation methods that used the proposed daily correction factors) were similar to those simulated with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
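    As a concrete illustration of the sliding-window idea, the sketch below derives one multiplicative correction factor per calendar day from all observation/model pairs falling within a ±15-day window, using simple linear scaling. The window width, the scaling form, and the synthetic rainfall series are assumptions made for illustration; the study applies the window to five different correction methods.

```python
import numpy as np

def daily_scaling_factors(obs, mod, doy, half_window=15):
    """Multiplicative correction factor for each calendar day (1..366),
    computed from all obs/model pairs whose day-of-year falls within
    +/- half_window days (wrapping across the year boundary)."""
    factors = np.ones(366)
    for d in range(1, 367):
        # circular distance in days between d and every sample's day-of-year
        dist = np.minimum(np.abs(doy - d), 365 - np.abs(doy - d))
        sel = dist <= half_window
        if mod[sel].sum() > 0:
            factors[d - 1] = obs[sel].sum() / mod[sel].sum()
    return factors

# Illustrative synthetic daily rainfall for a 20-year calibration period.
rng = np.random.default_rng(1)
n = 20 * 365
doy = np.tile(np.arange(1, 366), 20)
obs = rng.gamma(0.4, 8.0, n) * (1 + 0.8 * np.sin(2 * np.pi * doy / 365))
mod = 0.7 * obs + rng.gamma(0.3, 5.0, n)           # biased "RCM" rainfall

f = daily_scaling_factors(obs, mod, doy)
mod_corrected = mod * f[doy - 1]                   # apply day-specific factor
print("raw bias: %.1f%%, corrected bias: %.1f%%" % (
    100 * (mod.mean() / obs.mean() - 1),
    100 * (mod_corrected.mean() / obs.mean() - 1)))
```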

  19. SU-F-BRE-01: A Rapid Method to Determine An Upper Limit On a Radiation Detector's Correction Factor During the QA of IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamio, Y; Bouchard, H

    2014-06-15

    Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by (1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or (2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple/fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots has 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector have been found to have corrections under 1% over a broad range of field modulations (0 – 0.12 for MF and 0 – 0.5 for UI). Other detectors have been shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.

  20. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
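    The published abstract does not give the nonlinear calibration function, so the sketch below assumes a simple power-law mapping from measured to simulated (expected) counts for a single energy bin, fitted by least squares in log space; the calibration counts are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical calibration data for one energy bin: counts measured by the
# photon-counting detector vs. the simulated (expected) incident counts for
# BR12 calibration phantoms of increasing thickness (placeholder values).
measured = np.array([9.5e4, 5.1e4, 2.4e4, 9.8e3, 3.5e3, 1.1e3])
expected = np.array([1.2e5, 6.0e4, 2.7e4, 1.05e4, 3.6e3, 1.1e3])

# Assumed nonlinear mapping: expected = a * measured**b, fitted in log space.
b, log_a = np.polyfit(np.log(measured), np.log(expected), 1)
a = np.exp(log_a)

def correct_counts(raw):
    """Apply the fitted calibration to raw counts from a new acquisition."""
    return a * np.power(raw, b)

raw_counts = np.array([4.2e4, 7.3e3])
print("fit: a = %.3g, b = %.3g" % (a, b))
print("corrected counts:", correct_counts(raw_counts).round(1))
```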

  1. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    PubMed

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin layer tissues is limited by bias due to the influence of geometry on measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin layer gelatin-agar phantoms, and compared the result with finite element method and Lamb wave model simulation. The result indicated that the Young's modulus measured by SWE decreased continuously when the sample thickness decreased, and this effect was more significant for smaller thickness. We proposed a new empirical formula which can conveniently correct the bias without the need of using complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.

  2. Accurate density functional prediction of molecular electron affinity with the scaling corrected Kohn–Sham frontier orbital energies

    NASA Astrophysics Data System (ADS)

    Zhang, DaDi; Yang, Xiaolong; Zheng, Xiao; Yang, Weitao

    2018-04-01

    Electron affinity (EA) is the energy released when an additional electron is attached to an atom or a molecule. EA is a fundamental thermochemical property, and it is closely pertinent to other important properties such as electronegativity and hardness. However, accurate prediction of EA is difficult with density functional theory methods. The somewhat large error of the calculated EAs originates mainly from the intrinsic delocalisation error associated with the approximate exchange-correlation functional. In this work, we employ a previously developed non-empirical global scaling correction approach, which explicitly imposes the Perdew-Parr-Levy-Balduz condition to the approximate functional, and achieve a substantially improved accuracy for the calculated EAs. In our approach, the EA is given by the scaling corrected Kohn-Sham lowest unoccupied molecular orbital energy of the neutral molecule, without the need to carry out the self-consistent-field calculation for the anion.

  3. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server

    PubMed Central

    Ho, Qirong; Cipar, James; Cui, Henggang; Kim, Jin Kyu; Lee, Seunghak; Gibbons, Phillip B.; Gibson, Garth A.; Ganger, Gregory R.; Xing, Eric P.

    2014-01-01

    We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model’s values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes. PMID:25400488
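    A minimal single-process sketch of the bounded-staleness read rule may help: a worker serves a read from its local cache unless the cached copy is more than `staleness` clocks behind it, in which case it refreshes from the server. Class and method names here are illustrative assumptions, not the system's actual API.

```python
class ParamServer:
    """Central store; `clock` stands in for the slowest worker's clock."""
    def __init__(self):
        self.params = {"w": 0.0}
        self.clock = 0
        self.pulls = 0

    def pull(self):
        self.pulls += 1
        return dict(self.params), self.clock

class Worker:
    def __init__(self, server, staleness):
        self.server = server
        self.staleness = staleness
        self.clock = 0
        self.cache, self.cache_clock = server.pull()

    def read(self, key):
        # Serve from the local cache unless it is more than `staleness`
        # clocks behind this worker; only then refresh from the server.
        if self.clock - self.cache_clock > self.staleness:
            self.cache, self.cache_clock = self.server.pull()
        return self.cache[key]

    def finish_iteration(self):
        self.clock += 1

server = ParamServer()
worker = Worker(server, staleness=3)
for _ in range(6):
    _ = worker.read("w")        # early reads are served from the stale cache
    worker.finish_iteration()
print("server pulls:", server.pulls)   # 1 initial pull plus refreshes once stale
```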

  4. The idiosyncratic nature of confidence

    PubMed Central

    Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador

    2017-01-01

    Confidence is the ‘feeling of knowing’ that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence. PMID:29152591

  5. Partners or Partners in Crime? The Relationship Between Criminal Associates and Criminogenic Thinking.

    PubMed

    Whited, William H; Wagar, Laura; Mandracchia, Jon T; Morgan, Robert D

    2017-04-01

    Meta-analyses examining the risk factors for recidivism have identified the importance of ties with criminal associates as well as thoughts and attitudes conducive to the continuance of criminal behavior (e.g., criminogenic thinking). Criminologists have theorized that a direct relationship exists between the association with criminal peers and the development of criminogenic thinking. The present study empirically explored the relationship between criminal associates and criminogenic thinking in 595 adult male inmates in the United States. It was hypothesized that the proportion of free time spent with, and the number of, criminal associates would be associated with criminogenic thinking, as measured by two self-report instruments, the Measure of Offender Thinking Styles-Revised (MOTS-R) and the Psychological Inventory of Criminal Thinking Styles (PICTS). Hierarchical linear regression analyses demonstrated that the proportion of free time spent with criminal associates statistically predicted criminogenic thinking when controlling for demographic variables. The implications of these findings for correctional practice (including assessment and intervention) as well as future research are discussed.

  6. Lessons from Philippines MPA Management: Social Ecological Interactions, Participation, and MPA Performance

    NASA Astrophysics Data System (ADS)

    Twichell, Julia; Pollnac, Richard; Christie, Patrick

    2018-06-01

    International interest in increasing marine protected area (MPA) coverage reflects broad recognition of the MPA as a key tool for marine ecosystems and fisheries management. Nevertheless, effective management remains a significant challenge. The present study contributes to enriching an understanding of best practices for MPA management through analysis of archived community survey data collected in the Philippines by the Learning Project (LP), a collaboration with United States Coral Triangle Initiative (USCTI), United States Agency for International Development (USAID), and partners. We evaluate stakeholder participation and social ecological interactions among resource users in MPA programs in the Palawan, Occidental Mindoro, and Batangas provinces in the Philippines. Analysis indicates that a complex suite of social ecological factors, including demographics, conservation beliefs, and scientifically correct knowledge influence participation, which in turn is related to perceived MPA performance. Findings indicate positive feedbacks within the system that have potential to strengthen perceptions of MPA success. The results of this evaluation provide empirical reinforcement to current inquiries concerning the role of participation in influencing MPA performance.

  7. Earth Tide Analysis Specifics in Case of Unstable Aquifer Regime

    NASA Astrophysics Data System (ADS)

    Vinogradov, Evgeny; Gorbunova, Ella; Besedina, Alina; Kabychenko, Nikolay

    2017-06-01

    We consider the main factors that affect underground water flow including aquifer supply, collector state, and distant earthquakes seismic waves' passage. In geodynamically stable conditions underground inflow change can significantly distort hydrogeological response to Earth tides, which leads to the incorrect estimation of phase shift between tidal harmonics of ground displacement and water level variations in a wellbore. Besides an original approach to phase shift estimation that allows us to get one value per day for the semidiurnal M2 wave, we offer the empirical method of excluding periods of time that are strongly affected by high inflow. In spite of rather strong ground motion during earthquake waves' passage, we did not observe corresponding phase shift change against the background on significant recurrent variations due to fluctuating inflow influence. Though inflow variations do not look like the only important parameter that must be taken into consideration while performing phase shift analysis, permeability estimation is not adequate without correction based on background alternations of aquifer parameters due to natural and anthropogenic reasons.
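    One simple way to obtain one phase value per day, consistent with the approach described here (though not necessarily the authors' exact estimator), is to fit a sinusoid at the M2 frequency to each daily window of ground displacement and water level and difference the fitted phases. The synthetic hourly records and the 20-degree lag below are placeholders.

```python
import numpy as np

M2_PERIOD_H = 12.4206012          # semidiurnal lunar constituent, hours
OMEGA = 2 * np.pi / M2_PERIOD_H   # angular frequency, rad per hour

def m2_phase(signal, t_hours):
    """Phase (radians) of the M2 component from a least-squares sinusoid fit."""
    A = np.column_stack([np.cos(OMEGA * t_hours), np.sin(OMEGA * t_hours),
                         np.ones_like(t_hours)])
    c, s, _ = np.linalg.lstsq(A, signal, rcond=None)[0]
    return np.arctan2(s, c)

# Synthetic hourly records for one day: ground displacement and well water level,
# with the water level lagging by 20 degrees (placeholder values).
t = np.arange(0, 24.0, 1.0)
rng = np.random.default_rng(2)
ground = np.cos(OMEGA * t) + 0.05 * rng.standard_normal(t.size)
water = 0.8 * np.cos(OMEGA * t - np.deg2rad(20.0)) + 0.05 * rng.standard_normal(t.size)

shift_deg = np.rad2deg(m2_phase(water, t) - m2_phase(ground, t))
print("estimated M2 phase shift: %.1f deg" % shift_deg)
```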

  8. The relevance of temporal iconicity with instruction manuals for elderly users.

    PubMed

    Mertens, Alexander; Nick, Claudia; Krüger, Stefan; Schlick, Christopher M

    2012-01-01

    Gerontolinguistics is gaining importance as the proportion of elderly users grows with demographic change. Since the acceptance and ease of use of supportive systems for the elderly, such as “E-Nursing-Assistants”, depend strongly on age-suitable design of readable instructions, an age-appropriate linguistic concept is of high value for usability. There has been little research on the relevance of foreign words, signal words, textual arrangement, optical accentuation of key terms, and temporal iconicity for older users. Thus, the efficient design of age-suitable instruction manuals in a medical context remains an open task. The objective of this research was to evaluate the relevance of the previously mentioned factors in the context of written instructions. To this end, an empirical study was designed and administered to 45 participants. After a pretest questionnaire, the subjects were given 4x3 instructions and asked to execute them as correctly and quickly as possible. Furthermore, the instructions were rated for comprehensibility in a retrospective questionnaire.

  9. Issues with data and analyses: Errors, underlying themes, and potential solutions

    PubMed Central

    Allison, David B.

    2018-01-01

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079

  10. Earth Tide Analysis Specifics in Case of Unstable Aquifer Regime

    NASA Astrophysics Data System (ADS)

    Vinogradov, Evgeny; Gorbunova, Ella; Besedina, Alina; Kabychenko, Nikolay

    2018-05-01

    We consider the main factors that affect underground water flow including aquifer supply, collector state, and distant earthquakes seismic waves' passage. In geodynamically stable conditions underground inflow change can significantly distort hydrogeological response to Earth tides, which leads to the incorrect estimation of phase shift between tidal harmonics of ground displacement and water level variations in a wellbore. Besides an original approach to phase shift estimation that allows us to get one value per day for the semidiurnal M2 wave, we offer the empirical method of excluding periods of time that are strongly affected by high inflow. In spite of rather strong ground motion during earthquake waves' passage, we did not observe corresponding phase shift change against the background on significant recurrent variations due to fluctuating inflow influence. Though inflow variations do not look like the only important parameter that must be taken into consideration while performing phase shift analysis, permeability estimation is not adequate without correction based on background alternations of aquifer parameters due to natural and anthropogenic reasons.

  11. The role of serendipity in drug discovery

    PubMed Central

    Ban, Thomas A.

    2006-01-01

    Serendipity is one of the many factors that may contribute to drug discovery. It has played a role in the discovery of prototype psychotropic drugs that led to modern pharmacological treatment in psychiatry. It has also played a role in the discovery of several drugs that have had an impact on the development of psychiatry, “Serendipity” in drug discovery implies the finding of one thing while looking for something else. This was the case in six of the twelve serendipitous discoveries reviewed in this paper, ie, aniline purple, penicillin, lysergic acid diethylamide, meprobamate, chlorpromazine, and imipramine, in the case of three drugs, ie, potassium bromide, chloral hydrate, and lithium, the discovery was serendipitous because an utterly false rationale led to correct empirical results; and in case of two others, ie, iproniazid and sildenafil, because valuable indications were found for these drugs which were not initially those sought. The discovery of one of the twelve drugs, chlordiazepoxide, was sheer luck. PMID:17117615

  12. A Method for Automated Detection of Usability Problems from Client User Interface Events

    PubMed Central

    Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.

    2005-01-01

    Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and manually computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121

  13. An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning

    PubMed Central

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-01-01

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197
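    The abstract does not spell out the decision rule or the double-peak parameterization, so the sketch below stands in scipy's kurtosis test for the normality check and a two-component Gaussian mixture for the double-peak model; the RSS samples are synthetic placeholders.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

# Hypothetical RSS samples (dBm) at one reference point from one access point;
# a bimodal pattern like this is what the double-peak model targets.
rng = np.random.default_rng(3)
rss = np.concatenate([rng.normal(-62, 1.5, 400), rng.normal(-70, 2.0, 300)])

# Kurtosis test: a clearly non-normal (e.g. double-peaked) sample is flagged here.
stat, pvalue = stats.kurtosistest(rss)
if pvalue < 0.05:
    # Two-component ("double-peak") Gaussian model.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(rss.reshape(-1, 1))
    print("double-peak model: means", gmm.means_.ravel().round(1),
          "weights", gmm.weights_.round(2))
else:
    # Single Gaussian is adequate.
    print("single Gaussian: mean %.1f dBm, std %.2f dB" % (rss.mean(), rss.std()))
```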

  14. A one- and two-layer model for estimating evapotranspiration with remotely sensed surface temperature and ground-based meteorological data over partial canopy cover

    NASA Technical Reports Server (NTRS)

    Kustas, William P.; Choudhury, Bhaskar J.; Kunkel, Kenneth E.

    1989-01-01

    Surface-air temperature differences are commonly used in a bulk resistance equation for estimating sensible heat flux (H), which is inserted in the one-dimensional energy balance equation to solve for the latent heat flux (LE) as a residual. Serious discrepancies between estimated and measured LE have been observed for partial-canopy-cover conditions, which are mainly attributed to inappropriate estimates of H. To improve the estimates of H over sparse canopies, one- and two-layer resistance models that account for some of the factors causing poor agreement are developed. The utility of the two models is tested with remotely sensed and micrometeorological data for a furrowed cotton field with 20 percent cover and a dry soil surface. It is found that the one-layer model performs better than the two-layer model when a theoretical bluff-body correction for heat transfer is used instead of an empirical adjustment; otherwise, the two-layer model is better.
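    The bulk-resistance step described above can be written compactly: H is obtained from the surface-air temperature difference divided by an aerodynamic resistance, and LE follows as the residual of the one-dimensional energy balance. The resistance, temperature, and flux values below are illustrative placeholders, and the constants are approximate near-surface values.

```python
RHO_AIR = 1.18      # air density, kg m-3 (near-surface, warm conditions)
CP_AIR = 1005.0     # specific heat of air at constant pressure, J kg-1 K-1

def sensible_heat(t_surface_c, t_air_c, r_ah):
    """Bulk-resistance estimate of sensible heat flux H (W m-2);
    r_ah is the aerodynamic resistance to heat transfer (s m-1)."""
    return RHO_AIR * CP_AIR * (t_surface_c - t_air_c) / r_ah

def latent_heat_residual(rn, g, h):
    """Latent heat flux LE (W m-2) as the residual of the energy balance Rn - G - H."""
    return rn - g - h

# Illustrative mid-day values for a sparse canopy (placeholders).
H = sensible_heat(t_surface_c=42.0, t_air_c=33.0, r_ah=35.0)
LE = latent_heat_residual(rn=550.0, g=90.0, h=H)
print(f"H  = {H:6.1f} W m-2")
print(f"LE = {LE:6.1f} W m-2")
```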

  15. An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning.

    PubMed

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-08-21

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase.

  16. Semi empirical formula for exposure buildup factors

    NASA Astrophysics Data System (ADS)

    Seenappa, L.; Manjunatha, H. C.; Sridhar, K. N.; Hanumantharayappa, Chikka

    2017-10-01

    The photon buildup factor is an important nuclear datum that must be considered in nuclear safety applications such as radiation shielding and dosimetry. The buildup factor is a coefficient that represents the contribution of photons that have collided with the target medium. The present work formulates a semi-empirical formula for exposure buildup factors (EBF) in the energy region 0.015-15 MeV, for the atomic number range 1 ≤ Z ≤ 92, and for penetration depths up to 40 mean free paths (mfp). The EBFs produced by the present formula are compared with data available in the literature, and good agreement is found. This formula is the first of its kind to calculate EBFs without using geometric-progression fitting parameters. It may also be used to calculate EBFs for compounds, mixtures, and biological samples, and it produces EBFs for elements and mixtures quickly. This semi-empirical formula is therefore of value in EBF calculations, which in turn support radiation protection and dosimetry.

  17. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    PubMed

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
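    The implication itself is easy to reproduce with simulated data: under a one-factor model, the partial correlation between two indicators given the remaining indicators is weaker than, and of the same sign as, their zero-order correlation. The sketch below checks this numerically (it is not the paper's bootstrap test); the loadings and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100000, 5
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
factor = rng.standard_normal(n)
# One-factor data: each item = loading * factor + unique noise.
X = factor[:, None] * loadings + rng.standard_normal((n, p)) * np.sqrt(1 - loadings**2)

def partial_corr(x, y, Z):
    """Correlation of x and y after regressing out the columns of Z."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

r_zero = np.corrcoef(X[:, 0], X[:, 1])[0, 1]
r_partial = partial_corr(X[:, 0], X[:, 1], X[:, 2:])
print("zero-order r12          : %.3f" % r_zero)
print("partial r12 | items 3-5 : %.3f" % r_partial)   # weaker, same sign
```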

  18. Factors Influencing the Design, Establishment, Administration, and Governance of Correctional Education for Females

    ERIC Educational Resources Information Center

    Ellis, Johnica; McFadden, Cheryl; Colaric, Susan

    2008-01-01

    This article summarizes the results of a study conducted to investigate factors influencing the organizational design, establishment, administration, and governance of correctional education for females. The research involved interviews with correctional and community college administrators and practitioners representing North Carolina female…

  19. Improving satellite retrievals of NO2 in biomass burning regions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Martin, R. V.; Lamsal, L. N.; Mao, J.; Cohen, R. C.; Anderson, B. E.

    2010-12-01

    The quality of space-based nitrogen dioxide (NO2) retrievals from solar backscatter depends on a priori knowledge of the NO2 profile shape as well as the effects of atmospheric scattering. These effects are characterized by the air mass factor (AMF) calculation. Calculation of the AMF combines a radiative transfer calculation together with a priori information about aerosols and about NO2 profiles (shape factors), which are usually taken from a chemical transport model. In this work we assess the impact of biomass burning emissions on the AMF using the LIDORT radiative transfer model and a GEOS-Chem simulation based on a daily fire emissions inventory (FLAMBE). We evaluate the GEOS-Chem aerosol optical properties and NO2 shape factors using in situ data from the ARCTAS summer 2008 (North America) and DABEX winter 2006 (western Africa) experiments. Sensitivity studies are conducted to assess the impact of biomass burning on the aerosols and the NO2 shape factors used in the AMF calculation. The mean aerosol correction over boreal fires is negligible (+3%), in contrast with a large reduction (-18%) over African savanna fires. The change in sign and magnitude over boreal forest and savanna fires appears to be driven by the shielding effects that arise from the greater biomass burning aerosol optical thickness (AOT) above the African biomass burning NO2. In agreement with previous work, the single scattering albedo (SSA) also affects the aerosol correction. We further investigated the effect of clouds on the aerosol correction. For a fixed AOT, the aerosol correction can increase from 20% to 50% when cloud fraction increases from 0 to 30%. Over both boreal and savanna fires, the greatest impact on the AMF is from the fire-induced change in the NO2 profile (shape factor correction), that decreases the AMF by 38% over the boreal fires and by 62% of the savanna fires. Combining the aerosol and shape factor corrections together results in small differences compared to the shape factor correction alone (without the aerosol correction), indicating that a shape factor-only correction is a good approximation of the total AMF correction associated with fire emissions. We use this result to define a measurement-based correction of the AMF based on the relationship between the slant column variability and the variability of the shape factor in the lower troposphere. This method may be generalized to other types of emission sources.
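    In the standard formulation assumed here, the tropospheric AMF is the scattering-weight-weighted sum of the normalized NO2 profile (shape factor) over model layers, so shifting NO2 into the boundary layer, where scattering weights are low, lowers the AMF. The layer weights and partial columns below are illustrative placeholders, not GEOS-Chem output.

```python
import numpy as np

def air_mass_factor(scattering_weights, partial_columns):
    """AMF as the scattering-weight-weighted mean of the normalized NO2
    profile: AMF = sum_l w_l * S_l, with shape factor S_l = c_l / sum(c)."""
    shape = partial_columns / partial_columns.sum()
    return np.sum(scattering_weights * shape)

# Illustrative 6-layer troposphere, surface to upper troposphere (placeholders).
weights = np.array([0.4, 0.6, 0.8, 1.0, 1.1, 1.2])     # scattering weights
background = np.array([2.0, 1.5, 1.0, 0.6, 0.3, 0.1])  # NO2 partial columns
fire = np.array([6.0, 4.0, 1.5, 0.6, 0.3, 0.1])        # fire-enhanced boundary layer

amf_bg, amf_fire = air_mass_factor(weights, background), air_mass_factor(weights, fire)
print("AMF background: %.2f" % amf_bg)
print("AMF with fire-modified shape factor: %.2f (%.0f%% change)"
      % (amf_fire, 100 * (amf_fire / amf_bg - 1)))
```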

  20. Comments on baseline correction of digital strong-motion data: Examples from the 1999 Hector Mine, California, earthquake

    USGS Publications Warehouse

    Boore, D.M.; Stephens, C.D.; Joyner, W.B.

    2002-01-01

    Residual displacements for large earthquakes can sometimes be determined from recordings on modern digital instruments, but baseline offsets of unknown origin make it difficult in many cases to do so. To recover the residual displacement, we suggest tailoring a correction scheme by studying the character of the velocity obtained by integration of zeroth-order-corrected acceleration and then seeing if the residual displacements are stable when the various parameters in the particular correction scheme are varied. For many seismological and engineering purposes, however, the residual displacement are of lesser importance than ground motions at periods less than about 20 sec. These ground motions are often recoverable with simple baseline correction and low-cut filtering. In this largely empirical study, we illustrate the consequences of various correction schemes, drawing primarily from digital recordings of the 1999 Hector Mine, California, earthquake. We show that with simple processing the displacement waveforms for this event are very similar for stations separated by as much as 20 km. We also show that a strong pulse on the transverse component was radiated from the Hector Mine earthquake and propagated with little distortion to distances exceeding 170 km; this pulse leads to large response spectral amplitudes around 10 sec.
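    A minimal sketch of one such simple correction scheme, in the spirit described above though not the authors' exact procedure: remove the pre-event mean (zeroth-order correction), integrate to velocity, fit a line to the late portion of the velocity, and remove the implied constant baseline offset from the acceleration from the time the fitted line crosses zero. The synthetic record and parameters are placeholders, not Hector Mine data.

```python
import numpy as np

def baseline_correct(acc, dt, pre_event_end, fit_start):
    """Zeroth-order correction, then removal of a constant baseline offset
    inferred from a straight-line fit to the late part of the velocity."""
    t = np.arange(acc.size) * dt
    acc0 = acc - acc[t < pre_event_end].mean()         # zeroth-order correction
    vel0 = np.cumsum(acc0) * dt
    late = t >= fit_start
    slope, intercept = np.polyfit(t[late], vel0[late], 1)
    t0 = -intercept / slope                            # onset of the baseline offset
    acc_corr = acc0 - np.where(t >= t0, slope, 0.0)
    vel = np.cumsum(acc_corr) * dt
    disp = np.cumsum(vel) * dt
    return acc_corr, vel, disp

# Synthetic record (placeholder values): pre-event noise, a 2-s pulse at 10 s,
# and a small spurious acceleration offset beginning at 11 s.
dt, n = 0.01, 6000
t = np.arange(n) * dt
rng = np.random.default_rng(5)
acc = 0.002 * rng.standard_normal(n)
acc += np.where((t > 10) & (t < 12), 0.5 * np.sin(2 * np.pi * (t - 10)), 0.0)
acc += np.where(t > 11, 0.003, 0.0)

_, vel, disp = baseline_correct(acc, dt, pre_event_end=5.0, fit_start=20.0)
print("final velocity: %.4f m/s   residual displacement: %.3f m" % (vel[-1], disp[-1]))
```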

  1. Utilizing an Energy Management System with Distributed Resources to Manage Critical Loads and Reduce Energy Costs

    DTIC Science & Technology

    2014-09-01

    These functions include peak shaving, conducting power factor correction, matching critical load to the most efficient distributed resource, and islanding a system during... photovoltaic arrays during islanding, and power factor correction, the implementation of the ESS by itself is likely to prove cost prohibitive. The DOD...

  2. Deterministic figure correction of piezoelectrically adjustable slumped glass optics

    NASA Astrophysics Data System (ADS)

    DeRoo, Casey T.; Allured, Ryan; Cotroneo, Vincenzo; Hertz, Edward; Marquez, Vanessa; Reid, Paul B.; Schwartz, Eric D.; Vikhlinin, Alexey A.; Trolier-McKinstry, Susan; Walker, Julian; Jackson, Thomas N.; Liu, Tianning; Tendulkar, Mohit

    2018-01-01

    Thin x-ray optics with high angular resolution (≤ 0.5 arcsec) over a wide field of view enable the study of a number of astrophysically important topics and feature prominently in Lynx, a next-generation x-ray observatory concept currently under NASA study. In an effort to address this technology need, piezoelectrically adjustable, thin mirror segments capable of figure correction after mounting and on-orbit are under development. We report on the fabrication and characterization of an adjustable cylindrical slumped glass optic. This optic has realized 100% piezoelectric cell yield and employs lithographically patterned traces and anisotropic conductive film connections to address the piezoelectric cells. In addition, the measured responses of the piezoelectric cells are found to be in good agreement with finite-element analysis models. While the optic as manufactured is outside the range of absolute figure correction, simulated corrections using the measured responses of the piezoelectric cells are found to improve 5 to 10 arcsec mirrors to 1 to 3 arcsec [half-power diameter (HPD), single reflection at 1 keV]. Moreover, a measured relative figure change which would correct the figure of a representative slumped glass piece from 6.7 to 1.2 arcsec HPD is empirically demonstrated. We employ finite-element analysis-modeled influence functions to understand the current frequency limitations of the correction algorithm employed and identify a path toward achieving subarcsecond corrections.

  3. The spectral basis of optimal error field correction on DIII-D

    DOE PAGES

    Paz-Soldan, Carlos A.; Buttery, Richard J.; Garofalo, Andrea M.; ...

    2014-04-28

    Here, experimental optimum error field correction (EFC) currents found in a wide breadth of dedicated experiments on DIII-D are shown to be consistent with the currents required to null the poloidal harmonics of the vacuum field which drive the kink mode near the plasma edge. This allows the identification of empirical metrics which predict optimal EFC currents with accuracy comparable to that of first-principles modeling which includes the ideal plasma response. While further metric refinements are desirable, this work suggests optimal EFC currents can be effectively fed forward based purely on knowledge of the vacuum error field and basic equilibrium properties which are routinely calculated in real time.

  4. Evaluation of the laboratory mouse model for screening topical mosquito repellents.

    PubMed

    Rutledge, L C; Gupta, R K; Wirtz, R A; Buescher, M D

    1994-12-01

    Eight commercial repellents were tested against Aedes aegypti 0 and 4 h after application in serial dilution to volunteers and laboratory mice. Results were analyzed by multiple regression of percentage of biting (probit scale) on dose (logarithmic scale) and time. Empirical correction terms for conversion of values obtained in tests on mice to values expected in tests on human volunteers were calculated from data obtained on 4 repellents and evaluated with data obtained on 4 others. Corrected values from tests on mice did not differ significantly from values obtained in tests on volunteers. Test materials used in the study were dimethyl phthalate, butopyronoxyl, butoxy polypropylene glycol, MGK Repellent 11, deet, ethyl hexanediol, Citronyl, and dibutyl phthalate.
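    The analysis described, probit regression of percentage biting on log dose and time, can be sketched with a generalized linear model; the doses, counts, and the specific statsmodels calls below are illustrative assumptions, not the study's data or code.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical screening data (placeholders): mosquitoes biting out of `exposed`,
# for three doses (mg/cm^2) tested at 0 h and 4 h after application.
dose = np.array([0.02, 0.08, 0.32, 0.02, 0.08, 0.32])
time_h = np.array([0, 0, 0, 4, 4, 4])
bitten = np.array([18, 9, 2, 30, 17, 6])
exposed = np.full(6, 50)

# Probit regression of biting on log10(dose) and time.
X = sm.add_constant(np.column_stack([np.log10(dose), time_h]))
model = sm.GLM(np.column_stack([bitten, exposed - bitten]), X,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
fit = model.fit()
print(fit.params)          # intercept, slope vs log-dose, slope vs time
pred = fit.predict(np.array([[1.0, np.log10(0.08), 4.0]]))
print("predicted %% biting at 0.08 mg/cm^2, 4 h: %.1f%%" % (100 * pred[0]))
```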

  5. Clients' Retrospective Accounts of Corrective Experiences in Psychotherapy: An International, Multisite Collaboration.

    PubMed

    Constantino, Michael J; Angus, Lynne

    2017-02-01

    This article introduces a series of 4 original research reports that used varied qualitative methods for understanding an internationally diverse sample of clients' own accounts of corrective experiences (CEs), as they looked back on their completed psychotherapy. The basis for all studies, which were conducted across 4 different countries, was the Patients' Perceptions of Corrective Experiences in Individual Therapy (PPCEIT) semistructured interview protocol (Constantino, Angus, Friedlander, Messer, & Moertl, 2011). The PPCEIT interview assesses clients' retrospective accounts of aspects of self, other, and/or relationships that may have been corrected, and what they perceived as corrective experiences that facilitated such transformations. It also asks for specific, detailed examples of these accounts and experiences. Across all studies, the PPCEIT interview generated rich clinical material and resulting empirically generated themes that may inform clinical practice. After briefly defining the CE construct and highlighting a lack of research on clients' own accounts of such experiences, we describe the development of the PPCEIT interview (and provide the full interview manual and question protocol as appendices). We then summarize the foci of the culturally diverse reports in this series. © 2016 Wiley Periodicals, Inc.

  6. The effect of individual differences in working memory in older adults on performance with different degrees of automated technology.

    PubMed

    Pak, Richard; McLaughlin, Anne Collins; Leidheiser, William; Rovira, Ericka

    2017-04-01

    A leading hypothesis to explain older adults' overdependence on automation is age-related declines in working memory. However, it has not been empirically examined. The purpose of the current experiment was to examine how working memory affected performance with different degrees of automation in older adults. In contrast to the well-supported idea that higher degrees of automation, when the automation is correct, benefits performance but higher degrees of automation, when the automation fails, increasingly harms performance, older adults benefited from higher degrees of automation when the automation was correct but were not differentially harmed by automation failures. Surprisingly, working memory did not interact with degree of automation but did interact with automation correctness or failure. When automation was correct, older adults with higher working memory ability had better performance than those with lower abilities. But when automation was incorrect, all older adults, regardless of working memory ability, performed poorly. Practitioner Summary: The design of automation intended for older adults should focus on ways of making the correctness of the automation apparent to the older user and suggest ways of helping them recover when it is malfunctioning.

  7. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons), and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  8. Toward an information-motivation-behavioral skills model of microbicide adherence in clinical trials.

    PubMed

    Ferrer, Rebecca A; Morrow, Kathleen M; Fisher, William A; Fisher, Jeffrey D

    2010-08-01

    Unless optimal adherence in microbicide clinical trials is ensured, an efficacious microbicide may be rejected after trial completion, or development of a promising microbicide may be stopped, because low adherence rates create the illusion of poor efficacy. We provide a framework with which to conceptualize and improve microbicide adherence in clinical trials, supported by a critical review of the empirical literature. The information-motivation-behavioral skills (IMB) model of microbicide adherence conceptualizes microbicide adherence in clinical trials and highlights factors that can be addressed in behavioral interventions to increase adherence in such trials. This model asserts that microbicide adherence-related information, motivation, and behavioral skills are fundamental determinants of adherent microbicide utilization. Specifically, information consists of objective facts about microbicide use (e.g., administration and dosage) as well as heuristics that facilitate use (e.g., microbicides must be used with all partners). Motivation to adhere consists of attitudes toward personal use of microbicides (e.g., evaluating the consequences of using microbicides as good or pleasant) as well as social norms that support their use (e.g., beliefs that a sexual partner approves use of microbicides). Behavioral skills consist of objective skills necessary for microbicide adherence (e.g., the ability to apply the microbicide correctly and consistently). Empirical evidence concerning microbicide acceptability and adherence to spermicides, medication, and condom use regimens support the utility of this model for understanding and promoting microbicide adherence in clinical trials.

  9. Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.

    PubMed

    Korth, Martin

    2013-10-14

    The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pair-wise interatomic C6R(-6) terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and density functional tight binding methods (SCC-DFTB). Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually use some type of cutoff function to make the mixing of the (long-range) dispersion term with the already existing (short-range) dispersion and exchange-repulsion effects from the electronic structure theory methods possible. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for really large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic.
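    For concreteness, the sketch below evaluates a pairwise C6·R^-6 dispersion term with a Fermi-type damping (cutoff) function in the style of common DFT-D schemes; the C6 coefficients, radii, and scaling parameters are placeholders, not values from any particular parameterization.

```python
import numpy as np

def dispersion_energy(coords, c6, r0, s6=1.2, d=20.0):
    """Pairwise -C6/R^6 dispersion energy with a Fermi-type damping function:
        E = -s6 * sum_{i<j} C6_ij / R_ij^6 * 1 / (1 + exp(-d (R_ij/R0_ij - 1)))
    C6_ij is the geometric mean of atomic C6 coefficients and R0_ij the sum of
    van der Waals radii (all values here are illustrative placeholders)."""
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])
            r0ij = r0[i] + r0[j]
            damp = 1.0 / (1.0 + np.exp(-d * (r / r0ij - 1.0)))
            e -= s6 * c6ij / r**6 * damp
    return e

# Two stacked "carbon-like" atoms 3.5 Angstrom apart (placeholder parameters).
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.5]])
c6 = np.array([1.75, 1.75])     # illustrative C6 coefficients, consistent units
r0 = np.array([1.45, 1.45])     # illustrative vdW radii, Angstrom
print("pairwise dispersion energy: %.4e (units of C6/R^6)"
      % dispersion_energy(coords, c6, r0))
```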

  10. Comparison of the GHSSmooth and the Rayleigh-Rice surface scatter theories

    NASA Astrophysics Data System (ADS)

    Harvey, James E.; Pfisterer, Richard N.

    2016-09-01

    The scalar-based GHSSmooth surface scatter theory results in an expression for the BRDF in terms of the surface PSD that is very similar to that provided by the rigorous Rayleigh-Rice (RR) vector perturbation theory. However it contains correction factors for two extreme situations not shared by the RR theory: (i) large incident or scattered angles that result in some portion of the scattered radiance distribution falling outside of the unit circle in direction cosine space, and (ii) the situation where the relevant rms surface roughness, σrel, is less than the total intrinsic rms roughness of the scattering surface. Also, the RR obliquity factor has been discovered to be an approximation of the more general GHSSmooth obliquity factor due to a little-known (or long-forgotten) implicit assumption in the RR theory that the surface autocovariance length is longer than the wavelength of the scattered radiation. This assumption allowed retaining only quadratic terms and lower in the series expansion for the cosine function, and results in reducing the validity of RR predictions for scattering angles greater than 60°. This inaccurate obliquity factor in the RR theory is also the cause of a complementary unrealistic "hook" at the high spatial frequency end of the predicted surface PSD when performing the inverse scattering problem. Furthermore, if we empirically substitute the polarization reflectance, Q, from the RR expression for the scalar reflectance, R, in the GHSSmooth expression, it inherits all of the polarization capabilities of the rigorous RR vector perturbation theory.

  11. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during the follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behaviors of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during the follow-up and the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during the follow-up. Thoracic curves in group B deteriorated after spontaneous correction with a positive correction loss. Univariate and multivariate analyses were performed to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately with a spontaneous correction of 48.5%. At final follow-up it was 14° with a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in 23 cases in group A, while in group B the thoracic curve progressed in 22 cases. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and better immediate spontaneous correction rate of the thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during the follow-up, and close attention must be paid to these patients in case of decompensation. Level of Evidence: 4 PMID:27831989

  12. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in flat plate subjected to a surface temperature discontinuity with variable properties taken into account. A two-equation k - omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratio, and wall temperature discontinuity. Comparisons are made for correction factors with constant properties and variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant

  13. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses particle size at formation as the reference state rather than dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity which are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes are weakly dependent on ambient temperature, these factors can be regarded as temperature independent. The correction factor w.r.t. the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor w.r.t. the size at formation has the advantage of being independent of dry size and relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.

  14. Improved Formula for the Stress Intensity Factor of Semi-Elliptical Surface Cracks in Welded Joints under Bending Stress

    PubMed Central

    Peng, Yang; Wu, Chao; Zheng, Yifu; Dong, Jun

    2017-01-01

    Welded joints are prone to fatigue cracking with the existence of welding defects and bending stress. Fracture mechanics is a useful approach in which the fatigue life of the welded joint can be predicted. The key challenge of such predictions using fracture mechanics is how to accurately calculate the stress intensity factor (SIF). An empirical formula for calculating the SIF of welded joints under bending stress was developed by Baik, Yamada and Ishikawa based on the hybrid method. However, when calculating the SIF of a semi-elliptical crack, this study found that the accuracy of the Baik-Yamada formula was poor when comparing the benchmark results, experimental data and numerical results. The reasons for the reduced accuracy of the Baik-Yamada formula were identified and discussed in this paper. Furthermore, a new correction factor was developed and added to the Baik-Yamada formula by using theoretical analysis and numerical regression. Finally, the predictions using the modified Baik-Yamada formula were compared with the benchmark results, experimental data and numerical results. It was found that the accuracy of the modified Baik-Yamada formula was greatly improved. Therefore, it is proposed that this modified formula is used to conveniently and accurately calculate the SIF of semi-elliptical cracks in welded joints under bending stress. PMID:28772527

  15. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper looks at the development and implementation of the additional secondary phase factor (ASF) real-time correction system for the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system and the propagation characteristics of the AIS signal that transmits at sea and the ASF real-time correction algorithm of the AIS signal were analyzed and verified. Accounting for the different hardware of the receivers in the land-based positioning system and the variation of the actual environmental factors, the ASF correction system corrects original measurements of positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  16. Brief Report: Accuracy and Response Time for the Recognition of Facial Emotions in a Large Sample of Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M.; Begeer, Sander

    2014-01-01

    The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion…

  17. Attention focussing and anomaly detection in real-time systems monitoring

    NASA Technical Reports Server (NTRS)

    Doyle, Richard J.; Chien, Steve A.; Fayyad, Usama M.; Porta, Harry J.

    1993-01-01

    In real-time monitoring situations, more information is not necessarily better. When faced with complex emergency situations, operators can experience information overload that compromises their ability to react quickly and correctly. We describe an approach to focusing operator attention in real-time systems monitoring based on a set of empirical and model-based measures for determining the relative importance of sensor data.

  18. Breakthroughs in Composition Instruction Methods without Evidence of Tangible Improvements in Students' Composition: When Will Change Come?

    ERIC Educational Resources Information Center

    Walsh, S. M.

    Throughout the early years of the twentieth century, literacy education was based on the solid understanding of grammar. Yet as early as 1923, empirical data indicated that the link between knowledge of grammar and correct use of English was tenuous at best. Despite formidable evidence, some educators still advocate the use of grammar as a…

  19. Adventures in Uncertainty: An Empirical Investigation of the Use of a Taylor's Series Approximation for the Assessment of Sampling Errors in Educational Research.

    ERIC Educational Resources Information Center

    Wilson, Mark

    This study investigates the accuracy of the Woodruff-Causey technique for estimating sampling errors for complex statistics. The technique may be applied when data are collected by using multistage clustered samples. The technique was chosen for study because of its relevance to the correct use of multivariate analyses in educational survey…

  20. An empirical relationship between mesoscale carbon monoxide concentrations and vehicular emission rates : final report.

    DOT National Transportation Integrated Search

    1979-01-01

    Presented is a relatively simple empirical equation that reasonably approximates the relationship between mesoscale carbon monoxide (CO) concentrations, areal vehicular CO emission rates, and the meteorological factors of wind speed and mixing height...

  1. Photometric Modeling of Simulated Surface-Resolved Bennu Images

    NASA Astrophysics Data System (ADS)

    Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.

    2017-12-01

    The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map, which predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratio maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best-fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the completeness of the data set for evaluating the phase and disk functions of the surface. Application of this software to simulated mission data has revealed limitations in the initial mission design, which has fed back into the planning process. The entire photometric pipeline further serves as an exercise of planned activities for proximity operations.
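
    A small sketch of the kind of fit-and-correct workflow described above. The Lommel-Seeliger disk function with an exponential phase term is used here purely as a stand-in for "conventional empirical photometric models"; the actual models, software, and parameter values used by the mission are not specified here, and the angle/reflectance data are synthetic.

    ```python
    # Hedged sketch: fit a stand-in empirical photometric model to resolved-image
    # data, then normalize each pixel to a standard viewing geometry.
    import numpy as np
    from scipy.optimize import curve_fit

    def lommel_seeliger(angles, albedo, beta):
        """I/F = albedo * exp(-beta * phase) * mu0 / (mu0 + mu)."""
        incidence, emission, phase = angles
        mu0, mu = np.cos(np.radians(incidence)), np.cos(np.radians(emission))
        return albedo * np.exp(-beta * phase) * mu0 / (mu0 + mu)

    # Synthetic "observations": photometric angles (deg) and measured I/F per pixel
    i, e, alpha = np.random.uniform(0, 60, (3, 500))
    iof = lommel_seeliger((i, e, alpha), 0.045, 0.02) * np.random.normal(1, 0.02, 500)

    popt, _ = curve_fit(lommel_seeliger, (i, e, alpha), iof, p0=[0.05, 0.01])

    # Photometric correction: scale every pixel to a reference geometry (i=30, e=0, alpha=30)
    reference = lommel_seeliger((30.0, 0.0, 30.0), *popt)
    iof_corrected = iof / lommel_seeliger((i, e, alpha), *popt) * reference
    ```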

  2. Can quantile mapping improve precipitation extremes from regional climate models?

    NASA Astrophysics Data System (ADS)

    Tani, Satyanarayana; Gobiet, Andreas

    2015-04-01

    The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. This split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
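
    For reference, a bare-bones empirical (nonparametric) quantile mapping of the kind the study extends. The function names, quantile grid, and synthetic gamma-distributed data are illustrative choices, not the QMβ variants developed in the abstract; the clamped extrapolation beyond the calibration range illustrates the instability at high quantiles that the new methods target.

    ```python
    # Sketch of standard empirical quantile mapping for precipitation bias correction.
    import numpy as np

    def empirical_qm(obs_cal, mod_cal, mod_fut, n_quantiles=100):
        """Map model values onto the observed distribution via matched quantiles."""
        q = np.linspace(0.01, 0.99, n_quantiles)
        mod_q = np.quantile(mod_cal, q)
        obs_q = np.quantile(obs_cal, q)
        # np.interp clamps outside the calibration range, so values beyond the
        # largest calibrated quantile all receive the last correction value.
        return np.interp(mod_fut, mod_q, obs_q)

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 5.0, 3000)   # "observed" daily precipitation, calibration period
    mod = rng.gamma(2.0, 7.0, 3000)   # biased model output, calibration period
    fut = rng.gamma(2.0, 8.0, 3000)   # model output to be corrected
    corrected = empirical_qm(obs, mod, fut)
    ```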

  3. Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation.

    PubMed

    Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R

    2016-08-15

    Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and to thus formulate recommendations for future ALE studies. We could show as a first consequence that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments into an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Two-step forecast of geomagnetic storm using coronal mass ejection and solar wind condition

    PubMed Central

    Kim, R-S; Moon, Y-J; Gopalswamy, N; Park, Y-D; Kim, Y-H

    2014-01-01

    To forecast geomagnetic storms, we had examined initially observed parameters of coronal mass ejections (CMEs) and introduced an empirical storm forecast model in a previous study. Now we suggest a two-step forecast considering not only CME parameters observed in the solar vicinity but also solar wind conditions near Earth to improve the forecast capability. We consider the empirical solar wind criteria derived in this study (Bz ≤ −5 nT or Ey ≥ 3 mV/m for t≥ 2 h for moderate storms with minimum Dst less than −50 nT) and a Dst model developed by Temerin and Li (2002, 2006) (TL model). Using 55 CME-Dst pairs during 1997 to 2003, our solar wind criteria produce slightly better forecasts for 31 storm events (90%) than the forecasts based on the TL model (87%). However, the latter produces better forecasts for 24 nonstorm events (88%), while the former correctly forecasts only 71% of them. We then performed the two-step forecast. The results are as follows: (i) for 15 events that are incorrectly forecasted using CME parameters, 12 cases (80%) can be properly predicted based on solar wind conditions; (ii) if we forecast a storm when both CME and solar wind conditions are satisfied (∩), the critical success index becomes higher than that from the forecast using CME parameters alone, however, only 25 storm events (81%) are correctly forecasted; and (iii) if we forecast a storm when either set of these conditions is satisfied (∪), all geomagnetic storms are correctly forecasted. PMID:26213515
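
    The decision logic of the two-step forecast can be illustrated with a short sketch. The solar wind thresholds (Bz ≤ -5 nT or Ey ≥ 3 mV/m sustained for at least 2 h) are quoted from the abstract; the CME-based first step is abstracted into a boolean placeholder, and the hourly example values are invented.

    ```python
    # Toy version of the two-step storm forecast: CME-based prediction combined
    # with the near-Earth solar wind criteria, by union or intersection.
    import numpy as np

    def solar_wind_storm_expected(bz_nT, ey_mVm, cadence_hours=1.0, duration_hours=2.0):
        """True if Bz <= -5 nT or Ey >= 3 mV/m persists for at least duration_hours."""
        condition = (np.asarray(bz_nT) <= -5.0) | (np.asarray(ey_mVm) >= 3.0)
        needed = int(np.ceil(duration_hours / cadence_hours))
        run = 0
        for ok in condition:
            run = run + 1 if ok else 0
            if run >= needed:
                return True
        return False

    def two_step_forecast(cme_predicts_storm, bz_nT, ey_mVm, mode="union"):
        sw = solar_wind_storm_expected(bz_nT, ey_mVm)
        if mode == "union":                      # storm if either condition is met
            return cme_predicts_storm or sw
        return cme_predicts_storm and sw         # "intersection": both must be satisfied

    bz = [-2, -6, -7, -6, -1]                    # hypothetical hourly samples
    ey = [1.0, 2.5, 3.5, 3.2, 0.5]
    print(two_step_forecast(cme_predicts_storm=False, bz_nT=bz, ey_mVm=ey, mode="union"))
    ```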

  5. "Robust, replicable, and theoretically-grounded: A response to Brown and Coyne's (2017) commentary on the relationship between emodiversity and health": Correction to Quoidbach et al. (2018).

    PubMed

    2018-04-01

    Reports an error in "Robust, replicable, and theoretically-grounded: A response to Brown and Coyne's (2017) commentary on the relationship between emodiversity and health" by Jordi Quoidbach, Moïra Mikolajczak, June Gruber, Ilios Kotsou, Aleksandr Kogan and Michael I. Norton ( Journal of Experimental Psychology: General , 2018[Mar], Vol 147[3], 451-458). In the article, there is an error in the byline for the first author due to a printer error. The complete, correct institutional affiliation for Jordi Quoidbach is ESADE Business School, Ramon Llull University. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2018-06787-002.) In 2014 in the Journal of Experimental Psychology: General , we reported 2 studies demonstrating that the diversity of emotions that people experience-as measured by the Shannon-Wiener entropy index-was an independent predictor of mental and physical health, over and above the effect of mean levels of emotion. Brown and Coyne (2017) questioned both our use of Shannon's entropy and our analytic approach. We thank Brown and Coyne for their interest in our research; however, both their theoretical and empirical critiques do not undermine the central theoretical tenets and empirical findings of our research. We present an in-depth examination that reveals that our findings are statistically robust, replicable, and reflect a theoretically grounded phenomenon with real-world implications. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Electronic Structures of Anti-Ferromagnetic Tetraradicals: Ab Initio and Semi-Empirical Studies.

    PubMed

    Zhang, Dawei; Liu, Chungen

    2016-04-12

    The energy relationships and electronic structures of the lowest-lying spin states in several anti-ferromagnetic tetraradical model systems are studied with high-level ab initio and semi-empirical methods. The Full-CI method (FCI), the complete active space second-order perturbation theory (CASPT2), and the n-electron valence state perturbation theory (NEVPT2) are employed to obtain reference results. By comparing the energy relationships predicted from the Heisenberg and Hubbard models with ab initio benchmarks, the accuracy of the widely used Heisenberg model for anti-ferromagnetic spin-coupling in low-spin polyradicals is cautiously tested in this work. It is found that the strength of electron correlation (|U/t|) concerning anti-ferromagnetically coupled radical centers could range widely from strong to moderate correlation regimes and could become another degree of freedom besides the spin multiplicity. Accordingly, the Heisenberg-type model works well in the regime of strong correlation, which reproduces well the energy relationships along with the wave functions of all the spin states. In moderately spin-correlated tetraradicals, the results of the prototype Heisenberg model deviate severely from those of multi-reference electron correlation ab initio methods, while the extended Heisenberg model, containing four-body terms, can introduce reasonable corrections and maintains its accuracy in this condition. In the weak correlation regime, both the prototype Heisenberg model and its extended forms containing higher-order correction terms will encounter difficulties. Meanwhile, the Hubbard model shows balanced accuracy from strong to weak correlation cases and can reproduce qualitatively correct electronic structures, which makes it more suitable for the study of anti-ferromagnetic coupling in polyradical systems.
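
    The dependence on the correlation strength |U/t| can be made concrete with the smallest possible example, a two-site Hubbard dimer at half filling, which is only a cartoon of the tetraradical systems discussed above. Its exact singlet-triplet gap is (sqrt(U^2 + 16 t^2) - U)/2, while a Heisenberg description predicts J = 4 t^2 / U; the two agree for strong correlation and diverge as |U/t| decreases.

    ```python
    # Compare the exact Hubbard-dimer singlet-triplet gap with the Heisenberg
    # estimate J = 4 t^2 / U across correlation strengths (illustrative values).
    import numpy as np

    t = 1.0
    for u_over_t in [20.0, 8.0, 4.0, 2.0, 1.0]:
        U = u_over_t * t
        exact_gap = (np.sqrt(U**2 + 16 * t**2) - U) / 2.0
        heisenberg_J = 4 * t**2 / U
        print(f"U/t = {u_over_t:5.1f}  exact gap = {exact_gap:.3f}  "
              f"Heisenberg J = {heisenberg_J:.3f}  ratio = {heisenberg_J / exact_gap:.2f}")
    ```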

  7. Two-step forecast of geomagnetic storm using coronal mass ejection and solar wind condition.

    PubMed

    Kim, R-S; Moon, Y-J; Gopalswamy, N; Park, Y-D; Kim, Y-H

    2014-04-01

    To forecast geomagnetic storms, we had examined initially observed parameters of coronal mass ejections (CMEs) and introduced an empirical storm forecast model in a previous study. Now we suggest a two-step forecast considering not only CME parameters observed in the solar vicinity but also solar wind conditions near Earth to improve the forecast capability. We consider the empirical solar wind criteria derived in this study (Bz ≤ -5 nT or Ey ≥ 3 mV/m for t ≥ 2 h for moderate storms with minimum Dst less than -50 nT) and a Dst model developed by Temerin and Li (2002, 2006) (TL model). Using 55 CME-Dst pairs during 1997 to 2003, our solar wind criteria produce slightly better forecasts for 31 storm events (90%) than the forecasts based on the TL model (87%). However, the latter produces better forecasts for 24 nonstorm events (88%), while the former correctly forecasts only 71% of them. We then performed the two-step forecast. The results are as follows: (i) for 15 events that are incorrectly forecasted using CME parameters, 12 cases (80%) can be properly predicted based on solar wind conditions; (ii) if we forecast a storm when both CME and solar wind conditions are satisfied (∩), the critical success index becomes higher than that from the forecast using CME parameters alone, however, only 25 storm events (81%) are correctly forecasted; and (iii) if we forecast a storm when either set of these conditions is satisfied (∪), all geomagnetic storms are correctly forecasted.

  8. Behavior, Sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation

    PubMed Central

    Eickhoff, Simon B.; Nichols, Thomas E.; Laird, Angela R.; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T.

    2016-01-01

    Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and to thus formulate recommendations for future ALE studies. We could show as a first consequence that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments into an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. PMID:27179606

  9. Relationships between media use, body fatness and physical activity in children and youth: a meta-analysis.

    PubMed

    Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I

    2004-10-01

    To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity. Meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
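
    A brief sketch of the kind of pooled effect size reported above: a sample-size-weighted mean correlation. The Fisher z-transform pooling shown here is one common convention and is an assumption on my part; the published meta-analysis's exact weighting and bias corrections are not reproduced, and the study-level values are invented.

    ```python
    # Illustrative sample-size-weighted pooling of Pearson correlations.
    import numpy as np

    def weighted_mean_r(r, n):
        """Pool correlations weighted by sample size via Fisher's z-transform."""
        r, n = np.asarray(r, float), np.asarray(n, float)
        z = np.arctanh(r)                                  # Fisher z
        z_bar = np.sum((n - 3) * z) / np.sum(n - 3)        # inverse-variance-style weights
        return np.tanh(z_bar)

    # Hypothetical study-level correlations between TV viewing and body fatness
    r_values = [0.04, 0.09, 0.07, 0.05]
    n_values = [1200, 450, 3300, 800]
    print(round(weighted_mean_r(r_values, n_values), 3))
    ```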

  10. Relativistic corrections to the form factors of Bc into P-wave orbitally excited charmonium

    NASA Astrophysics Data System (ADS)

    Zhu, Ruilin

    2018-06-01

    We investigated the form factors of the Bc meson into P-wave orbitally excited charmonium using the nonrelativistic QCD effective theory. Through the analytic computation, the next-to-leading order relativistic corrections to the form factors were obtained, and the asymptotic expressions were studied in the infinite bottom quark mass limit. Employing the general form factors, we discussed the exclusive decays of the Bc meson into P-wave orbitally excited charmonium and a light meson. We found that the relativistic corrections lead to a large correction for the form factors, which makes the branching ratios of the decay channels B(Bc± → χcJ(hc) + π±(K±)) larger. These results are useful for the phenomenological analysis of the Bc meson decays into P-wave charmonium, which shall be tested in the LHCb experiments.

  11. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  12. Power corrections to TMD factorization for Z-boson production

    DOE PAGES

    Balitsky, I.; Tarasov, A.

    2018-05-24

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to TMD factorization formula for Z-boson production and Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in Nc, power corrections are expressed in terms of leading power TMDs by QCD equations of motion.

  13. Power corrections to TMD factorization for Z-boson production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balitsky, I.; Tarasov, A.

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to TMD factorization formula for Z-boson production and Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in Nc, power corrections are expressed in terms of leading power TMDs by QCD equations of motion.

  14. Empirical conversion of the vertical profile of reflectivity from Ku-band to S-band frequency

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Hong, Yang; Qi, Youcun; Wen, Yixin; Zhang, Jian; Gourley, Jonathan J.; Liao, Liang

    2013-02-01

    This paper presents an empirical method for converting reflectivity from Ku-band (13.8 GHz) to S-band (2.8 GHz) for several hydrometeor species, which facilitates the incorporation of Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) measurements into quantitative precipitation estimation (QPE) products from the U.S. Next-Generation Radar (NEXRAD). The development of empirical dual-frequency relations is based on theoretical simulations, which have assumed appropriate scattering and microphysical models for liquid and solid hydrometeors (raindrops, snow, and ice/hail). Particle phase, shape, orientation, and density (especially for snow particles) have been considered in applying the T-matrix method to compute the scattering amplitudes. Gamma particle size distribution (PSD) is utilized to model the microphysical properties in the ice region, melting layer, and raining region of precipitating clouds. The variability of PSD parameters is considered to study the characteristics of dual-frequency reflectivity, especially the variations in radar dual-frequency ratio (DFR). The empirical relations between DFR and Ku-band reflectivity have been derived for particles in different regions within the vertical structure of precipitating clouds. The reflectivity conversion using the proposed empirical relations has been tested using real data collected by TRMM-PR and a prototype polarimetric WSR-88D (Weather Surveillance Radar 88 Doppler) radar, KOUN. The processing and analysis of collocated data demonstrate the validity of the proposed empirical relations and substantiate their practical significance for reflectivity conversion, which is essential to the TRMM-based vertical profile of reflectivity correction approach in improving NEXRAD-based QPE.
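
    A conversion of this general form, Z_S = Z_Ku + DFR(Z_Ku) with a region-dependent polynomial DFR, can be sketched as follows. The coefficients below are placeholders, not the published rain/snow/melting-layer relations; only the overall structure is taken from the abstract.

    ```python
    # Illustrative Ku-band to S-band reflectivity conversion using hypothetical
    # DFR(Z_Ku) polynomials per hydrometeor region.
    import numpy as np

    DFR_COEFFS = {                      # placeholder coefficients, highest degree first, dB
        "rain":          [0.0004, -0.005, 0.2],
        "snow":          [0.001,   0.01,  0.5],
        "melting_layer": [0.002,   0.02,  1.0],
    }

    def ku_to_s_band(z_ku_dbz, region="rain"):
        """Convert Ku-band reflectivity (dBZ) to an S-band estimate for one region."""
        z_ku = np.asarray(z_ku_dbz, dtype=float)
        return z_ku + np.polyval(DFR_COEFFS[region], z_ku)

    print(ku_to_s_band([20.0, 30.0, 40.0], region="snow"))
    ```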

  15. A comparative study of the effects of cone-plate and parallel-plate geometries on rheological properties under oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Song, Hyeong Yong; Salehiyan, Reza; Li, Xiaolei; Lee, Seung Hak; Hyun, Kyu

    2017-11-01

    In this study, the effects of cone-plate (C/P) and parallel-plate (P/P) geometries on the rheological properties of various complex fluids were investigated, e.g. single-phase (polymer melts and solutions) and multiphase systems (polymer blend and nanocomposite, and suspension). Small amplitude oscillatory shear (SAOS) tests were carried out to compare linear rheological responses while nonlinear responses were compared using large amplitude oscillatory shear (LAOS) tests at different frequencies. Moreover, the Fourier-transform (FT) rheology method was used to analyze the nonlinear responses under LAOS flow. Experimental results were compared with predictions obtained by single-point correction and shear rate correction. For all systems, SAOS data measured by C/P and P/P coincide with each other, but results showed discordance between C/P and P/P measurements in the nonlinear regime. For all systems except xanthan gum solutions, first-harmonic moduli were corrected using a single horizontal shift factor, whereas FT rheology-based nonlinear parameters (I3/1, I5/1, Q3, and Q5) were corrected using vertical shift factors that are well predicted by single-point correction. Xanthan gum solutions exhibited anomalous corrections. Their first-harmonic Fourier moduli were superposed using a horizontal shift factor predicted by shear rate correction applicable to highly shear thinning fluids. Distinct corrections were observed for FT rheology-based nonlinear parameters. I3/1 and I5/1 were superposed by horizontal shifts, while the other systems displayed vertical shifts of I3/1 and I5/1. Q3 and Q5 of xanthan gum solutions were corrected using both horizontal and vertical shift factors. In particular, the obtained vertical shift factors for Q3 and Q5 were twice as large as predictions made by single-point correction. Such larger values are rationalized by the definitions of Q3 and Q5. These results highlight the significance of horizontal shift corrections in nonlinear oscillatory shear data.

  16. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS-IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403.

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
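
    The underlying idea, a spectral correction factor formed as the ratio of the reference dose (spectrum folded with fluence-to-dose conversion coefficients) to the dose indicated by a 252Cf-calibrated instrument (spectrum folded with its energy-dependent response), can be sketched as below. All numerical values are placeholders, not data from the report or the IAEA-TRS-403 spectra.

    ```python
    # Conceptual spectral correction factor for a moderated-type neutron dose meter.
    import numpy as np

    energy   = np.logspace(-8, 2, 6)                       # MeV, coarse example grid
    fluence  = np.array([0.1, 0.2, 0.2, 0.3, 0.1, 0.1])    # per-bin fluence (arbitrary units)
    h_per_phi = np.array([10, 12, 40, 400, 500, 550.0])    # pSv*cm^2, fluence-to-dose coefficients
    response  = np.array([9, 11, 38, 380, 300, 150.0])     # pSv*cm^2, 252Cf-normalized response

    reference_dose = np.sum(fluence * h_per_phi)           # "true" ambient dose equivalent
    indicated_dose = np.sum(fluence * response)            # what the calibrated meter reads
    correction_factor = reference_dose / indicated_dose    # > 1 when high-energy neutrons
    print(round(correction_factor, 2))                     # are under-read
    ```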

  17. Atmospheric correction using near-infrared bands for satellite ocean color data processing in the turbid western Pacific region.

    PubMed

    Wang, Menghua; Shi, Wei; Jiang, Lide

    2012-01-16

    A regional near-infrared (NIR) ocean normalized water-leaving radiance (nL(w)(λ)) model is proposed for atmospheric correction for ocean color data processing in the western Pacific region, including the Bohai Sea, Yellow Sea, and East China Sea. Our motivation for this work is to derive ocean color products in the highly turbid western Pacific region using the Geostationary Ocean Color Imager (GOCI) onboard South Korean Communication, Ocean, and Meteorological Satellite (COMS). GOCI has eight spectral bands from 412 to 865 nm but does not have shortwave infrared (SWIR) bands that are needed for satellite ocean color remote sensing in the turbid ocean region. Based on a regional empirical relationship between the NIR nL(w)(λ) and diffuse attenuation coefficient at 490 nm (K(d)(490)), which is derived from the long-term measurements with the Moderate-resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, an iterative scheme with the NIR-based atmospheric correction algorithm has been developed. Results from MODIS-Aqua measurements show that ocean color products in the region derived from the new proposed NIR-corrected atmospheric correction algorithm match well with those from the SWIR atmospheric correction algorithm. Thus, the proposed new atmospheric correction method provides an alternative for ocean color data processing for GOCI (and other ocean color satellite sensors without SWIR bands) in the turbid ocean regions of the Bohai Sea, Yellow Sea, and East China Sea, although the SWIR-based atmospheric correction approach is still much preferred. The proposed atmospheric correction methodology can also be applied to other turbid coastal regions.
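
    The iterative scheme can be outlined as a fixed-point loop: start from a zero ("black") NIR water-leaving radiance, run the atmospheric correction, estimate Kd(490), use the regional empirical relation to update nLw(NIR), and repeat until the NIR estimate converges. In the skeleton below, the atmospheric-correction step, the bio-optical Kd(490) estimate, and the regional relation are all placeholder functions; only the structure of the iteration is illustrated.

    ```python
    # Skeleton of the iterative NIR-based atmospheric correction described above.
    def atmospheric_correction(l_toa, nlw_nir_guess):
        """Placeholder: remove the atmospheric signal given an assumed NIR water signal."""
        return {"nlw_vis": max(l_toa - 0.8 * nlw_nir_guess - 1.0, 0.0)}

    def kd490_from_nlw(nlw_vis):
        """Placeholder bio-optical estimate of Kd(490) from visible nLw."""
        return 0.02 + 0.5 / (nlw_vis + 1.0)

    def nlw_nir_from_kd490(kd490):
        """Placeholder regional empirical relation nLw(NIR) = f(Kd(490))."""
        return 0.3 * kd490

    def iterate_nir_correction(l_toa, tol=1e-4, max_iter=20):
        nlw_nir = 0.0                                  # first pass: "black" NIR ocean
        for _ in range(max_iter):
            products = atmospheric_correction(l_toa, nlw_nir)
            updated = nlw_nir_from_kd490(kd490_from_nlw(products["nlw_vis"]))
            if abs(updated - nlw_nir) < tol:
                break
            nlw_nir = updated
        return products

    print(iterate_nir_correction(5.0))
    ```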

  18. Development of local calibration factors and design criteria values for mechanistic-empirical pavement design.

    DOT National Transportation Integrated Search

    2015-08-01

    A mechanistic-empirical (ME) pavement design procedure allows for analyzing and selecting pavement structures based : on predicted distress progression resulting from stresses and strains within the pavement over its design life. The Virginia : Depar...

  19. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.

    2015-11-14

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.

  20. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  1. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  2. Factors influencing workplace violence risk among correctional health workers: insights from an Australian survey.

    PubMed

    Cashmore, Aaron W; Indig, Devon; Hampton, Stephen E; Hegney, Desley G; Jalaludin, Bin B

    2016-11-01

    Little is known about the environmental and organisational determinants of workplace violence in correctional health settings. This paper describes the views of health professionals working in these settings on the factors influencing workplace violence risk. All employees of a large correctional health service in New South Wales, Australia, were invited to complete an online survey. The survey included an open-ended question seeking the views of participants about the factors influencing workplace violence in correctional health settings. Responses to this question were analysed using qualitative thematic analysis. Participants identified several factors that they felt reduced the risk of violence in their workplace, including: appropriate workplace health and safety policies and procedures; professionalism among health staff; the presence of prison guards and the quality of security provided; and physical barriers within clinics. Conversely, participants perceived workplace violence risk to be increased by: low health staff-to-patient and correctional officer-to-patient ratios; high workloads; insufficient or underperforming security staff; and poor management of violence, especially horizontal violence. The views of these participants should inform efforts to prevent workplace violence among correctional health professionals.

  3. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

    The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed by water to graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, kfl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general the fluence correction factors are found to be close to unity and the analytical and Monte Carlo codes give consistent values when considering the differences in secondary particle transport. When considering only protons the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths predominantly due to the contributions from alpha particles and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by kfl = 0.9964 + 0.0024·zw-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by kfl = 0.9947 + 0.0024·zw-eq with a relative standard uncertainty of 0.3%. These results are of direct relevance to graphite calorimetry in low-energy protons but given that the fluence correction factor is almost solely influenced by non-elastic nuclear interactions the results are also relevant for plastic phantoms that consist of carbon, oxygen and hydrogen atoms as well as for soft tissues.
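
    The abstract's own fit can be folded directly into the dose conversion. In the sketch below, the kfl expression is the fluence-based fit quoted above; the water-to-graphite stopping-power ratio value and the assumption that the water-equivalent depth is given in cm are placeholders of mine, not values from the paper.

    ```python
    # Applying the depth-dependent fluence correction when converting a graphite
    # calorimeter measurement to dose-to-water (60 MeV protons, per the abstract).
    def dose_to_water(dose_to_graphite_gray, depth_water_equivalent_cm,
                      stopping_power_ratio_w_g=1.11):      # placeholder s_w,g value
        kfl = 0.9964 + 0.0024 * depth_water_equivalent_cm  # fluence-based fit from the abstract
        return dose_to_graphite_gray * stopping_power_ratio_w_g * kfl

    print(round(dose_to_water(1.000, 2.0), 4))
    ```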

  4. EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice

    NASA Astrophysics Data System (ADS)

    Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.

    2016-12-01

    The very rough ridge sea ice accounts for significant percentage of total ice areas and even larger percentage of total volume. The commonly used Radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant Electromagnetic (EM) bias in the estimated surface elevations, and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough surface and volume scattering model was developed to simulate radar returns from the rough sea ice `layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically-based waveform model and subsequently correct the roughness induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations and estimate the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied this EMBC algorithm to co-incident LiDAR/Radar measurements collected during a Cryosat-2 under-flight by the NASA IceBridge missions. Results show that not only does the waveform model fit very well to the measured radar waveform, but also the roughness parameters derived independently from the LiDAR and radar data agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in-situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically-based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.

  5. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons

    PubMed Central

    2014-01-01

    Background The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. Methods We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations (“reason mentions”) that were identified by the review to represent a reason in a given author’s publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. Results We received 19 responses, from which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct, for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. Conclusions This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved? PMID:25262532

  6. HST/WFC3: understanding and mitigating radiation damage effects in the CCD detectors

    NASA Astrophysics Data System (ADS)

    Baggett, S. M.; Anderson, J.; Sosey, M.; Gosmeyer, C.; Bourque, M.; Bajaj, V.; Khandrika, H.; Martlin, C.

    2016-07-01

    At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel is a 4096x4096 pixel e2v CCD array. While these detectors continue to perform extremely well after more than 7 years in low-earth orbit, the cumulative effects of radiation damage are becoming increasingly evident. The result is a continual increase of the hotpixel population and the progressive loss in charge-transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. In this report, we summarize the radiation damage effects seen in WFC3/UVIS and the evolution of the CTE losses as a function of time, source brightness, and image-background level. In addition, we discuss the available mitigation options, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low-background images for a relatively small noise penalty. Currently, all WFC3 observers are encouraged to consider post-flash for images with low backgrounds. Finally, a pixel-based CTE correction is available for use after the images have been acquired. Similar to the software in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an observationally-defined model of how much charge is captured and released in order to reconstruct the image. As of Feb 2016, the pixel-based CTE correction is part of the automated WFC3 calibration pipeline. Observers with pre-existing data may request their images from MAST (Mikulski Archive for Space Telescopes) to obtain the improved products.

  7. SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haywood, J

    Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I've calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Method: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. Dmax dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second check calculations. Patient curvature was estimated by the largest sphere that aligns to the patient contour, and appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ∼(0.8–1.0) and ∼(0.9–1.01) respectively for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ∼(3–14)% to ∼(0–2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels, < 5%. While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.
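
    One way the tabulated factors could enter a TG-71-style second check is sketched below: since both factors are at or below unity, dividing the flat-phantom monitor units by their product raises the hand-calculated MU toward the treatment planning system value. The factor values, the division convention, and the patient-matching step are my assumptions for illustration, not the author's worked procedure.

    ```python
    # Hedged sketch: folding geometry and heterogeneity correction factors into a
    # flat-water-phantom electron MU hand calculation.
    def corrected_monitor_units(mu_flat_phantom, geometry_factor, heterogeneity_factor):
        """Apply curvature and tissue-density corrections to a flat-phantom hand-calc MU."""
        return mu_flat_phantom / (geometry_factor * heterogeneity_factor)

    mu_tg71 = 230.0          # hypothetical flat-phantom hand calculation
    cf_geometry = 0.93       # hypothetical value from the sphere matched to the patient contour
    cf_heterogeneity = 0.98  # hypothetical value from tissue stopping-power data at the CT density
    print(round(corrected_monitor_units(mu_tg71, cf_geometry, cf_heterogeneity), 1))
    ```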

  8. Empiric antibiotic therapy in urinary tract infection in patients with risk factors for antibiotic resistance in a German emergency department.

    PubMed

    Bischoff, Sebastian; Walter, Thomas; Gerigk, Marlis; Ebert, Matthias; Vogelmann, Roger

    2018-01-26

    The aim of this study was to identify clinical risk factors for antimicrobial resistances and multidrug resistance (MDR) in urinary tract infections (UTI) in an emergency department in order to improve empirical therapy. UTI cases from an emergency department (ED) during January 2013 and June 2015 were analyzed. Differences between patients with and without resistances towards Ciprofloxacin, Piperacillin with Tazobactam (Pip/taz), Gentamicin, Cefuroxime, Cefpodoxime and Ceftazidime were analyzed with Fisher's exact tests. Results were used to identify risk factors with logistic regression modelling. Susceptibility rates were analyzed in relation to risk factors. One hundred thirty-seven of four hundred sixty-nine patients who met the criteria of UTI had a positive urine culture. An MDR pathogen was found in 36.5% of these. Overall susceptibility was less than 85% for standard antimicrobial agents. Logistic regression identified residence in nursing homes, male gender, hospitalization within the last 30 days, renal transplantation, antibiotic treatment within the last 30 days, indwelling urinary catheter and recurrent UTI as risk factors for MDR or any of these resistances. For patients with no risk factors Ciprofloxacin had 90%, Pip/taz 88%, Gentamicin 95%, Cefuroxime 98%, Cefpodoxime 98% and Ceftazidime 100% susceptibility. For patients with 1 risk factor Ciprofloxacin had 80%, Pip/taz 80%, Gentamicin 88%, Cefuroxime 78%, Cefpodoxime 78% and Ceftazidime 83% susceptibility. For 2 or more risk factors Ciprofloxacin drops its susceptibility to 52%, Cefuroxime to 54% and Cefpodoxime to 61%. Pip/taz, Gentamicin and Ceftazidime remain at 75% and 77%, respectively. We identified several risk factors for resistances and MDR in UTI. Susceptibility towards antimicrobials depends on these risk factors. With no risk factor cephalosporins seem to be the best choice for empiric therapy, but in patients with risk factors the beta-lactam penicillin Piperacillin with Tazobactam is an equal or better choice compared to fluoroquinolones, cephalosporins or gentamicin. This study highlights the importance of monitoring local resistance rates and its risk factors in order to improve empiric therapy in a local environment.

  9. Pre- and Post-equinox ROSINA production rates calculated using a realistic empirical coma model derived from AMPS-DSMC simulations of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu

    2016-04-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet coma (<400 km) of comet 67P for the pre-equinox orbit of comet 67P/Churyumov-Gerasimenko. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model to significantly further from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model become significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean state, empirical model is the ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on the single point measurement. In this presentation we will present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
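
    For contrast with the empirical coma model, the Haser model mentioned above as the simpler alternative can be written in a few lines: an isotropic, constant-velocity outflow with exponential photodestruction, which also shows how a single-point density measurement (as from ROSINA COPS) maps to a production rate. The outflow speed, lifetime, and example density are illustrative, not fitted 67P quantities.

    ```python
    # Haser model sketch: n(r) = Q / (4 pi v r^2) * exp(-r / (v * tau))
    import numpy as np

    def haser_density(r_m, production_rate_s, v_ms=700.0, lifetime_s=1e5):
        """Number density (m^-3) at cometocentric distance r for a parent species."""
        scale_length = v_ms * lifetime_s
        return production_rate_s / (4.0 * np.pi * v_ms * r_m**2) * np.exp(-r_m / scale_length)

    def production_from_density(n_m3, r_m, v_ms=700.0, lifetime_s=1e5):
        """Invert a single-point density measurement into a total production rate."""
        return n_m3 * 4.0 * np.pi * v_ms * r_m**2 / np.exp(-r_m / (v_ms * lifetime_s))

    print(f"{production_from_density(1e13, 30e3):.2e}")  # molecules per second (example values)
    ```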

  10. Ab Initio and Improved Empirical Potentials for the Calculation of the Anharmonic Vibrational States and Intramolecular Mode Coupling of N-Methylacetamide

    NASA Technical Reports Server (NTRS)

    Gregurick, Susan K.; Chaban, Galina M.; Gerber, R. Benny; Kwak, Dochou (Technical Monitor)

    2001-01-01

    The second-order Moller-Plesset ab initio electronic structure method is used to compute points for the anharmonic mode-coupled potential energy surface of N-methylacetamide (NMA) in the trans(sub ct) configuration, including all degrees of freedom. The vibrational states and the spectroscopy are directly computed from this potential surface using the Correlation Corrected Vibrational Self-Consistent Field (CC-VSCF) method. The results are compared with CC-VSCF calculations using both the standard and improved empirical Amber-like force fields and available low temperature experimental matrix data. Analysis of our calculated spectroscopic results show that: (1) The excellent agreement between the ab initio CC-VSCF calculated frequencies and the experimental data suggest that the computed anharmonic potentials for N-methylacetamide are of a very high quality; (2) For most transitions, the vibrational frequencies obtained from the ab initio CC-VSCF method are superior to those obtained using the empirical CC-VSCF methods, when compared with experimental data. However, the improved empirical force field yields better agreement with the experimental frequencies as compared with a standard AMBER-type force field; (3) The empirical force field in particular overestimates anharmonic couplings for the amide-2 mode, the methyl asymmetric bending modes, the out-of-plane methyl bending modes, and the methyl distortions; (4) Disagreement between the ab initio and empirical anharmonic couplings is greater than the disagreement between the frequencies, and thus the anharmonic part of the empirical potential seems to be less accurate than the harmonic contribution;and (5) Both the empirical and ab initio CC-VSCF calculations predict a negligible anharmonic coupling between the amide-1 and other internal modes. The implication of this is that the intramolecular energy flow between the amide-1 and the other internal modes may be smaller than anticipated. These results may have important implications for the anharmonic force fields of peptides, for which N-methylacetamide is a model.

  11. Radiative corrections to the η(') Dalitz decays

    NASA Astrophysics Data System (ADS)

    Husek, Tomáš; Kampf, Karol; Novotný, Jiří; Leupold, Stefan

    2018-05-01

    We provide the complete set of radiative corrections to the Dalitz decays η(')→ℓ+ℓ-γ beyond the soft-photon approximation, i.e., over the whole range of the Dalitz plot and with no restrictions on the energy of a radiative photon. The corrections inevitably depend on the η(')→ γ*γ(*) transition form factors. For the singly virtual transition form factor appearing, e.g., in the bremsstrahlung correction, recent dispersive calculations are used. For the one-photon-irreducible contribution at the one-loop level (for the doubly virtual form factor), we use a vector-meson-dominance-inspired model while taking into account the η -η' mixing.

  12. Impact of creatine kinase correction on the predictive value of S-100B after mild traumatic brain injury.

    PubMed

    Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin

    2006-01-01

    To validate a correction factor for the extracranial release of the astroglial protein, S-100B, based on concomitant creatine kinase (CK) levels. The CK- S-100B relationship in non-head injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT, three-month headache and three-month post concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of 3-month headache (area under curve [AUC] 0.46 vs 0.52, p=0.02), but not PCS or initial head CT. Using a cutoff that maximizes sensitivity (> or = 90%), corrected S-100B improved the prediction of initial head CT scan (negative predictive value from 75% [95% CI, 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for abnormal head CT, CK-corrected S100-B can further reduce the number of unnecessary brain CT scans performed after this injury.

  13. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
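
    The form of the grid-scale correction factor is not stated in the abstract; the sketch below assumes a simple per-grid-cell multiplicative factor (mean observed over mean simulated season length in the historical period) and, echoing the paper's recommendation, explicitly checks how much of the historical discrepancy the factor actually removes. Data and names are invented.

      import numpy as np

      # Assumed ratio-type correction; the study's actual formulation is not reproduced.
      def correction_factors(observed_hist, simulated_hist):
          """Per-grid-cell factor = mean observed / mean simulated (historical period)."""
          return observed_hist.mean(axis=0) / simulated_hist.mean(axis=0)

      def rmse(a, b):
          return np.sqrt(np.mean((a - b) ** 2))

      # toy example: 10 historical years x 4 grid cells of season length (days)
      rng = np.random.default_rng(1)
      obs = 150.0 + rng.normal(0.0, 5.0, size=(10, 4))
      sim = obs * np.array([0.9, 1.1, 1.0, 1.3]) + rng.normal(0.0, 8.0, size=(10, 4))
      f = correction_factors(obs, sim)
      print("historical RMSE, raw      :", rmse(sim, obs))
      print("historical RMSE, corrected:", rmse(sim * f, obs))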

  14. Antibiotic susceptibility of Gram-negatives isolated from bacteremia in children with cancer. Implications for empirical therapy of febrile neutropenia.

    PubMed

    Castagnola, Elio; Caviglia, Ilaria; Pescetto, Luisa; Bagnasco, Francesca; Haupt, Riccardo; Bandettini, Roberto

    2015-01-01

    Monotherapy is recommended as the first choice for initial empirical therapy of febrile neutropenia, but local epidemiological and antibiotic susceptibility data are now considered pivotal to designing a correct management strategy. The aim was to evaluate the proportion of Gram-negative rods isolated from bloodstream infections in children with cancer that are resistant to the antibiotics recommended for this indication. The in vitro susceptibility to ceftazidime, piperacillin-tazobactam, meropenem and amikacin of Gram-negatives isolated in bacteremic episodes in children with cancer followed at the Istituto "Giannina Gaslini", Genoa, Italy, in the period 2001-2013 was retrospectively analyzed using the definitions recommended by EUCAST in 2014. Data were analyzed for each single drug and for the combination of amikacin with each β-lactam. A combination was considered effective in the absence of concomitant resistance to both drugs; it was not evaluated by means of in vitro analysis of antibiotic combinations (e.g., checkerboard). A total of 263 strains were evaluated: 27% were resistant to piperacillin-tazobactam, 23% to ceftazidime, 12% to meropenem and 13% to amikacin. Concomitant resistance to the β-lactam and amikacin was detected in 6% of strains for piperacillin-tazobactam, 5% for ceftazidime and 5% for meropenem. During the study period there was a nonsignificant increase in the proportion of strains resistant to the β-lactams indicated for monotherapy, and also in resistance to the combined therapies. In an era of increasing resistance to antibiotics, guideline-recommended monotherapy may not be appropriate for initial empirical therapy of febrile neutropenia. Strict local surveillance of etiology and antibiotic susceptibility is mandatory for correct management of this complication in cancer patients.
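
    A minimal sketch of the bookkeeping behind the percentages reported above: per-drug resistance proportions, and a β-lactam plus amikacin combination counted as ineffective only when an isolate is concomitantly resistant to both drugs (no in vitro synergy testing implied). The isolate records are invented.

      # Invented susceptibility records; True means resistant by EUCAST breakpoints.
      isolates = [
          {"pip_tazo": True,  "ceftazidime": False, "meropenem": False, "amikacin": False},
          {"pip_tazo": True,  "ceftazidime": True,  "meropenem": False, "amikacin": True},
          {"pip_tazo": False, "ceftazidime": False, "meropenem": False, "amikacin": False},
          {"pip_tazo": False, "ceftazidime": True,  "meropenem": True,  "amikacin": False},
      ]

      n = len(isolates)
      for drug in ("pip_tazo", "ceftazidime", "meropenem", "amikacin"):
          resistant = sum(s[drug] for s in isolates)
          print(f"{drug:12s}: {100 * resistant / n:.0f}% resistant")

      # combination counted as ineffective only on concomitant resistance to both drugs
      for beta_lactam in ("pip_tazo", "ceftazidime", "meropenem"):
          both = sum(s[beta_lactam] and s["amikacin"] for s in isolates)
          print(f"{beta_lactam:12s} + amikacin: {100 * both / n:.0f}% concomitant resistance")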

  15. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper identifies the correction factors involved when freshmen determine the speed of sound in an undergraduate physics laboratory, and compares their results with the speed of sound determined by a senior student. Both used the same instrument, a resonance tube with its apparatus. The speed of sound obtained by the senior student was 333.38 m s-1, deviating from theory by about 3.98%. The freshmen's results fell into three categories: accurate values (52.63%), middle values (31.58%) and lower values (15.79%). Based on the analysis, several correction factors were identified: human error in locating the first and second harmonics, the end correction associated with the tube diameter, and environmental factors such as temperature, humidity, density, and pressure.
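
    For context, the textbook resonance-tube relations behind the end correction mentioned above (standard physics, not taken from the paper): for a tube closed at one end, the first two resonance lengths satisfy L1 + e = λ/4 and L2 + e = 3λ/4, so v = 2f(L2 - L1) cancels the end correction e, whereas a single-resonance estimate needs e ≈ 0.3 × tube diameter. Example values are invented.

      # Textbook resonance-tube relations; example numbers are invented.
      def speed_two_resonances(f_hz, L1_m, L2_m):
          """lambda = 2*(L2 - L1): the end correction cancels."""
          return 2.0 * f_hz * (L2_m - L1_m)

      def speed_one_resonance(f_hz, L1_m, diameter_m, end_corr_coeff=0.3):
          """Single resonance: lambda/4 = L1 + e with e ~ 0.3*d."""
          return f_hz * 4.0 * (L1_m + end_corr_coeff * diameter_m)

      f = 512.0                        # tuning-fork frequency, Hz
      L1, L2, d = 0.159, 0.492, 0.03   # resonance lengths and tube diameter, m
      print(speed_two_resonances(f, L1, L2))   # ~341 m/s
      print(speed_one_resonance(f, L1, d))     # sensitive to the assumed end correction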

  16. An experimental study of recombination and polarity effect in a set of customized plane parallel ionization chambers.

    PubMed

    Kron, T; McNiven, A; Witruk, B; Kenny, M; Battista, J

    2006-12-01

    Plane parallel ionization chambers are an important tool for dosimetry and absolute calibration of electron beams used for radiotherapy. Most dosimetric protocols require corrections for recombination and polarity effects, which are to be determined experimentally as they depend on chamber design and radiation quality. Both effects were investigated in electron beams from a linear accelerator (Varian 21CD) for a set of four tissue-equivalent plane parallel ionization chambers customized for the present research by Standard Imaging (Madison, WI). All four chambers share the same design and air cavity dimensions, differing only in the diameter of their collecting electrode and the corresponding width of the guard ring. The diameters of the collecting electrodes were 2 mm, 4 mm, 10 mm and 20 mm. Measurements were taken using electron beams of nominal energy 6 to 20 MeV in a 10 cm × 10 cm field at an SSD of 100 cm at various depths in a Solid Water slab phantom. No significant variation of the recombination effect was found with radiation quality, depth of measurement or chamber design. However, the polarity effect exceeded 5% for the chambers with small collecting electrodes for an effective electron energy below 4 MeV at the point of measurement. The magnitude of the effect increased with decreasing electron energy in the phantom. The polarity correction factor calculated following AAPM protocol TG-51 ranged from approximately 1.00 for the 20 mm chamber to less than 0.95 for the 2 mm chamber at 4.1 cm depth in an electron beam of nominally 12 MeV. By inverting the chamber it could be shown that the polarity effect did not depend on the polarity of the electrode first traversed by the electron beam. Similarly, the introduction of an air gap between the overlying phantom layer and the chambers demonstrated that the angular distribution of the electrons at the point of measurement had a lesser effect on the polarity correction than the electron energy itself. The magnitude of the absolute difference between the charge collected at positive and negative polarity was found to correlate with the area of the collecting electrode, which is consistent with the explanation that differences in the thickness of the collecting electrodes and the number of electrons stopped in them contribute significantly to the polarity effect. Overall, the polarity effects found in the present study would have a negligible effect on electron beam calibration at a measurement depth recommended by most calibration protocols. However, the present work tested the corrections under extreme conditions, thereby aiming at a greater understanding of the mechanism underlying the correction factors for these chambers. This may lead to better chamber design for absolute dosimetry and electron beam characterization with less reliance on empirical corrections.
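
    For reference, the AAPM TG-51 expressions the abstract refers to, in a minimal sketch with invented readings: the polarity correction uses the signed readings at both polarities, and the two-voltage recombination correction for pulsed beams uses readings at the full and a reduced bias voltage. Variable names are mine.

      # Standard AAPM TG-51 polarity and two-voltage recombination corrections.
      def p_pol(m_plus, m_minus, m_used):
          """Polarity correction; m_used is the (signed) reading at the polarity
          actually used for calibration."""
          return abs((m_plus - m_minus) / (2.0 * m_used))

      def p_ion_pulsed(v_high, v_low, m_high, m_low):
          """Two-voltage recombination correction for pulsed beams."""
          return (1.0 - v_high / v_low) / (m_high / m_low - v_high / v_low)

      # invented readings (nC), signs included for the polarity measurement
      print(p_pol(m_plus=20.15, m_minus=-19.85, m_used=20.15))                  # ~0.993
      print(p_ion_pulsed(v_high=300.0, v_low=150.0, m_high=20.15, m_low=20.05)) # ~1.005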

  17. The correction of time and temperature effects in MR-based 3D Fricke xylenol orange dosimetry.

    PubMed

    Welch, Mattea L; Jaffray, David A

    2017-04-21

    Previously developed MR-based three-dimensional (3D) Fricke-xylenol orange (FXG) dosimeters can provide end-to-end quality assurance and validation protocols for pre-clinical radiation platforms. FXG dosimeters quantify ionizing-irradiation-induced oxidation of Fe²⁺ ions using pre- and post-irradiation MR imaging methods that detect changes in the spin-lattice relaxation rate (R1 = 1/T1) caused by irradiation-induced oxidation of Fe²⁺. Chemical changes in MR-based FXG dosimeters that occur over time and with changes in temperature can decrease dosimetric accuracy if they are not properly characterized and corrected. This paper describes the characterization, development and utilization of an empirical model-based correction algorithm for time and temperature effects in the context of a pre-clinical irradiator and a 7 T pre-clinical MR imaging system. Time- and temperature-dependent changes of R1 values were characterized using variable-TR spin-echo imaging. R1-time and R1-temperature dependencies were fit using non-linear least-squares fitting methods. Models were validated using leave-one-out cross-validation and resampling. Subsequently, a correction algorithm was developed that employed the previously fit empirical models to predict and reduce baseline R1 shifts that occurred in the presence of time and temperature changes. The correction algorithm was tested on R1-dose response curves and 3D dose distributions delivered using a small animal irradiator at 225 kVp. The correction algorithm reduced baseline R1 shifts from -2.8 × 10⁻² s⁻¹ to 1.5 × 10⁻³ s⁻¹. In terms of absolute dosimetric performance as assessed with traceable standards, the correction algorithm reduced dose discrepancies from approximately 3% to approximately 0.5% (2.90 ± 2.08% to 0.20 ± 0.07%, and 2.68 ± 1.84% to 0.46 ± 0.37% for the 10 × 10 and 8 × 12 mm² fields, respectively). Chemical changes in MR-based FXG dosimeters produce time- and temperature-dependent R1 values over the time intervals and temperature changes found in a typical small-animal imaging and irradiation laboratory setting. These changes cause baseline R1 shifts that negatively affect dosimeter accuracy. Characterization, modeling and correction of these effects improved in-field reported dose accuracy to better than 1% when compared with standardized ion chamber measurements.
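
    The empirical R1(time, temperature) model itself is not specified in the abstract; the sketch below assumes, purely for illustration, a baseline R1 that drifts linearly with time and temperature, fits it to calibration scans of unirradiated dosimeters with scipy, and removes the predicted drift from the pre/post-irradiation R1 difference. All functional forms, names and numbers are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      # Assumed linear drift model; the paper's empirical model is not reproduced here.
      def baseline_r1(X, r1_0, k_time, k_temp):
          t_hours, temp_c = X
          return r1_0 + k_time * t_hours + k_temp * (temp_c - 20.0)

      # synthetic calibration data from unirradiated dosimeters
      rng = np.random.default_rng(2)
      t = rng.uniform(0.0, 6.0, 40)      # hours since preparation
      T = rng.uniform(18.0, 24.0, 40)    # deg C at scan time
      r1 = baseline_r1((t, T), 0.60, 0.004, 0.010) + rng.normal(0.0, 0.001, 40)
      popt, _ = curve_fit(baseline_r1, (t, T), r1, p0=(0.6, 0.0, 0.0))

      def corrected_delta_r1(r1_pre, r1_post, t_pre, T_pre, t_post, T_post, p):
          """Remove the predicted baseline drift before converting delta-R1 to dose."""
          drift = baseline_r1((t_post, T_post), *p) - baseline_r1((t_pre, T_pre), *p)
          return (r1_post - r1_pre) - drift

      print(corrected_delta_r1(0.62, 0.71, 1.0, 20.5, 3.0, 22.0, popt))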

  18. Computer programs to predict induced effects of jets exhausting into a crossflow

    NASA Technical Reports Server (NTRS)

    Perkins, S. C., Jr.; Mendenhall, M. R.

    1984-01-01

    This user's manual describes two computer programs developed to predict the induced effects of jets exhausting into a crossflow. Program JETPLT predicts pressures induced on an infinite flat plate by a jet exhausting at angles to the plate, and Program JETBOD, in conjunction with a panel code, predicts pressures induced on a body of revolution by a jet exhausting normal to the surface. Both codes use a potential model of the jet and adjacent surface with empirical corrections for the viscous or nonpotential effects. The manual contains a description of the use of both programs, instructions for preparation of input, descriptions of the output, limitations of the codes, and sample cases. In addition, procedures to extend both codes to include additional empirical correlations are described.

  19. A case of jaundice of obscure origin.

    PubMed

    Khan, Fahad M; Alcorn, Joseph; Hanson, Joshua

    2014-05-01

    Idiopathic painless jaundice with significant elevations in serum transaminases, occurring in a previously healthy patient, invokes a circumscribed set of possibilities including viral hepatitis, auto-immune hepatitis (AIH) and drug-induced liver injury (DILI). In this described case, common causes of cholestatic jaundice were considered including drug-induced liver injury, viral causes of hepatitis, and auto-immune antibodies. Biliary obstruction was excluded by appropriate imaging studies. Liver biopsy was obtained, though not definitive. After detailed investigation failed to reveal a cause of the jaundice, an empiric trial of steroids was initiated on the possibility that our patient had antibody-negative AIH and not DILI, with an associated grave prognosis. Empiric treatment with prednisone led to rapid resolution of jaundice and to the conclusion that the correct diagnosis was antibody-negative AIH.

  20. Empirical entropic contributions in computational docking: evaluation in APS reductase complexes.

    PubMed

    Chang, Max W; Belew, Richard K; Carroll, Kate S; Olson, Arthur J; Goodsell, David S

    2008-08-01

    The results from reiterated docking experiments may be used to evaluate an empirical vibrational entropy of binding in ligand-protein complexes. We have tested several methods for evaluating the vibrational contribution to binding of 22 nucleotide analogues to the enzyme APS reductase. These include two cluster size methods that measure the probability of finding a particular conformation, a method that estimates the extent of the local energetic well by looking at the scatter of conformations within clustered results, and an RMSD-based method that uses the overall scatter and clustering of all conformations. We have also directly characterized the local energy landscape by randomly sampling around docked conformations. The simple cluster size method shows the best performance, improving the identification of correct conformations in multiple docking experiments. 2008 Wiley Periodicals, Inc.
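
    One plausible reading of the cluster-size idea, sketched below with invented numbers: treat the fraction of reiterated docking runs that land in a pose's cluster as an occupancy probability and add -RT·ln(p) to the docked energy, so broad, frequently found energy wells are favored over narrow ones. This is an illustration of the concept, not the authors' exact scoring term.

      import math

      R_KCAL = 1.987e-3  # gas constant, kcal/(mol*K)

      def cluster_entropy_penalty(cluster_size, total_runs, temperature_k=298.15):
          """Illustrative occupancy-based term: -RT*ln(p), p = cluster fraction."""
          p = cluster_size / total_runs
          return -R_KCAL * temperature_k * math.log(p)

      def rescored(docked_energy_kcal, cluster_size, total_runs):
          return docked_energy_kcal + cluster_entropy_penalty(cluster_size, total_runs)

      # two poses with similar docked energies but very different cluster populations
      print(rescored(-9.8, cluster_size=42, total_runs=100))   # well-populated pose
      print(rescored(-10.0, cluster_size=3, total_runs=100))   # rarely found pose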

  1. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
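
    For orientation, the standard collinear four-point-probe relations that the RCF multiplies (textbook expressions, not the paper's derivation): sheet resistance Rs = (π/ln 2)(V/I) for an infinite thin sheet, with the correction factor accounting for finite sample size and shape. Example numbers are invented.

      import math

      GEOM = math.pi / math.log(2.0)   # ~4.532, infinite-sheet geometric factor

      def sheet_resistance(v_volts, i_amps, rcf=1.0):
          """Ohms per square; rcf = 1 for an infinite thin sheet."""
          return GEOM * (v_volts / i_amps) * rcf

      def resistivity(v_volts, i_amps, thickness_m, rcf=1.0):
          """Ohm*m for a film of known thickness."""
          return sheet_resistance(v_volts, i_amps, rcf) * thickness_m

      # invented example: thin conductive film, 1 mA drive current, 3.2 mV measured
      print(sheet_resistance(3.2e-3, 1.0e-3, rcf=0.95))       # ~13.8 ohm/sq
      print(resistivity(3.2e-3, 1.0e-3, 150e-9, rcf=0.95))    # ~2.1e-6 ohm*m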

  2. An Empirical Test of a Strategy for Training Examinees in the Use of Partial Information in Taking Multiple Choice Tests.

    ERIC Educational Resources Information Center

    Bliss, Leonard B.

    The aim of this study was to show that the superiority of corrected-for-guessing scores over number right scores as true score estimates depends on the ability of examinees to recognize situations where they can eliminate one or more alternatives as incorrect and to omit items where they would only be guessing randomly. Previous investigations…

  3. Dynamics of Markets

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2004-06-01

    Standard texts and research in economics and finance ignore the absence of evidence from the analysis of real, unmassaged market data to support the notion of Adam Smith's stabilizing Invisible Hand. In stark contrast, this text introduces a new empirically-based model of financial market dynamics that explains the volatility of prices options correctly and clarifies the instability of financial markets. The emphasis is on understanding how real markets behave, not how they hypothetically 'should' behave.

  4. Empirical Analysis and Refinement of Expert System Knowledge Bases

    DTIC Science & Technology

    1988-08-31

    refinement. Both a simulated case generation program and a random rule basher were developed to enhance rule-refinement experimentation. Substantial... the second fiscal year 88 objective was fully met. [Diagram labels: Rule Refinement System, Simulated Rule Basher, Case Generator, Stored Cases, Expert System Knowledge Base.] ...generated until the rule is satisfied. Cases may be randomly generated for a given rule or hypothesis. Rule Basher: given that one has a correct

  5. Rate equation model of laser induced bias in uranium isotope ratios measured by resonance ionization mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.

    2016-01-01

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the U-235/U-238 ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the U-235/U-238 ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. Development of this model has highlighted several important considerations for properly interpreting experimental results.
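
    A generic sketch of what a rate-equation model of resonant two-step photoionization looks like, integrated with scipy; the rate constants, pulse shape and level structure are invented and stand in for the uranium atomic data and measured laser parameters the actual model uses.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Generic ground -> excited -> ion rate equations; all constants are invented.
      def rates(t, n, pump_rate, decay_rate, ion_rate, pulse):
          n0, n1, ni = n
          p = pulse(t)
          dn0 = -pump_rate * p * (n0 - n1) + decay_rate * n1
          dn1 = pump_rate * p * (n0 - n1) - decay_rate * n1 - ion_rate * p * n1
          dni = ion_rate * p * n1
          return [dn0, dn1, dni]

      pulse = lambda t: np.exp(-0.5 * ((t - 15e-9) / 5e-9) ** 2)   # ~10 ns Gaussian
      sol = solve_ivp(rates, (0.0, 60e-9), [1.0, 0.0, 0.0],
                      args=(2.0e8, 1.0e7, 5.0e7, pulse), max_step=1e-10)
      print("ionization probability:", sol.y[2, -1])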

  6. Comparison of high intensity focused ultrasound (HIFU) exposures using empirical and backscatter attenuation estimation methods

    NASA Astrophysics Data System (ADS)

    Civale, John; Ter Haar, Gail; Rivens, Ian; Bamber, Jeff

    2005-09-01

    Currently, the intensity to be used in our clinical HIFU treatments is calculated from the acoustic path lengths in different tissues measured on diagnostic ultrasound images of the patient in the treatment position, and published values of ultrasound attenuation coefficients. This yields an approximate value for the acoustic power at the transducer required to give a stipulated focal intensity in situ. Methods for estimating the actual acoustic attenuation in large parts of the tissue path overlying the target volume have been investigated using the backscattered ultrasound signal for each patient (backscatter attenuation estimation: BAE). Several methods have been investigated. The backscattered echo information acquired from an Acuson scanner has been used to compute the diffraction-corrected attenuation coefficient at each frequency using two methods: a substitution method and an inverse diffraction filtering process. A homogeneous sponge phantom was used to validate the techniques. The use of BAE to determine the correct HIFU exposure parameters for lesioning has been tested in ex vivo liver. HIFU lesions created with a 1.7-MHz therapy transducer have been studied using a semiautomated image processing technique. The reproducibility of lesion size for given in situ intensities determined using BAE and empirical techniques has been compared.
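
    A minimal, diffraction-uncorrected version of the substitution idea, to show the structure of such an estimate: the attenuation spectrum follows from the log-spectral ratio of a reference signal to the tissue signal over the two-way path, and its slope gives dB/cm/MHz. The spectra and depth below are synthetic; the paper's diffraction corrections and inverse filtering are not reproduced.

      import numpy as np

      def attenuation_spectrum(spec_tissue, spec_ref, depth_cm):
          """alpha(f) in dB/cm from a log-spectral ratio over a two-way (pulse-echo) path."""
          return 10.0 * np.log10(np.abs(spec_ref) ** 2 / np.abs(spec_tissue) ** 2) / (2.0 * depth_cm)

      # synthetic spectra with a 0.5 dB/cm/MHz slope imposed on the tissue signal
      f_mhz = np.linspace(1.0, 5.0, 41)
      depth = 4.0                                               # cm of overlying tissue
      ref = np.exp(-((f_mhz - 3.0) ** 2))                       # reference spectrum
      tissue = ref * 10.0 ** (-0.5 * f_mhz * 2.0 * depth / 20.0)
      slope = np.polyfit(f_mhz, attenuation_spectrum(tissue, ref, depth), 1)[0]
      print(f"estimated attenuation slope: {slope:.2f} dB/cm/MHz")   # ~0.50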

  7. Rate equation model of laser induced bias in uranium isotope ratios measured by resonance ionization mass spectrometry

    DOE PAGES

    Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.; ...

    2015-12-07

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the 235U/238U ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the 235U/238U ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. As a result, development of this model has highlighted several important considerations for properly interpreting experimental results.

  8. Radar attenuation and temperature within the Greenland Ice Sheet

    USGS Publications Warehouse

    MacGregor, Joseph A; Li, Jilu; Paden, John D; Catania, Ginny A; Clow, Gary D.; Fahnestock, Mark A; Gogineni, Prasad S.; Grimm, Robert E.; Morlighem, Mathieu; Nandi, Soumyaroop; Seroussi, Helene; Stillman, David E

    2015-01-01

    The flow of ice is temperature-dependent, but direct measurements of englacial temperature are sparse. The dielectric attenuation of radio waves through ice is also temperature-dependent, and radar sounding of ice sheets is sensitive to this attenuation. Here we estimate depth-averaged radar-attenuation rates within the Greenland Ice Sheet from airborne radar-sounding data and its associated radiostratigraphy. Using existing empirical relationships between temperature, chemistry, and radar attenuation, we then infer the depth-averaged englacial temperature. The dated radiostratigraphy permits a correction for the confounding effect of spatially varying ice chemistry. Where radar transects intersect boreholes, radar-inferred temperature is consistently higher than that measured directly. We attribute this discrepancy to the poorly recognized frequency dependence of the radar-attenuation rate and correct for this effect empirically, resulting in a robust relationship between radar-inferred and borehole-measured depth-averaged temperature. Radar-inferred englacial temperature is often lower than modern surface temperature and that of a steady state ice-sheet model, particularly in southern Greenland. This pattern suggests that past changes in surface boundary conditions (temperature and accumulation rate) affect the ice sheet's present temperature structure over a much larger area than previously recognized. This radar-inferred temperature structure provides a new constraint for thermomechanical models of the Greenland Ice Sheet.
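
    The sketch below assumes a single Arrhenius-type dependence of the depth-averaged attenuation rate on temperature and inverts it, ignoring the chemistry correction described above; the reference rate, activation energy and reference temperature are placeholders, not the published empirical coefficients.

      import numpy as np

      K_B = 8.617e-5  # Boltzmann constant, eV/K

      # Placeholder Arrhenius parameters; the study's empirical coefficients and
      # impurity (chemistry) terms are not reproduced here.
      def attenuation_rate(temp_k, a0=15.0, e_act_ev=0.55, t_ref_k=251.0):
          return a0 * np.exp((e_act_ev / K_B) * (1.0 / t_ref_k - 1.0 / temp_k))

      def inferred_temperature(rate, a0=15.0, e_act_ev=0.55, t_ref_k=251.0):
          inv_t = 1.0 / t_ref_k - (K_B / e_act_ev) * np.log(rate / a0)
          return 1.0 / inv_t

      t_true = np.array([245.0, 255.0, 265.0])                # depth-averaged temperatures, K
      print(inferred_temperature(attenuation_rate(t_true)))   # recovers t_true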

  9. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.

    PubMed

    Czarnecki, D; Zink, K

    2013-04-21

    The application of small photon fields in modern radiotherapy requires the determination of total scatter factors Scp or field factors Ω(f(clin), f(msr))(Q(clin), Q(msr)) with high precision. Both quantities require the knowledge of the field-size-dependent and detector-dependent correction factor k(f(clin), f(msr))(Q(clin), Q(msr)). The aim of this study is the determination of the correction factor k(f(clin), f(msr))(Q(clin), Q(msr)) for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water to air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k(f(clin), f(msr))(Q(clin), Q(msr)); this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k(f(clin), f(msr))(Q(clin), Q(msr)) is of the order of 1.2 at a field size of 1 × 1 cm(2) for the large volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW 60017), the correction factor deviates no more than 2% from unity in field sizes between 10 × 10 and 1 × 1 cm(2), but below this field size there is a steep decrease of k(f(clin), f(msr))(Q(clin), Q(msr)) below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm(2), i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k(f(clin), f(msr))(Q(clin), Q(msr)) is closest to unity in small fields and mainly independent of the electron beam spot size.
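
    In the small-field formalism used above, the detector correction factor is the ratio of dose-to-water to detector dose in the clinical field divided by the same ratio in the machine-specific reference (msr) field, and the field factor is the measured reading ratio multiplied by it. The sketch below just encodes those two ratios; the dose values are invented Monte Carlo outputs.

      # k correction factor and field factor in the small-field formalism;
      # the numerical dose values below are invented.
      def k_correction(dw_clin, ddet_clin, dw_msr, ddet_msr):
          return (dw_clin / ddet_clin) / (dw_msr / ddet_msr)

      def field_factor(meas_ratio, k):
          """Output (field) factor = measured detector reading ratio times k."""
          return meas_ratio * k

      # e.g. a small field with an over-responding detector (detector dose ratio too high)
      k = k_correction(dw_clin=0.712, ddet_clin=0.726, dw_msr=1.000, ddet_msr=1.000)
      print(k)                          # ~0.98: the reading must be scaled down
      print(field_factor(0.726, k))     # recovers the dose-to-water ratio 0.712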

  10. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high DeltaK levels but fails to capture the mixed-mode effects at DeltaK levels approaching threshold (da/dN approximately 10(exp -10) meter/cycle).
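
    Only the geometric half of the problem is easy to sketch: using the crack-path arc length instead of its horizontal projection raises the apparent growth increment per cycle, as below with invented tip positions. The companion stress-intensity correction requires the mixed mode-I/II finite element analysis and is not reproduced here.

      import numpy as np

      def growth_rates(x_mm, y_mm, cycles):
          """Return (horizontal-projection rate, path-length rate) in mm/cycle."""
          dx, dy = np.diff(x_mm), np.diff(y_mm)
          dn = np.diff(cycles)
          return dx / dn, np.hypot(dx, dy) / dn

      x = np.array([10.0, 10.2, 10.4, 10.6])   # horizontal crack tip position, mm
      y = np.array([0.0, 0.05, 0.12, 0.21])    # out-of-plane deviation, mm
      n = np.array([0, 10000, 20000, 30000])   # cycle counts
      rate_h, rate_p = growth_rates(x, y, n)
      print(rate_h)   # horizontal projection under-estimates the rate
      print(rate_p)   # path-length-corrected rate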

  11. Interpreting Methanol v(sub 2)-Band Emission in Comets Using Empirical Fluorescence g-Factors

    NASA Technical Reports Server (NTRS)

    DiSanti, Michael; Villanueva, G. L.; Bonev, B. P.; Mumma, M. J.; Paganini, L.; Gibb, E. L.; Magee-Sauer, K.

    2011-01-01

    For many years we have been developing the ability, through high-resolution spectroscopy targeting ro-vibrational emission in the approximately 3 - 5 micrometer region, to quantify a suite of (approximately 10) parent volatiles in comets using quantum mechanical fluorescence models. Our efforts are ongoing and our latest includes methanol (CH3OH). This is unique among traditionally targeted species in having lacked sufficiently robust models for its symmetric (v(sub 3) band) and asymmetric (v(sub 2) and v(sub 9) bands) C-H3 stretching modes, required to provide accurate predicted intensities for individual spectral lines and hence rotational temperatures and production rates. This has provided the driver for undertaking a detailed empirical study of line intensities, and has led to substantial progress regarding our ability to interpret CH3OH in comets. The present study concentrates on the spectral region from approximately 2970 - 3010 per centimeter (3.367 - 3.322 micrometer), which is dominated by emission in the v(sub 7) band of C2H6 and the v(sub 2) band of CH3OH, with minor contributions from CH3OH (v(sub 9) band), CH4 (v(sub 3)), and OH prompt emissions (v(sub 1) and v(sub 2)- v(sub 1)). Based on laboratory jet-cooled spectra (at a rotational temperature near 20 K)[1], we incorporated approximately 100 lines of the CH3OH v(sub 2) band, having known frequencies and lower state rotational energies, into our model. Line intensities were determined through comparison with several comets we observed with NIRSPEC at Keck 2, after removal of continuum and additional molecular emissions and correcting for atmospheric extinction. In addition to the above spectral region, NIRSPEC allows simultaneous sampling of the CH3OH v(sub 3) band (centered at 2844 per centimeter, or 3.516 micrometers) and several hot bands of H2O in the approximately 2.85 - 2.9 micrometer region, at a nominal spectral resolving power of approximately 25,000 [2]. Empirical g-factors for v(sub 2) lines were based on the production rate as determined from the v(sub 3) Q-branch intensity; application to comets spanning a range of rotational temperatures (approximately 50 - 90 K) will be reported. This work represents an extension of that presented for comet 21P/Giacobini-Zinner at the 2010 Division for Planetary Sciences meeting [3]. Our empirical study also allows for quantifying CH3OH in comets using IR spectrometers for which the v(sub 3) and v(sub 2) bands are not sampled simultaneously, for example CSHELL/NASA IRTF or CRIRES/VLT.

  12. Energy risk in the arbitrage pricing model: an empirical and theoretical study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, M.A.

    1986-01-01

    This dissertation empirically explores the Arbitrage Pricing Theory in the context of energy risk for securities over the 1960s, 1970s, and early 1980s. Starting from a general multifactor pricing model, the paper develops a two factor model based on a market-like factor and an energy factor. This model is then tested on portfolios of securities grouped according to industrial classification using several econometric techniques designed to overcome some of the more serious estimation problems common to these models. The paper concludes that energy risk is priced in the 1970s and possibly even in the 1960s. Energy risk is found to be priced in the sense that investors who hold assets subjected to energy risk are paid for this risk. The classic version of the Capital Asset Pricing Model which posits the market as the single priced factor is rejected in favor of the Arbitrage Pricing Theory or multi-beta versions of the Capital Asset Pricing Model. The study introduces some original econometric methodology to carry out empirical tests.

  13. Young age as a modifying factor in sports concussion management: what is the evidence?

    PubMed

    Foley, Cassidy; Gregory, Andrew; Solomon, Gary

    2014-01-01

    In 2008, the Concussion in Sport Group (CISG) published its third consensus statement and introduced 10 'modifying' factors that were presumed clinically to influence the investigation and management of concussions in sports. Young age was listed as one of the modifying factors. In some cases, these modifiers were thought to be predictive of prolonged or persistent symptoms. These same modifying factors were retained in the fourth iteration of the CISG consensus statement (2013), although mention was made of possible limitations of their efficacy. The CISG statements provided several empirical references regarding young age as a modifying factor. We reviewed the published sports concussion literature with the purpose of determining empirical studies that support or refute the inclusion of young age as a modifier of concussive injury in sports. We performed a systematic review of the PubMed database utilizing the keywords concussion, sports, mild traumatic brain injury, youth, adolescents, and children. English language studies were extracted by the authors and summarized for review. Multiple empirical studies were found indicating that younger athletes may take longer to recover from a sports-related concussion (SRC) than their older peers. However, studies did not indicate that younger athletes were at more risk for prolonged recovery (>4 wk). Empirical evidence supports the inclusion of young age as a modifying factor in sports concussion. However, the difference in recovery time seems relatively small (a few days) and young age does not predict prolonged recovery (>4 wk). The findings support the inclusion of young age as a specific modifier in the treatment of SRC and have implications for the clinical management of this common injury.

  14. Characterization of truck traffic in Michigan for the new mechanistic empirical pavement design guide.

    DOT National Transportation Integrated Search

    2009-12-01

    The purpose of this study is to characterize traffic inputs in support of the new Mechanistic- : Empirical Pavement Design Guide (M-E PDG) for the state of Michigan. These traffic : characteristics include monthly distribution factors (MDF), hourly d...

  15. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added in ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when considering this factor. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian beam profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when using the correction factor. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.
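
    The paper's analytic correction term is not reproduced here; the sketch below only illustrates numerically why overlap matters, by summing the fluence contributions of a square grid of peak-normalized Gaussian pulses at the grid centre for different pulse spacings. Beam radius, spacings and grid size are invented.

      import numpy as np

      def cumulative_center_fluence(beam_radius_mm, spacing_mm, grid_half_width=5):
          """Sum of peak-normalized Gaussian pulses, evaluated at the grid centre."""
          offsets = np.arange(-grid_half_width, grid_half_width + 1) * spacing_mm
          xx, yy = np.meshgrid(offsets, offsets)
          return np.sum(np.exp(-2.0 * (xx ** 2 + yy ** 2) / beam_radius_mm ** 2))

      w = 0.5   # 1/e^2 beam radius, mm (invented)
      for spacing in (0.1, 0.25, 0.5, 1.0):
          f = cumulative_center_fluence(w, spacing)
          print(f"spacing {spacing:4.2f} mm -> centre fluence {f:7.2f} x single-pulse peak")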

  16. The Interest Checklist: a factor analysis.

    PubMed

    Klyczek, J P; Bauer-Yox, N; Fiedler, R C

    1997-01-01

    The purpose of this study was to determine whether the 80 items on the Interest Checklist empirically cluster into the five categories of interests described by Matsutsuyu, the developer of the tool. The Interest Checklist was administered to 367 subjects classified in three subgroups: students, working adults, and retired elderly persons. An 80-item correlation matrix was formed from the responses to the Interest Checklist for each subgroup and then used in a factor analysis model to identify the underlying structure or domains of interest. Results indicated that the Social Recreation theoretical category was empirically independent for all three subgroups; the Physical Sports and Cultural/Educational theoretical categories were empirically independent for only the college students and working adults; and the Manual Skills theoretical category was empirically independent for only the working adults. Although therapists should continue to be cautious in their interpretation of patients' Interest Checklist scores, the tool is useful for identifying patients' interests in order to choose meaningful activities for therapy.
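
    A minimal principal-factor sketch of the kind of analysis described (factors extracted from an item correlation matrix, retained by the eigenvalue-greater-than-one rule, loadings inspected); the six synthetic items below merely stand in for the 80 checklist items, and the original study's extraction and rotation choices are not reproduced.

      import numpy as np

      # Synthetic 6-item data generated from two latent interest factors.
      rng = np.random.default_rng(3)
      f1, f2 = rng.normal(size=(2, 300))
      items = np.column_stack([0.8 * f1, 0.7 * f1, 0.6 * f1,
                               0.8 * f2, 0.7 * f2, 0.6 * f2])
      items = items + rng.normal(scale=0.5, size=items.shape)

      corr = np.corrcoef(items, rowvar=False)
      eigvals, eigvecs = np.linalg.eigh(corr)
      order = np.argsort(eigvals)[::-1]
      keep = eigvals[order] > 1.0                           # Kaiser criterion
      loadings = eigvecs[:, order[keep]] * np.sqrt(eigvals[order[keep]])
      print("retained factors:", int(keep.sum()))
      print(np.round(loadings, 2))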

  17. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    NASA Astrophysics Data System (ADS)

    Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
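
    For contrast with the exact approach described above, the conventional linearized estimate from four pickup-electrode signals is the difference-over-sum formula below; it is this estimate that degrades for large beam offsets. The scale constant k is a geometry-dependent placeholder, and the paper's exact algorithm is not reproduced.

      # Conventional difference-over-sum position estimate from four PUE signals
      # (top, bottom, left, right); k_mm is a geometry-dependent placeholder.
      def difference_over_sum(top, bottom, left, right, k_mm=30.0):
          s = top + bottom + left + right
          return k_mm * (right - left) / s, k_mm * (top - bottom) / s

      # invented signal amplitudes for a beam displaced toward +x
      print(difference_over_sum(top=1.02, bottom=0.98, left=0.80, right=1.25))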

  18. Effect on Gaseous Film Cooling of Coolant Injection Through Angled Slots and Normal Holes

    NASA Technical Reports Server (NTRS)

    Papell, S. Stephen

    1960-01-01

    A study was made to determine the effect of coolant injection angularity on gaseous film-cooling effectiveness. In the correlation of experimental data an effective injection angle was defined by a vector summation of the coolant and mainstream gas flows. The cosine of this angle was used as a parameter to empirically develop a corrective term to qualify a correlating equation presented in Technical Note D-130 that was limited to tangential injection of the coolant. Data were also obtained for coolant injection through rows of holes normal to the test plate. The slot correlating equation was adapted to fit these data by the definition of an effective slot height. An additional corrective term was then determined to correlate these data.
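
    A sketch of the vector summation described above, with invented velocities: the mainstream velocity (along the surface) and the coolant velocity injected at angle alpha are added, and the cosine of the resulting effective angle is then available as the correlating parameter. The corrective term and correlation constants themselves are not reproduced.

      import math

      def effective_angle_cosine(v_mainstream, v_coolant, alpha_deg):
          """Effective injection angle from the vector sum of the two flows."""
          a = math.radians(alpha_deg)
          vx = v_mainstream + v_coolant * math.cos(a)   # streamwise component
          vy = v_coolant * math.sin(a)                  # wall-normal component
          theta = math.atan2(vy, vx)
          return math.cos(theta), math.degrees(theta)

      for alpha in (0.0, 30.0, 60.0, 90.0):
          cos_t, theta = effective_angle_cosine(60.0, 20.0, alpha)   # invented velocities
          print(f"injection {alpha:4.1f} deg -> effective {theta:5.1f} deg, cos = {cos_t:.3f}")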

  19. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE PAGES

    Thieberger, Peter; Gassner, D.; Hulsart, R.; ...

    2018-04-25

    Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, an FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  20. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thieberger, Peter; Gassner, D.; Hulsart, R.

    Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, an FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.
