Sample records for model correction factor

  1. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms, including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere-ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the models' accuracy varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  2. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    NASA Astrophysics Data System (ADS)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2018-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process whose input data can come from various sources, such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and 3D scanner data. Biomedical Additive Manufacturing (Bio-AM) models can be manufactured from CT/MRI data. A Bio-AM model gives a better lead on preplanning of oral and maxillofacial surgery; however, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible, and derives a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired with a CT scanner and converted into a 3D CAD model in STL format. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between Bio-AM and STL model dimensions is taken as the dimensional error, and the ratio of STL to Bio-AM model dimensions as the correction factor. This correction factor helps fabricate AM models with accurate dimensions of the patient anatomy. Such dimensionally true Bio-AM models increase the safety and accuracy of pre-planning in oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003, and the dimensional error is limited to 0.3 %.
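    The STL-to-print ratio and percent error described in this record can be sketched in a few lines. The paired measurement values below are hypothetical, chosen only so the ratio matches the 1.003 reported for the Dimension SST 768.

```python
def correction_factor(stl_dim_mm, bioam_dim_mm):
    """Ratio of the designed (STL) dimension to the printed (Bio-AM) dimension."""
    return stl_dim_mm / bioam_dim_mm

def dimensional_error_pct(stl_dim_mm, bioam_dim_mm):
    """Percent deviation of the printed part from the STL design."""
    return abs(bioam_dim_mm - stl_dim_mm) / stl_dim_mm * 100.0

# Hypothetical paired measurements in mm (STL design vs. printed model)
stl, printed = 50.15, 50.00
cf = correction_factor(stl, printed)        # scale factor to apply before printing
err = dimensional_error_pct(stl, printed)   # undercatch of the printed dimension
```

    Multiplying the CAD dimensions by such a factor before slicing compensates for the systematic shrinkage of the FDM process.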

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C

    Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2% ± 1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC-calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes.
The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.

  5. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.

    2012-11-01

    Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases can render the calibration exercise pointless, especially when supposedly physically based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30-year record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge improves significantly when the TSV correction is introduced, whereas simulated hydraulic heads and multi-annual water balances perform similarly, because recalibration adjusts model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test in which calibrated DK-Model parameters were transferred to another region without site-specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the TSV correction method performed better for two single years with a much drier and a much wetter winter, respectively, than the winters in the calibration period (differential split-sample tests).
We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and that it is of particular importance when hydrological models are used to make predictions for future climates in which the snow/rain composition differs from the past. This conclusion is expected to be applicable at mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, making climatological mean correction factors inadequate.
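    The contrast between the two correction schemes in this record can be sketched as follows. The coefficients, temperature threshold, and monthly factors below are hypothetical illustrations of the idea (undercatch grows with wind speed and is larger for snow than for rain), not the values used in the study.

```python
def tsv_correction(precip_mm, temp_c, wind_ms):
    """Time-space variable (TSV) style catch correction: the factor depends on
    local daily wind speed and temperature. Coefficients are hypothetical."""
    if temp_c < 0.0:                       # treat as solid precipitation (snow)
        factor = 1.0 + 0.10 * wind_ms
    else:                                  # liquid precipitation (rain)
        factor = 1.0 + 0.02 * wind_ms
    return precip_mm * factor

def hmm_correction(precip_mm, month):
    """Historic mean monthly (HMM) style correction: one climatological factor
    per calendar month (hypothetical values, larger in winter)."""
    monthly = {1: 1.4, 7: 1.1}             # subset of months, for illustration
    return precip_mm * monthly.get(month, 1.2)

# A cold, windy January day: the daily TSV factor exceeds the monthly mean
corrected_tsv = tsv_correction(5.0, temp_c=-3.0, wind_ms=8.0)
corrected_hmm = hmm_correction(5.0, month=1)
```

    On such a day the two schemes diverge noticeably, which is the mechanism behind the differing winter precipitation rates the study reports.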

  6. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from observed climatological data and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggested that the proposed method corrects the daily bias in rainfall more effectively than monthly factors. Methods such as local intensity scaling, modified power transformation, and distribution mapping, which adjust the wet-day frequencies, performed better than methods that do not consider wet-day frequency adjustment. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India.
Hydrological simulations forced with the bias-corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those forced with the IMD rainfall. The results demonstrate that the methods and time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently on the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
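    The sliding-window idea in this record can be sketched for the simplest (linear scaling) variant: for each day of year, the correction factor comes from the ratio of observed to modelled mean rainfall inside a window centred on that day, rather than from a whole-month statistic. The window half-width and synthetic climatologies below are assumptions for illustration.

```python
def daily_scaling_factors(obs, mod, half_window=15):
    """Derive one multiplicative correction factor per day-of-year from the
    ratio of observed to modelled mean rainfall in a sliding window of
    +/- half_window days (wrapping around the year boundary)."""
    n = len(obs)                           # one climatological value per day-of-year
    factors = []
    for d in range(n):
        idx = [(d + k) % n for k in range(-half_window, half_window + 1)]
        obs_mean = sum(obs[i] for i in idx) / len(idx)
        mod_mean = sum(mod[i] for i in idx) / len(idx)
        factors.append(obs_mean / mod_mean if mod_mean > 0 else 1.0)
    return factors

# Synthetic 365-day climatologies: the model underestimates rainfall by 20 %
obs = [2.0] * 365
mod = [1.6] * 365
f = daily_scaling_factors(obs, mod)
corrected = [m * fd for m, fd in zip(mod, f)]
```

    The same windowed-statistics idea carries over to the quantile-based (distribution mapping) variants the study favours.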

  7. Markov model of the loan portfolio dynamics considering influence of management and external economic factors

    NASA Astrophysics Data System (ADS)

    Bozhalkina, Yana; Timofeeva, Galina

    2016-12-01

    A mathematical model of a loan portfolio in the form of a discrete-time controlled Markov chain is considered. It is assumed that the coefficients of the migration matrix depend on corrective actions and external factors. Corrective actions include the process of receiving applications and interaction with existing solvent and insolvent clients. External factors are macroeconomic indicators, such as inflation and unemployment rates, exchange rates, consumer price indices, etc. Changes in corrective actions adjust the transition intensities in the migration matrix. A mathematical model for forecasting the credit portfolio structure that takes into account the cumulative impact of internal and external changes is obtained.
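    The core recursion of such a model is a standard Markov-chain propagation of the portfolio structure. The three states and the migration probabilities below are hypothetical; in the paper's setting, management actions and macroeconomic factors would modulate these transition intensities over time.

```python
def evolve_portfolio(state, migration, steps=1):
    """Propagate the loan-portfolio structure through a discrete-time Markov
    chain: next_share[j] = sum_i state[i] * P[i][j]."""
    for _ in range(steps):
        state = [sum(state[i] * migration[i][j] for i in range(len(state)))
                 for j in range(len(state))]
    return state

# Hypothetical migration matrix over states (current, overdue, default)
P = [[0.90, 0.08, 0.02],   # current -> current / overdue / default
     [0.30, 0.50, 0.20],   # overdue loans can cure or deteriorate
     [0.00, 0.00, 1.00]]   # default is absorbing
share = evolve_portfolio([1.0, 0.0, 0.0], P, steps=2)
```

    A control model would make the entries of P functions of corrective actions (e.g. collections intensity) and of external indicators such as the unemployment rate.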

  8. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions, and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived from small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam, with a PTW 60019 microDiamond detector as the reference dosimeter. Numerical detector correction factors for the same fields were derived from calculations with a Geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors; the over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors, except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
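    The semiempirical construction in this record (and the Alfonso-style k factors in record 4) reduces to one ratio: the Monte Carlo dose-to-water ratio between the clinical small field and the reference field, divided by the measured detector reading ratio. The numbers below are hypothetical.

```python
def k_correction(mc_dose_ratio, measured_reading_ratio):
    """Alfonso-style small-field correction factor: true dose-to-water ratio
    (from Monte Carlo) divided by the detector reading ratio (measured).
    A detector that over-responds in the small field yields k < 1."""
    return mc_dose_ratio / measured_reading_ratio

# Hypothetical values: the detector reads high in the small field,
# so the measured ratio (0.70) exceeds the true dose ratio (0.68).
k = k_correction(mc_dose_ratio=0.68, measured_reading_ratio=0.70)
```

    Multiplying a measured small-field output factor by k removes the detector's residual field-size dependence.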

  9. Simulation and Correction of Triana-Viewed Earth Radiation Budget with ERBE/ISCCP Data

    NASA Technical Reports Server (NTRS)

    Huang, Jian-Ping; Minnis, Patrick; Doelling, David R.; Valero, Francisco P. J.

    2002-01-01

    This paper describes the simulation of the earth radiation budget (ERB) as viewed by Triana and the development of correction models for converting Triana-viewed radiances into a complete ERB. A full range of Triana views and global radiation fields is simulated using a combination of datasets from ERBE (Earth Radiation Budget Experiment) and ISCCP (International Satellite Cloud Climatology Project) and analyzed with a set of empirical correction factors specific to the Triana views. The results show that the accuracy of global correction factors for estimating the ERB from Triana radiances is a function of the Triana position relative to the Lagrange-1 (L1) point or the Sun location. Spectral analysis of the global correction factor indicates that both shortwave (SW; 0.2-5.0 microns) and longwave (LW; 5-50 microns) parameters undergo seasonal and diurnal cycles that dominate the periodic fluctuations. The diurnal cycle, especially its amplitude, is also strongly dependent on the seasonal cycle. Based on these results, models are developed to correct the radiances for unviewed areas and anisotropic emission and reflection. A preliminary assessment indicates that these correction models can be applied to Triana radiances to produce the most accurate global ERB to date.

  10. Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.

    PubMed

    Etemad, S Gh; Thibault, J; Hashemabadi, S H

    2003-10-01

    This paper presents the numerical investigation performed to calculate the correction factor for Pitot tubes. The purely viscous non-Newtonian fluids with the power-law model constitutive equation were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation could lead to large errors, which depend on the magnitude of the kinetic energy and energy friction loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. This correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds number.
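    The role of the correction factor in this record can be made concrete with Bernoulli's equation: the indicated velocity is v = C·sqrt(2Δp/ρ), where C ≈ 1 for Newtonian fluids at high Reynolds number and departs from unity at low Reynolds number or for power-law fluids. The low-Re value of C used below is a hypothetical placeholder, not the paper's correlation.

```python
import math

def pitot_velocity(delta_p_pa, density_kg_m3, correction=1.0):
    """Velocity from Pitot-tube dynamic pressure via Bernoulli's equation,
    v = C * sqrt(2 * dp / rho). The correction factor C accounts for viscous
    effects neglected by the ideal equation."""
    return correction * math.sqrt(2.0 * delta_p_pa / density_kg_m3)

# Water, measured dynamic pressure of 50 Pa
v_ideal = pitot_velocity(50.0, 1000.0)            # ideal Bernoulli estimate
v_low_re = pitot_velocity(50.0, 1000.0, 0.9)      # hypothetical C at low Re
```

    The paper's neural-network correlation would supply C as a function of the power-law index, the Reynolds number, and the impact/static tube spacing.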

  11. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed because it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should account for hydration as well as other factors, such as age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method, although, depending on the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of the ratio (for example, males), the ratio of GMs was higher for the model-based method; when estimated UCRs were lower for the group in the numerator (for example, NHW), the ratio of GMs was higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
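    The two approaches contrasted in this record can be sketched side by side. The regression coefficients below are hypothetical placeholders; a real model-based analysis would estimate them from survey data rather than assume them.

```python
import math

def ratio_corrected(analyte, creatinine):
    """Ratio-based correction: analyte concentration per unit creatinine.
    Implicitly assumes hydration is the only driver of UCR."""
    return analyte / creatinine

def model_corrected(analyte, creatinine, age, male,
                    b_cr=0.8, b_age=0.005, b_male=0.1):
    """Model-based correction (sketch): regress log analyte on log creatinine
    together with covariates that also drive UCR (age, gender, ...), and keep
    the residual. All coefficients here are hypothetical."""
    expected_log = b_cr * math.log(creatinine) + b_age * age + b_male * male
    return math.log(analyte) - expected_log

r = ratio_corrected(12.0, 1.5)                  # analyte per g creatinine
m = model_corrected(12.0, 1.5, age=40, male=1)  # covariate-adjusted residual
```

    Because the model-based residual nets out age and gender effects on creatinine, group comparisons based on it behave differently from ratio-based ones, as the abstract reports.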

  12. Understanding the atmospheric measurement and behavior of perfluorooctanoic acid.

    PubMed

    Webster, Eva M; Ellis, David A

    2012-09-01

    The recently reported quantification of the atmospheric sampling artifact for perfluorooctanoic acid (PFOA) was applied to existing gas and particle concentration measurements. Specifically, gas phase concentrations were increased by a factor of 3.5 and particle-bound concentrations by a factor of 0.1. The correlation constants in two particle-gas partition coefficient (K(QA)) estimation equations were determined for multiple studies with and without correcting for the sampling artifact. Correction for the sampling artifact gave correlation constants with improved agreement to those reported for other neutral organic contaminants, thus supporting the application of the suggested correction factors for perfluorinated carboxylic acids. Applying the corrected correlation constant to a recent multimedia modeling study improved model agreement with corrected, reported, atmospheric concentrations. This work confirms that there is sufficient partitioning to the gas phase to support the long-range atmospheric transport of PFOA. Copyright © 2012 SETAC.
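    The artifact correction described here is a pair of fixed multiplicative factors taken from the abstract (gas phase × 3.5, particle-bound × 0.1); the example concentrations are hypothetical.

```python
def apply_artifact_correction(gas_conc, particle_conc):
    """Correct PFOA air measurements for the sampling artifact: the gas-phase
    concentration is scaled up by 3.5 and the particle-bound concentration
    down by 0.1, the factors applied in the study."""
    return 3.5 * gas_conc, 0.1 * particle_conc

# Hypothetical raw measurements (arbitrary concentration units)
gas_c, part_c = apply_artifact_correction(gas_conc=1.0, particle_conc=1.0)
# The particle/gas ratio, and hence the apparent K(QA), shrinks by 0.1/3.5
```

    Shifting mass from the apparent particle phase to the gas phase in this way is what reconciles the partitioning correlations with those of other neutral organic contaminants.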

  13. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
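    The standard correction for regression dilution described in this record divides the observed slope by the reliability ratio λ = var(true) / (var(true) + var(error)), with λ estimated from repeated measurements in a reliability study. The variance values below are hypothetical.

```python
def reliability_ratio(var_true, var_error):
    """Reliability ratio lambda: fraction of the risk-factor variance that is
    not measurement error. Estimable from a reliability (repeat) study."""
    return var_true / (var_true + var_error)

def corrected_slope(observed_slope, reliability):
    """Undo regression dilution: the observed slope is attenuated by lambda,
    so the corrected slope is observed / lambda."""
    return observed_slope / reliability

# Hypothetical variances: a quarter of total variance is measurement error
lam = reliability_ratio(var_true=4.0, var_error=1.0)
b = corrected_slope(observed_slope=0.40, reliability=lam)
```

    With noticeable measurement error (λ well below 1), the naive slope can understate the association substantially, which is the neglected-risk-factor effect the article warns about.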

  14. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
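    A Hansen-Lunde style bias-correction factor of the kind this record compares against can be sketched as the ratio of the daily-return sample variance to the mean realized variance. The return and realized-variance series below are short hypothetical samples, far too small for real estimation.

```python
def hansen_lunde_factor(returns, realized_vars):
    """Bias-correction factor: ratio of the return sample variance to the
    mean realized variance. Values below 1 suggest realized volatility
    overstates the true variance at that sampling frequency (and vice versa)."""
    mean_r = sum(returns) / len(returns)
    var_r = sum((r - mean_r) ** 2 for r in returns) / len(returns)
    mean_rv = sum(realized_vars) / len(realized_vars)
    return var_r / mean_rv

# Hypothetical daily returns and matching realized variances
c = hansen_lunde_factor([0.01, -0.02, 0.015, -0.005],
                        [0.0002, 0.0003, 0.00025, 0.00025])
```

    In the realized stochastic volatility model, by contrast, an analogous bias-correction parameter is estimated jointly with the volatility dynamics rather than computed a priori.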

  15. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratios, and wall temperature discontinuities. Comparisons are made between correction factors with constant properties and with variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant.

  16. New Correction Factors Based on Seasonal Variability of Outdoor Temperature for Estimating Annual Radon Concentrations in UK.

    PubMed

    Daraktchieva, Z

    2017-06-01

    Indoor radon concentrations generally vary with season. Radon gas enters buildings from beneath due to a small air pressure difference between the inside of a house and outdoors. This underpressure, which draws soil gas (including radon) into the house, depends on the difference between the indoor and outdoor temperatures. The variation in a typical house in the UK showed that the mean indoor radon concentration reaches a maximum in January and a minimum in July. Sine functions were used to model the indoor radon data and the monthly average outdoor temperatures covering the period between 2005 and 2014. The analysis showed a strong negative correlation between the modelled indoor radon data and outdoor temperature. This correlation was used to calculate new correction factors for estimating annual radon concentrations in UK homes. A comparison between results obtained with the new correction factors and the previously published correction factors showed that the new factors perform consistently better on the selected data sets. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
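    The seasonal-correction idea in this record can be sketched with a sinusoidal model that peaks in January and dips in July; the correction factor rescales a measurement made in a given month to the annual mean. The mean, amplitude, and peak month below are hypothetical, not the UK-fitted values.

```python
import math

def seasonal_correction_factor(month, annual_mean=100.0, amplitude=30.0,
                               peak_month=1):
    """Sine-function model of indoor radon: modelled concentration peaks in
    peak_month. The factor converts a single-month measurement to an
    annual-mean estimate. All parameters are hypothetical."""
    modelled = annual_mean + amplitude * math.cos(
        2.0 * math.pi * (month - peak_month) / 12.0)
    return annual_mean / modelled

# A measurement of 130 (arbitrary units) taken in January, the peak month,
# is scaled back down to the annual mean.
annual_estimate = 130.0 * seasonal_correction_factor(1)
```

    A July measurement would instead be scaled up (factor above 1), since summer readings understate the annual mean.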

  17. Wall interference correction improvements for the ONERA main wind tunnels

    NASA Technical Reports Server (NTRS)

    Vaucheret, X.

    1982-01-01

    This paper describes improved methods of calculating wall interference corrections for the ONERA large wind tunnels. The mathematical description of the model and its sting support has become more sophisticated: an increasing number of singularities is used until agreement between the theoretical and experimental signatures of the model and sting on the walls of the closed test section is obtained. The singularity decentering effects are calculated when the model reaches large angles of attack. Porosity factor cartography on the perforated walls, deduced from the measured signatures, now replaces the reference tests previously carried out in larger tunnels. The porosity factors obtained from the blockage terms (signatures at zero lift) and from the lift terms are in good agreement. In each case (model + sting + test section), wall corrections are now determined before the tests as a function of the fundamental parameters M, CS, CZ. During the wind tunnel tests, the corrections are quickly computed from these functions.

  18. Coordinate Conversion Technique for OTH Backscatter Radar

    DTIC Science & Technology

    1977-05-01

    [Extraction of this DTIC record is garbled; the recoverable fragments mention the obliquity of the earth's equator, the mean longitude of the sun measured in the ecliptic counterclockwise from the first point of Aries, models for F2-layer correction factors for vertical and oblique ionograms, and range-gradient factors applied to foF2 and M(3000)F2 to force agreement with a given oblique ionogram.]

  19. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios, and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction: the quantification error relative to a dose calibrator derived measurement was <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction than with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent.
Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
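    The TEW scatter correction compared in this record has a simple closed form: the scatter in the photopeak window is estimated by trapezoidal interpolation between two narrow flanking windows. The window widths and counts below are hypothetical values for an I-131 acquisition.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window (TEW) scatter estimate for the photopeak window:
    trapezoidal interpolation between the two flanking narrow windows,
    scatter = (C_low/W_low + C_high/W_high) * W_peak / 2."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def tew_corrected(c_peak, c_lower, c_upper,
                  w_lower=6.0, w_upper=6.0, w_peak=60.0):
    """Primary (scatter-corrected) counts, clamped at zero."""
    scatter = tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak)
    return max(c_peak - scatter, 0.0)

# Hypothetical counts in the photopeak and flanking windows
primary = tew_corrected(c_peak=10000.0, c_lower=300.0, c_upper=100.0)
```

    The patient-dependent weighting factor the abstract mentions would rescale this scatter estimate, which is why the authors prefer the Monte Carlo approach in practice.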

  20. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*), written in terms of the approximate inputs, is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude.
Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. ?? 2005 Elsevier Ltd. All rights reserved.
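    The weighting scheme this abstract describes can be illustrated with a minimal linear sketch: a generalized least-squares fit in which the second-moment matrix of spatially correlated model errors is added to the measurement-error covariance to form the weight matrix. All numbers and covariance shapes below are invented for illustration; the paper itself treats the nonlinear case.

```python
import numpy as np

# Illustrative generalized least-squares fit: the weight matrix is the
# inverse of the total error second-moment matrix (measurement noise +
# spatially correlated model error), echoing the revised methodology.
rng = np.random.default_rng(0)

n = 50
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])  # design matrix
beta_true = np.array([2.0, -1.0])

C_noise = 0.05 * np.eye(n)                         # measurement-error covariance
lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C_model = 0.10 * np.exp(-lag / 5.0)                # correlated model-error covariance

C_total = C_noise + C_model                        # second-moment matrix of total errors
W = np.linalg.inv(C_total)                         # weight matrix

y = X @ beta_true + rng.multivariate_normal(np.zeros(n), C_total)

# Weighted (generalized) least-squares estimate and its covariance;
# the covariance diagonal sizes confidence intervals
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
cov_beta = np.linalg.inv(X.T @ W @ X)
```

    Ignoring the model-error term (i.e., weighting by C_noise alone) is the analogue of the "diagonal weight matrix in place of the correct one" case for which the paper derives interval correction factors.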

  1. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.

  2. Development of a Pressure Sensitive Paint System with Correction for Temperature Variation

    NASA Technical Reports Server (NTRS)

    Simmons, Kantis A.

    1995-01-01

    Pressure Sensitive Paint (PSP) is known to provide a global image of pressure over a model surface. However, improvements in its accuracy and reliability are needed. Several factors contribute to the inaccuracy of PSP. One major factor is that luminescence is temperature dependent. To correct the luminescence of the pressure sensing component for changes in temperature, a temperature sensitive luminophore incorporated in the paint allows the user to measure both pressure and temperature simultaneously on the surface of a model. Magnesium Octaethylporphine (MgOEP) was used as a temperature sensing luminophore, with the pressure sensing luminophore, Platinum Octaethylporphine (PtOEP), to correct for temperature variations in model surface pressure measurements.

  3. Analysis of diffuse radiation data for Beer Sheva: Measured (shadow ring) versus calculated (global-horizontal beam) values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudish, A.I.; Ianetz, A.

    1993-12-01

    The authors have utilized concurrently measured global, normal incidence beam, and diffuse radiation data, the latter measured by means of a shadow ring pyranometer, to study the relative magnitude of the anisotropic contribution (circumsolar region and nonuniform sky conditions) to the diffuse radiation. In the case of Beer Sheva, the monthly average hourly anisotropic correction factor varies from 2.9 to 20.9%, whereas the "standard" geometric correction factor varies from 5.6 to 14.0%. The monthly average hourly overall correction factor (combined anisotropic and geometric factors) varies from 8.9 to 37.7%. The data have also been analyzed using a simple model of sky radiance developed by Steven in 1984. His anisotropic correction factor is a function of the relative strength and angular width of the circumsolar radiation region. The results of this analysis are in agreement with those previously reported for Quidron on the Dead Sea, viz. the anisotropy and relative strength of the circumsolar radiation are significantly greater than at any of the sites analyzed by Steven. In addition, the data have been utilized to validate a model developed by LeBaron et al. in 1990 for correcting shadow ring diffuse radiation data. The monthly average deviation between the corrected and true diffuse radiation values varies from 4.55 to 7.92%.

  4. Radiative corrections to the η(') Dalitz decays

    NASA Astrophysics Data System (ADS)

    Husek, Tomáš; Kampf, Karol; Novotný, Jiří; Leupold, Stefan

    2018-05-01

    We provide the complete set of radiative corrections to the Dalitz decays η(')→ℓ+ℓ-γ beyond the soft-photon approximation, i.e., over the whole range of the Dalitz plot and with no restrictions on the energy of a radiative photon. The corrections inevitably depend on the η(')→ γ*γ(*) transition form factors. For the singly virtual transition form factor appearing, e.g., in the bremsstrahlung correction, recent dispersive calculations are used. For the one-photon-irreducible contribution at the one-loop level (for the doubly virtual form factor), we use a vector-meson-dominance-inspired model while taking into account the η -η' mixing.

  5. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens’ model), but secondary particles are not included in it. In the present study, we propose a correction factor, L_sec, for Wilkens’ model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the LET_d distributions of all protons and deuterons to that of primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems, and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated into a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens’ model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis of less than 0.05 keV μm⁻¹. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis.
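    The definition of L_sec as a ratio of dose-averaged LET values can be sketched as follows; the tally numbers are invented placeholders, not GATE/GEANT4 output.

```python
import numpy as np

def let_d(dose, let):
    """Dose-averaged LET: sum(d_i * L_i) / sum(d_i)."""
    dose = np.asarray(dose, dtype=float)
    let = np.asarray(let, dtype=float)
    return float(np.sum(dose * let) / np.sum(dose))

# Synthetic per-particle tallies at one depth (illustrative numbers only):
# secondaries deposit little dose but at much higher LET.
primary_dose, primary_let = [1.0, 0.8, 0.6], [0.5, 0.6, 0.7]   # primary protons
secondary_dose, secondary_let = [0.1, 0.05], [5.0, 8.0]        # secondary p + d

all_dose = primary_dose + secondary_dose
all_let = primary_let + secondary_let

# L_sec: ratio of LET_d with all protons and deuterons to primary-only LET_d
L_sec = let_d(all_dose, all_let) / let_d(primary_dose, primary_let)
```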

  6. Radiative-Transfer Modeling of Spectra of Densely Packed Particulate Media

    NASA Astrophysics Data System (ADS)

    Ito, G.; Mishchenko, M. I.; Glotch, T. D.

    2017-12-01

    Remote sensing measurements over a wide range of wavelengths from both ground- and space-based platforms have provided a wealth of data regarding the surfaces and atmospheres of various solar system bodies. With proper interpretation, important properties, such as composition and particle size, can be inferred. However, proper interpretation of such datasets can often be difficult, especially for densely packed particulate media with particle sizes on the order of the wavelength of light used for remote sensing. Radiative transfer theory has often been applied, with difficulty, to the study of densely packed particulate media like planetary regoliths and snow, and here we continue to investigate radiative transfer modeling of the spectra of densely packed media. We use the superposition T-matrix method to compute the scattering properties of clusters of particles and capture the near-field effects important for dense packing. Then, the scattering parameters from the T-matrix computations are modified with the static structure factor correction, accounting for the dense packing of the clusters themselves. Using these corrected scattering parameters, reflectance (or emissivity via Kirchhoff's law) is computed with the invariant imbedding solution to the radiative transfer equation. For this work we modeled the emissivity spectrum of the 3.3 µm particle size fraction of enstatite, representing common mineralogical and particle size components of regoliths, at mid-infrared wavelengths (5 - 50 µm). The spectrum modeled by the T-matrix method with the static structure factor correction at moderate packing densities (filling factors of 0.1 - 0.2) fit the corresponding laboratory measurement better than the spectrum modeled by the equivalent method without the static structure factor correction. Future work will test the combination of the superposition T-matrix method and the static structure factor correction for larger particle sizes and polydispersed clusters in search of the most effective modeling of the spectra of densely packed particulate media.
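    A standard closed form often used in this context is the zero-angle limit of the Percus-Yevick hard-sphere static structure factor, S(0) = (1 - f)^4 / (1 + 2f)^2, which scales scattering by the filling factor f. The paper's full correction is angle-dependent, so this is only an indicative sketch.

```python
def py_s0(f):
    """Zero-angle Percus-Yevick hard-sphere static structure factor:
    S(0) = (1 - f)**4 / (1 + 2*f)**2 for filling factor f."""
    return (1.0 - f) ** 4 / (1.0 + 2.0 * f) ** 2

# Dense packing suppresses scattering relative to independent particles
s_dilute = py_s0(0.0)     # independent-scattering limit
s_moderate = py_s0(0.2)   # filling factor at the upper end quoted above
```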

  7. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power by combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281

  8. Delay correction model for estimating bus emissions at signalized intersections based on vehicle specific power distributions.

    PubMed

    Song, Guohua; Zhou, Xixi; Yu, Lei

    2015-05-01

    The intersection is one of the biggest emission points for buses and also a high exposure site for people. Several traffic performance indexes have been developed and widely used for intersection evaluations. However, few studies have focused on the relationship between these indexes and emissions at intersections. This paper proposes a model that relates emissions to two commonly used measures of effectiveness (i.e. delay time and number of stops) by using bus activity data and emission data at intersections. First, with a large number of field instantaneous emission data and corresponding activity data collected by the Portable Emission Measurement System (PEMS), emission rates are derived for different vehicle specific power (VSP) bins. Then, 2002 sets of trajectory data, equivalent to about 140,000 sets of second-by-second activity data, are obtained from Global Positioning System (GPS)-equipped diesel buses in Beijing, and the delay and the emission factors of each trajectory are estimated. Next, by using baseline emission factors for two types of intersections, e.g. the Arterial @ Arterial Intersection and the Arterial @ Collector, delay correction factors are calculated for the two types of intersections at different congestion levels. Finally, delay correction models are established for adjusting emission factors for each type of intersection and different numbers of stops. A comparative analysis between estimated and field emission factors demonstrates that the delay correction model is reliable. Copyright © 2015 Elsevier B.V. All rights reserved.
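    The VSP-binned emission-factor calculation the abstract describes can be sketched roughly as follows; the VSP coefficients, bin boundaries, emission rates, and trajectory are all illustrative stand-ins, not the paper's PEMS-derived values.

```python
def vsp_kw_per_tonne(v, a, grade=0.0):
    """Vehicle specific power (kW/t) for speed v (m/s) and acceleration
    a (m/s^2); the coefficients here are generic illustrations."""
    g = 9.81
    return v * (1.1 * a + g * grade + 0.092) + 0.00021 * v ** 3

def emission_rate(vsp):
    """Illustrative bus emission rate (g/s) per coarse VSP bin."""
    if vsp < 0.0:
        return 0.02
    if vsp < 4.0:
        return 0.05
    return 0.10

# A short second-by-second trajectory through a signalized intersection
speeds = [10.0, 6.0, 2.0, 0.0, 0.0, 3.0, 7.0, 10.0]   # m/s
accels = [0.0, -4.0, -4.0, -2.0, 0.0, 3.0, 4.0, 3.0]  # m/s^2

grams = sum(emission_rate(vsp_kw_per_tonne(v, a)) for v, a in zip(speeds, accels))
distance_km = sum(speeds) / 1000.0             # 1 s steps -> metres -> km
emission_factor = grams / distance_km          # g/km for this trajectory
```

    Comparing such trajectory-level emission factors against an uncongested baseline, grouped by delay, is the gist of the delay correction factors.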

  9. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the reactivity measured from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode differs slightly from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, as corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections, which are not simulated in the calculational model; and (3) quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues and focuses on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel configurations in the fast zone: both high (90%) and medium (36%), medium (36%) only, or low (21%) enriched uranium fuel.

  10. Adjustment of spatio-temporal precipitation patterns in a high Alpine environment

    NASA Astrophysics Data System (ADS)

    Herrnegger, Mathew; Senoner, Tobias; Nachtnebel, Hans-Peter

    2018-01-01

    This contribution presents a method for correcting the spatial and temporal distribution of precipitation fields in a mountainous environment. The approach is applied within a flood forecasting model in the Upper Enns catchment in the Central Austrian Alps. Precipitation exhibits a large spatio-temporal variability in Alpine areas. Additionally, the density of the monitoring network is low and measurements are subject to major errors. This can lead to significant deficits in water balance estimation and stream flow simulations, e.g. in flood forecasting models. Therefore, precipitation correction factors are frequently applied. For the presented study, a multiplicative, stepwise linear correction model is implemented in the rainfall-runoff model COSERO to adjust the precipitation pattern as a function of elevation. To account for the local meteorological conditions, the correction model is derived for two elevation zones: (1) valley floors up to 2000 m a.s.l. and (2) above 2000 m a.s.l. to the mountain peaks. Measurement errors also depend on the precipitation type, with higher magnitudes in winter months during snowfall. Therefore, separate correction factors for winter and summer months are additionally estimated. Significant improvements in the runoff simulations could be achieved, not only in the long-term water balance simulation and the overall model performance, but also in the simulation of flood peaks.
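    A stepwise multiplicative correction of this kind reduces to a small lookup keyed on elevation zone and season; the 2000 m zone boundary follows the abstract, while the factor values below are purely illustrative.

```python
def precip_correction(p_mm, elevation_m, month):
    """Multiplicative precipitation correction by elevation zone and
    season; the factor values are invented placeholders, not the
    COSERO calibration results."""
    winter = month in (11, 12, 1, 2, 3, 4)   # snowfall-dominated months
    if elevation_m <= 2000:
        factor = 1.10 if winter else 1.05    # valley floors to 2000 m a.s.l.
    else:
        factor = 1.30 if winter else 1.15    # above 2000 m a.s.l.
    return p_mm * factor

corrected = precip_correction(10.0, 2500, 1)   # high zone, January
```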

  11. Refraction error correction for deformation measurement by digital image correlation at elevated temperature

    NASA Astrophysics Data System (ADS)

    Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji

    2017-03-01

    An effective correction model is proposed to eliminate the refraction error caused by the optical window of a furnace in digital image correlation (DIC) deformation measurement in a high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model through correlation calculations under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of the DIC method by removing refraction errors from measured full-field displacements in a high-temperature environment.
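    The window-induced error is closely related to the lateral ray shift through a plane-parallel plate given by Snell's law; this generic optics sketch is not the paper's double-deck model, and the thickness, angle, and refractive index below are illustrative.

```python
import math

def plate_ray_shift(t_mm, incidence_deg, n_glass=1.5):
    """Lateral displacement of a ray crossing a plane-parallel glass
    plate of thickness t_mm (Snell's law); a textbook optics result,
    not the paper's exact correction factor."""
    i = math.radians(incidence_deg)
    return t_mm * math.sin(i) * (
        1.0 - math.cos(i) / math.sqrt(n_glass ** 2 - math.sin(i) ** 2)
    )

# Apparent displacement seen through the window includes this shift;
# removing it recovers the true in-plane displacement.
shift = plate_ray_shift(5.0, 30.0)   # 5 mm plate, 30 deg viewing angle
```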

  12. A measurement theory of illusory conjunctions.

    PubMed

    Prinzmetal, William; Ivry, Richard B; Beck, Diane; Shimizu, Naomi

    2002-04-01

    Illusory conjunctions refer to the incorrect perceptual combination of correctly perceived features, such as color and shape. Research on the phenomenon has been hampered by the lack of a measurement theory that accounts for guessing features, as well as the incorrect combination of correctly perceived features. Recently, several investigators have suggested using multinomial models as a tool for measuring feature integration. The authors examined the adequacy of these models in 2 experiments by testing whether model parameters reflect changes in stimulus factors. In a third experiment, confidence ratings were used as a tool for testing the model. Multinomial models accurately reflected both variations in stimulus factors and observers' trial-by-trial confidence ratings.

  13. Improving satellite retrievals of NO2 in biomass burning regions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Martin, R. V.; Lamsal, L. N.; Mao, J.; Cohen, R. C.; Anderson, B. E.

    2010-12-01

    The quality of space-based nitrogen dioxide (NO2) retrievals from solar backscatter depends on a priori knowledge of the NO2 profile shape as well as the effects of atmospheric scattering. These effects are characterized by the air mass factor (AMF) calculation. Calculation of the AMF combines a radiative transfer calculation with a priori information about aerosols and NO2 profiles (shape factors), which are usually taken from a chemical transport model. In this work we assess the impact of biomass burning emissions on the AMF using the LIDORT radiative transfer model and a GEOS-Chem simulation based on a daily fire emissions inventory (FLAMBE). We evaluate the GEOS-Chem aerosol optical properties and NO2 shape factors using in situ data from the ARCTAS summer 2008 (North America) and DABEX winter 2006 (western Africa) experiments. Sensitivity studies are conducted to assess the impact of biomass burning on the aerosols and the NO2 shape factors used in the AMF calculation. The mean aerosol correction over boreal fires is negligible (+3%), in contrast with a large reduction (-18%) over African savanna fires. The change in sign and magnitude between boreal forest and savanna fires appears to be driven by the shielding effects that arise from the greater biomass burning aerosol optical thickness (AOT) above the African biomass burning NO2. In agreement with previous work, the single scattering albedo (SSA) also affects the aerosol correction. We further investigated the effect of clouds on the aerosol correction. For a fixed AOT, the aerosol correction can increase from 20% to 50% when the cloud fraction increases from 0 to 30%. Over both boreal and savanna fires, the greatest impact on the AMF is from the fire-induced change in the NO2 profile (shape factor correction), which decreases the AMF by 38% over the boreal fires and by 62% over the savanna fires.
Combining the aerosol and shape factor corrections results in small differences compared to the shape factor correction alone (without the aerosol correction), indicating that a shape-factor-only correction is a good approximation of the total AMF correction associated with fire emissions. We use this result to define a measurement-based correction of the AMF based on the relationship between the slant column variability and the variability of the shape factor in the lower troposphere. This method may be generalized to other types of emission sources.
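    A shape-factor-weighted AMF of the kind discussed above can be sketched as a discrete sum of a scattering-weight profile against a normalized NO2 shape factor; every profile and constant below is illustrative, not GEOS-Chem or LIDORT output.

```python
import numpy as np

# Discretized altitude grid (km) and illustrative profiles
dz = 1.0                                   # layer thickness (km)
z = np.arange(0.0, 11.0) * dz              # altitude levels
scattering_weight = np.exp(-z / 8.0)       # illustrative sensitivity profile
shape_factor = np.exp(-z / 1.5)            # fire-like boundary-layer NO2 profile
shape_factor = shape_factor / np.sum(shape_factor * dz)  # normalize to unit column

# AMF as a geometric factor modulated by the weighted shape factor;
# changing the shape factor (e.g. fire-induced) changes the AMF.
amf_geometric = 2.0                        # illustrative geometric AMF
amf = amf_geometric * float(np.sum(scattering_weight * shape_factor * dz))
```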

  14. Improving Hybrid III injury assessment in steering wheel rim to chest impacts using responses from finite element Hybrid III and human body model.

    PubMed

    Holmqvist, Kristian; Davidsson, Johan; Mendoza-Vazquez, Manuel; Rundberget, Peter; Svensson, Mats Y; Thorn, Stefan; Törnvall, Fredrik

    2014-01-01

    The main aim of this study was to improve the quality of injury risk assessments in steering wheel rim to chest impacts when using the Hybrid III crash test dummy in frontal heavy goods vehicle (HGV) collision tests. Correction factors for chest injury criteria were calculated as the ratios of the model chest injury parameters between the finite element (FE) Hybrid III, evaluated in relevant load cases, and the Total Human Model for Safety (THUMS). These factors are proposed as compensation for Hybrid III measurements in crash tests where steering wheel rim to chest impacts occur. The study was conducted in an FE environment using an FE-Hybrid III model and the THUMS. Two impactor shapes were used, a circular hub and a long, thin horizontal bar. Chest impacts at velocities ranging from 3.0 to 6.0 m/s were simulated at 3 impact height levels. Ratios between FE-Hybrid III and THUMS chest injury parameters, maximum chest compression C_max and maximum viscous criterion VC_max, were calculated for the different chest impact conditions to form a set of correction factors. The definition of the correction factor is based on the assumption that the response from a circular hub impact to the middle of the chest is well characterized and that injury risk measures are independent of impact height. The current limits for these chest injury criteria were used as a basis to develop correction factors that compensate for the limitations in biofidelity of the Hybrid III in steering wheel rim to chest impacts. The hub and bar impactors produced considerably higher C_max and VC_max responses in the THUMS compared to the FE-Hybrid III. The correction factors for the responses of the FE-Hybrid III showed that the criteria responses for the bar impactor were consistently overestimated. Ratios based on Hybrid III and THUMS responses provided correction factors for the Hybrid III responses ranging from 0.84 to 0.93.
These factors can be used to estimate C_max and VC_max values when the Hybrid III is used in crash tests in which steering wheel rim to chest interaction occurs. For the FE-Hybrid III, bar impacts caused higher chest deflection than hub impacts, although the contrary was observed with the more humanlike THUMS. Correction factors were developed that can be used to correct the Hybrid III chest responses. Higher injury criteria capping limits for steering wheel impacts are acceptable. Supplemental materials are available for this article. Go to the publisher's online edition of Traffic Injury Prevention to view the supplemental file.

  15. SU-F-T-367: Using PRIMO, a PENELOPE-Based Software, to Improve the Small Field Dosimetry of Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benmakhlouf, H; Andreo, P; Brualla, L

    2016-06-15

    Purpose: To calculate output correction factors for Varian Clinac 2100iX beams for seven small field detectors and use the values to determine the small field output factors for the linacs at Karolinska University Hospital. Methods: Phase space files (psf) for square fields between 0.25 cm and 10 cm were calculated using the PENELOPE-based PRIMO software. The linac MC model was tuned by comparing PRIMO-estimated and experimentally determined depth doses and lateral dose profiles for 40 cm × 40 cm fields. The calculated psf were used as radiation sources to calculate the correction factors of IBA and PTW detectors with the code penEasy/PENELOPE. Results: The optimal tuning parameters of the MC linac model in PRIMO were a 5.4 MeV incident electron energy and zero energy spread, focal spot size, and beam divergence. Correction factors obtained for the liquid ion chamber (PTW-T31018) are within 1% down to 0.5 cm fields. For unshielded diodes (IBA-EFD, IBA-SFD, PTW-T60017 and PTW-T60018) the corrections are up to 2% at intermediate fields (>1 cm side), reaching −11% for fields smaller than 1 cm. Shielded diode (IBA-PFD and PTW-T60016) corrections vary with field size from 0 to −4%. Volume averaging effects are found for most detectors in 0.25 cm fields. Conclusion: Good agreement was found between correction factors based on PRIMO-generated psf and those from other publications. The calculated factors will be implemented in output factor measurements (using several detectors) in the clinic. PRIMO is a user-friendly general code capable of generating small field psf and can be used without having to code one's own linac geometries. It can therefore be used to improve clinical dosimetry, especially in the commissioning of linear accelerators. Important dosimetry data, such as dose profiles and output factors, can be determined more accurately for a specific machine, geometry and setup by using PRIMO together with an MC model of the detector used.

  16. A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    NASA Astrophysics Data System (ADS)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite imagery such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, the accurate sensor calibration parameters and atmospheric conditions needed by physics-based topographic correction models are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data; the model was tested and verified with image data from the Chinese satellites HJ and GF. The results show that the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.

  17. Regression-Based Norms for a Bi-factor Model for Scoring the Brief Test of Adult Cognition by Telephone (BTACT).

    PubMed

    Gurnani, Ashita S; John, Samantha E; Gavett, Brandon E

    2015-05-01

    The current study developed regression-based normative adjustments for a bi-factor model of the Brief Test of Adult Cognition by Telephone (BTACT). Archival data from the Midlife Development in the United States-II Cognitive Project were used to develop eight separate linear regression models that predicted bi-factor BTACT scores, accounting for age, education, gender, and occupation, alone and in various combinations. All regression models provided statistically significant fit to the data. A three-predictor regression model fit best and accounted for 32.8% of the variance in the global bi-factor BTACT score. The fit of the regression models was not improved by gender. Eight different regression models are presented to allow the user flexibility in applying demographic corrections to the bi-factor BTACT scores. Occupation corrections, while not widely used, may provide useful demographic adjustments for adult populations or for those individuals who have attained an occupational status not commensurate with expected educational attainment. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
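    A regression-based norm of this type is applied by standardizing the observed score against its demographically predicted value; the coefficients below are invented placeholders, not the published BTACT equations.

```python
def adjusted_z(raw_score, age, education, coefs=(-0.02, 0.05, 0.0), sd_resid=1.0):
    """Regression-based norm: z = (raw - predicted) / residual SD.
    The coefficient values are hypothetical illustrations only."""
    b_age, b_edu, intercept = coefs
    predicted = intercept + b_age * age + b_edu * education
    return (raw_score - predicted) / sd_resid

# Example: a raw factor score of 0.5 for a 60-year-old with 16 years
# of education, under the placeholder coefficients above
z = adjusted_z(0.5, age=60, education=16)
```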

  18. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Tanny, S; Parsai, E

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may be only weakly dependent on machine-to-machine variations, allowing for determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian TrueBeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian TrueBeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. The percent change of the correction factors for the chambers is presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6 FFF and 6 MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6 FFF and 6 MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific corrections may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference conditions.

  19. Scoring correction for MMPI-2 Hs scale with patients experiencing a traumatic brain injury: a test of measurement invariance.

    PubMed

    Alkemade, Nathan; Bowden, Stephen C; Salzman, Louis

    2015-02-01

    It has been suggested that MMPI-2 scoring requires removal of some items when assessing patients after a traumatic brain injury (TBI). Gass (1991; MMPI-2 interpretation and closed head injury: A correction factor. Psychological Assessment, 3, 27-31) proposed a correction procedure in line with the hypothesis that MMPI-2 item endorsement may be affected by symptoms of TBI. This study assessed the validity of the Gass correction procedure using a sample of patients with a TBI (n = 242) and a random subset of the MMPI-2 normative sample (n = 1,786). The correction procedure implies a failure of measurement invariance across populations. This study examined measurement invariance of one of the MMPI-2 scales (Hs) that includes TBI correction items. A four-factor model of the MMPI-2 Hs items was defined. The factor model was found to meet the criteria for partial measurement invariance. Analysis of the change in sensitivity and specificity values implied by partial measurement invariance failed to indicate a significant practical impact of partial invariance. Overall, the results support continued use of all Hs items to assess psychological well-being in patients with TBI. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses particle size at formation as the reference state rather than dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity that are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes are weakly dependent on ambient temperature, these factors can be regarded as temperature independent. The correction factor with respect to the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor with respect to the size at formation has the advantage of being independent of dry size and relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.
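As a sketch of how such a factor would be applied, the snippet below evaluates a polynomial in local relative humidity and scales a reference diameter. The polynomial coefficients are hypothetical placeholders, not the paper's fitted values; only the stated validity range of 0.45 to 0.99 RH comes from the record.

```python
import numpy as np

# Hypothetical placeholder coefficients (highest power first); the actual
# fitted polynomials are given in the paper and are not reproduced here.
HYPOTHETICAL_COEFFS = [0.8, -1.0, 1.2]

def size_correction_factor(rh, coeffs=HYPOTHETICAL_COEFFS):
    """Polynomial correction factor as a function of local relative humidity."""
    if not 0.45 <= rh <= 0.99:
        raise ValueError("correction stated valid only for 0.45 <= RH <= 0.99")
    return np.polyval(coeffs, rh)

def corrected_diameter(d_ref, rh, coeffs=HYPOTHETICAL_COEFFS):
    """Scale a reference diameter (size at formation, or size at 80% RH,
    depending on which factor is used) to the local relative humidity."""
    return d_ref * size_correction_factor(rh, coeffs)
```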

  1. Directivity in NGA earthquake ground motions: Analysis using isochrone theory

    USGS Publications Warehouse

    Spudich, P.; Chiou, B.S.J.

    2008-01-01

    We present correction factors that may be applied to the ground motion prediction relations of Abrahamson and Silva, Boore and Atkinson, Campbell and Bozorgnia, and Chiou and Youngs (all in this volume) to model the azimuthally varying distribution of the GMRotI50 component of ground motion (commonly called 'directivity') around earthquakes. Our correction factors may be used for planar or nonplanar faults having any dip or slip rake (faulting mechanism). Our correction factors predict directivity-induced variations of spectral acceleration that are roughly half of the strike-slip variations predicted by Somerville et al. (1997), and use of our factors reduces record-to-record sigma by about 2-20% at 5 sec or greater period. © 2008, Earthquake Engineering Research Institute.

  2. A sun-crown-sensor model and adapted C-correction logic for topographic correction of high resolution forest imagery

    NASA Astrophysics Data System (ADS)

    Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.

    2014-10-01

    Canopy shadowing mediated by topography is an important source of radiometric distortion in remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model is a significant improvement over correction based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent in fine resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model, the sun-crown-sensor (SCnS) model, based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels, forming another variant of the SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between the terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. 
We analyzed these improvements with model comparison on the red and near infrared bands. The advantages of SCnS + C and SCnS + W on both bands are expected to facilitate forest classification and change detection applications.

  3. Reliability of IGBT in a STATCOM for Harmonic Compensation and Power Factor Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak

    With smart grid integration, there is a need to characterize the reliability of a power system by including the reliability of power semiconductors in grid-related applications. In this paper, the reliability of IGBTs in a STATCOM is presented for two different applications: power factor correction and harmonic elimination. The STATCOM model is developed in EMTP, and analytical equations for average conduction losses in an IGBT and a diode are derived and compared with experimental data. A commonly used reliability model is used to predict the reliability of the IGBTs.

  4. Properties of Vector Preisach Models

    NASA Technical Reports Server (NTRS)

    Kahler, Gary R.; Patel, Umesh D.; Torre, Edward Della

    2004-01-01

    This paper discusses rotational anisotropy and rotational accommodation of magnetic particle tape, effects that impact performance during the reading and writing stages of the recording process. We introduce the reduced vector model as the basis for the computations. Rotational magnetization models must accurately compute the anisotropic characteristics of ellipsoidally magnetizable media. An ellipticity factor is derived for these media that computes the two-dimensional magnetization trajectory for all applied fields. An orientation correction must then be applied to the computed rotational magnetization: for isotropic materials, such a correction has already been developed and presented; for anisotropic materials, an orientation correction is introduced here.

  5. An experimental comparison of ETM+ image geometric correction methods in the mountainous areas of Yunnan Province, China

    NASA Astrophysics Data System (ADS)

    Wang, Jinliang; Wu, Xuejiao

    2010-11-01

    Geometric correction of imagery is a basic application of remote sensing technology, and its precision directly affects the accuracy and reliability of downstream applications. The accuracy of geometric correction depends on many factors, including the correction model used, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) The correction errors were more than one pixel, and sometimes several pixels, when the polynomial model was used; the correction accuracy was not stable when the Delaunay model was used; and the correction errors were less than one pixel when the collinearity equation was used. (2) When 6, 9, 25 and 35 GCPs were selected randomly for geometric correction using the polynomial correction model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest neighbor gave the best contrast and the fastest resampling but poor continuity of pixel gray values, while cubic convolution gave the worst contrast and the longest computation time. Overall, bilinear resampling gave the best result.

  6. Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.
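Schematically, the corrected interval described above just widens the usual t-based interval by a multiplicative factor. The factor itself comes from the true error second moment matrix, which is not reproduced in the record, so the sketch below simply takes it as an input.

```python
# Schematic sketch of the corrected individual confidence/prediction
# interval: the t-statistic is multiplied by a correction factor, as the
# record describes. The correction factor would be computed from the true
# error second moment matrix; here it is a hypothetical input value.

def corrected_interval(estimate, std_error, t_stat, correction_factor):
    """Return (lower, upper) bounds of the corrected interval."""
    half_width = correction_factor * t_stat * std_error
    return estimate - half_width, estimate + half_width
```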

  7. Experience gained in testing a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  8. Revisit to the RXTE and ASCA Data for GRO J1655-40: Effects of Radiative Transfer in Corona and Color Hardening in the Disk

    NASA Technical Reports Server (NTRS)

    Zhang, S. Nan; Zhang, Xiaoling; Wu, Xuebing; Yao, Yangsen; Sun, Xuejun; Xu, Haiguang; Cui, Wei; Chen, Wan; Harmon, B. A.; Robinson, C. R.

    1999-01-01

    The results of spectral modeling of the data for a series of RXTE observations and four ASCA observations of GRO J1655-40 are presented. The thermal Comptonization model is used instead of the power-law model for the hard component of the two-component continuum spectra. The previously reported dramatic variations of the apparent inner disk radius of GRO J1655-40 during its outburst may be due to inverse Compton scattering in the hot corona. A procedure is developed for making the radiative transfer correction to the fitting parameters from RXTE data, and a more stable inner disk radius is obtained. A practical process for determining the color correction (hardening) factor from observational data is proposed and applied to the four ASCA observations of GRO J1655-40. We found that the color correction factor may vary significantly between different observations and that the corrected physical inner disk radius remains reasonably stable over a large range of luminosities and spectral states.

  9. Software tool for portal dosimetry research.

    PubMed

    Vial, P; Hunt, P; Greer, P B; Oliver, L; Baldock, C

    2008-09-01

    This paper describes a software tool developed for research into the use of an electronic portal imaging device (EPID) to verify dose for intensity modulated radiation therapy (IMRT) beams. A portal dose image prediction (PDIP) model that predicts the EPID response to IMRT beams has been implemented into a commercially available treatment planning system (TPS). The software tool described in this work was developed to modify the TPS PDIP model by incorporating correction factors into the predicted EPID image to account for the difference in EPID response to open beam radiation and multileaf collimator (MLC) transmitted radiation. The processes performed by the software tool include: (i) reading the MLC file and the PDIP from the TPS; (ii) calculating the fraction of beam-on time that each point in the IMRT beam is shielded by MLC leaves; (iii) interpolating correction factors from look-up tables; (iv) creating a corrected PDIP image from the product of the original PDIP and the correction factors and writing the corrected image to file; and (v) displaying, analysing, and exporting various image datasets. The software tool was developed using the Microsoft Visual Studio .NET framework with the C# compiler. The operation of the software tool was validated. This software provides useful tools for EPID dosimetry research, and it is being utilised and further developed in ongoing EPID dosimetry and IMRT dosimetry projects.
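The core correction step, interpolating per-pixel correction factors from a look-up table keyed on MLC-shielded fraction and multiplying them into the predicted image, can be sketched as follows. The array names and look-up table values are hypothetical; the record does not give the actual tables.

```python
import numpy as np

def corrected_pdip(pdip, mlc_shielded_fraction, lut_fraction, lut_cf):
    """Interpolate correction factors from a look-up table keyed on the
    fraction of beam-on time each pixel is shielded by MLC leaves, then
    multiply them into the predicted EPID image (as in steps ii-iv above)."""
    cf = np.interp(mlc_shielded_fraction, lut_fraction, lut_cf)
    return pdip * cf
```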

  10. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

    The special topography of mountainous terrain induces retrieval distortion within the same land-cover class and in surface spectral lines. In order to improve the accuracy of research on topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be based on a statistical-empirical model or a physical model, among which methods based on digital elevation model (DEM) data are most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects on Landsat TM images, whose 30-meter spatial resolution can be easily obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high spatial resolution images, such as Quickbird and Ikonos, but there is little related research on topographic correction of CBERS-02B images. In this study, mountainous terrain in Liaoning was taken as the study area. The original 15-meter digital elevation model data was interpolated to 2.36 meters. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were applied to correct the topographic effect, and the corrected results were compared. For each corrected image, scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal were plotted, and the mean value, standard deviation, slope of the scatter diagram, and separation factor were calculated. The analysis shows that shadow is weaker in the corrected images than in the original images, the three-dimensional effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is reduced. The Minnaert correction method was the most effective. These results demonstrate that the established correction methods can be successfully adapted to CBERS-02B images. 
The DEM data can be interpolated step by step to approximate the corresponding spatial resolution when high spatial resolution elevation data is hard to obtain.

  11. SU-F-T-23: Correspondence Factor Correction Coefficient for Commissioning of Leipzig and Valencia Applicators with the Standard Imaging IVB 1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaghue, J; Gajdos, S

    Purpose: To determine the correction factor of the correspondence factor for the Standard Imaging IVB 1000 well chamber for commissioning of Elekta’s Leipzig and Valencia skin applicators. Methods: The Leipzig and Valencia applicators are designed to treat small skin lesions by collimating irradiation to the treatment area. Published output factors are used to calculate dose rates for clinical treatments. To validate onsite applicators, a correspondence factor (CFrev) is measured and compared to published values. The published CFrev is based on well chamber model SI HDR 1000 Plus. The CFrev is determined by correlating raw values of the source calibration setup (Rcal,raw) and values taken when each applicator is mounted on the same well chamber with an adapter (Rapp,raw). The CFrev is calculated by using the equation CFrev = Rapp,raw/Rcal,raw. The CFrev was measured for each applicator in both the SI HDR 1000 Plus and the SI IVB 1000. A correction factor, CFIVB, for the SI IVB 1000 was determined by finding the ratio of CFrev (SI IVB 1000) and CFrev (SI HDR 1000 Plus). Results: The average correction factors at dwell position 1121 were found to be 1.073, 1.039, 1.209, 1.091, and 1.058 for the Valencia V2, Valencia V3, Leipzig H1, Leipzig H2, and Leipzig H3 respectively. There were no significant variations in the correction factor for dwell positions 1119 through 1121. Conclusion: By using the appropriate correction factor, the correspondence factors for the Leipzig and Valencia surface applicators can be validated with the Standard Imaging IVB 1000. This allows users to correlate their measurements with the Standard Imaging IVB 1000 to the published data. The correction factor is included in the equation for the CFrev as follows: CFrev = Rapp,raw/(CFIVB*Rcal,raw). Each individual applicator has its own correction factor, so care must be taken that the appropriate factor is used.
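The correspondence-factor arithmetic quoted in this record reduces to simple ratios; a minimal sketch (the reading values used below are hypothetical):

```python
# Sketch of the equations quoted in the record above:
#   CFrev = Rapp,raw / Rcal,raw
#   CFIVB = CFrev(SI IVB 1000) / CFrev(SI HDR 1000 Plus)
#   corrected: CFrev = Rapp,raw / (CFIVB * Rcal,raw)

def cf_rev(r_app_raw, r_cal_raw):
    """Correspondence factor from applicator and calibration readings."""
    return r_app_raw / r_cal_raw

def cf_ivb(cf_rev_ivb1000, cf_rev_hdr1000plus):
    """Well-chamber correction factor: ratio of CFrev in the two chambers."""
    return cf_rev_ivb1000 / cf_rev_hdr1000plus

def cf_rev_corrected(r_app_raw, r_cal_raw, cf_ivb_value):
    """CFrev measured on the SI IVB 1000, corrected back to the published basis."""
    return r_app_raw / (cf_ivb_value * r_cal_raw)
```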

  12. The perturbation correction factors for cylindrical ionization chambers in high-energy photon beams.

    PubMed

    Yoshiyama, Fumiaki; Araki, Fujio; Ono, Takeshi

    2010-07-01

    In this study, we calculated perturbation correction factors for cylindrical ionization chambers in high-energy photon beams by using Monte Carlo simulations. We modeled four Farmer-type cylindrical chambers with the EGSnrc/Cavity code and calculated the cavity or electron fluence correction factor, P(cav), the displacement correction factor, P(dis), the wall correction factor, P(wall), the stem correction factor, P(stem), the central electrode correction factor, P(cel), and the overall perturbation correction factor, P(Q). The calculated P(dis) values for PTW30010/30013 chambers were 0.9967 +/- 0.0017, 0.9983 +/- 0.0019, and 0.9980 +/- 0.0019, respectively, for (60)Co, 4 MV, and 10 MV photon beams. The value for a (60)Co beam was about 1.0% higher than the 0.988 value recommended by the IAEA TRS-398 protocol. The P(dis) values showed a substantial discrepancy compared with those of IAEA TRS-398 and AAPM TG-51 at all photon energies. The P(wall) values were from 0.9994 +/- 0.0020 to 1.0031 +/- 0.0020 for PTW30010 and from 0.9961 +/- 0.0018 to 0.9991 +/- 0.0017 for PTW30011/30012, in the range of (60)Co-10 MV. The P(wall) values for PTW30011/30012 were around 0.3% lower than those of the IAEA TRS-398. Also, the chamber response with and without a 1 mm PMMA water-proofing sleeve agreed within their combined uncertainty. The calculated P(stem) values ranged from 0.9945 +/- 0.0014 to 0.9965 +/- 0.0014, but they are not considered in current dosimetry protocols; the values showed no significant dependence on beam quality. P(cel) for a 1 mm aluminum electrode agreed within 0.3% with that of IAEA TRS-398. The overall perturbation factors agreed within 0.4% with those for IAEA TRS-398.
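Assuming, as in standard chamber dosimetry formalisms, that the overall perturbation factor is the product of the individual corrections (the record lists the components but does not spell out this composition), P(Q) can be sketched as:

```python
import math

# Sketch: overall perturbation factor P(Q) as the product of the individual
# corrections P(cav), P(dis), P(wall), P(stem), P(cel). The product
# composition is an assumption in line with standard protocols, and the
# numerical values used in testing are placeholders, not the paper's results.

def overall_perturbation(factors):
    """Multiply the individual perturbation corrections together."""
    return math.prod(factors.values())
```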

  13. Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.

    PubMed

    Ripple, Dean C; Hu, Zhishang

    2016-03-01

    Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.

  14. Ionization correction factors for H II regions in blue compact dwarf galaxies

    NASA Astrophysics Data System (ADS)

    Holovatyi, V. V.; Melekh, B. Ya.

    2002-08-01

    Energy distributions in the spectra of the ionizing nuclei of H II regions at λ <= 91.2 nm were calculated. A grid of photoionization models of 270 H II regions was constructed. The free parameters of the model grid are the hydrogen density nH in the nebular gas, the filling factor, the energy Lc-spectrum of the ionizing nuclei, and the metallicity. The chemical composition from the studies of Izotov et al. was used for model grid initialization. The integral line spectra calculated for the photoionization models were used to determine the electron concentration ne, electron temperature Te, and ionic concentrations n(A+i)/n(H+) by the nebular gas diagnostic method. The averaged relative ionic abundances n(A+i)/n(H+) thus calculated were used to derive new expressions for ionization correction factors, which we recommend for the determination of abundances in the H II regions of blue compact dwarf galaxies.

  15. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.

    Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed for evenly-deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm{sup -2} to 1,000 mg cm{sup -2}.
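The two quantities the analysis above manipulates, the activity ratio used to compare sample distributions and the trial correction applied to the raw alpha count rate, are simple ratios and products; a minimal sketch with hypothetical count values:

```python
# Sketch of the quantities described in the record above. Counts and
# correction factors below are hypothetical placeholders.

def activity_ratio(beta_counts, alpha_counts):
    """Activity ratio beta/(beta + alpha) used to compare distributions."""
    return beta_counts / (beta_counts + alpha_counts)

def corrected_alpha_rate(raw_alpha_rate, correction_factor):
    """Apply a trial correction factor to the raw alpha count rate."""
    return raw_alpha_rate * correction_factor
```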

  16. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE PAGES

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.; ...

    2018-03-03

    Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed for evenly-deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm{sup -2} to 1,000 mg cm{sup -2}.

  17. The Etiology of Presbyopia, Contributing Factors, and Future Correction Methods

    NASA Astrophysics Data System (ADS)

    Hickenbotham, Adam Lyle

    Presbyopia has been a complicated problem for clinicians and researchers for centuries. Defining what constitutes presbyopia and what its primary causes are has long been a struggle for the vision and scientific community. Although presbyopia is a normal aging process of the eye, the continuous and gradual loss of accommodation is often dreaded and feared. If presbyopia were considered a disease, its global burden would be enormous, as it affects more than a billion people worldwide. In this dissertation, I explore factors associated with presbyopia and develop a model for explaining its onset. In this model, the onset of presbyopia is associated primarily with three factors: depth of focus, focusing ability (accommodation), and habitual reading (or task) distance. If any of these three factors could be altered sufficiently, the onset of presbyopia could be delayed or prevented. Based on this model, I then examine optical methods that would be effective in correcting for presbyopia by expanding depth of focus. Two methods that have been shown to be effective at expanding depth of focus are utilizing a small pupil aperture and generating higher order aberrations, particularly spherical aberration. I compare these two optical methods through the use of simulated designs, monitor testing, and visual performance metrics, and then apply them in subjects through an adaptive optics system that corrects aberrations using a wavefront aberrometer and deformable mirror. I then summarize my findings and speculate about the future of presbyopia correction.

  18. The two sides of the C-factor.

    PubMed

    Fok, Alex S L; Aregawi, Wondwosen A

    2018-04-01

    The aim of this paper is to investigate the effects on shrinkage strain/stress development of the lateral constraints at the bonded surfaces of resin composite specimens used in laboratory measurement. Using three-dimensional (3D) Hooke's law, a recently developed shrinkage stress theory is extended to 3D to include the additional out-of-plane strain/stress induced by the lateral constraints at the bonded surfaces through the Poisson's ratio effect. The model contains a parameter that defines the relative thickness of the boundary layers, adjacent to the bonded surfaces, that are under such multiaxial stresses. The resulting differential equation is solved for the shrinkage stress under different boundary conditions. The accuracy of the model is assessed by comparing the numerical solutions with a wide range of experimental data, which include those from both shrinkage strain and shrinkage stress measurements. There is good agreement between theory and experiments. The model correctly predicts the different instrument-dependent effects that a specimen's configuration factor (C-factor) has on shrinkage stress: for noncompliant stress-measuring instruments, shrinkage stress increases with the C-factor of the cylindrical specimen, while the opposite is true for compliant instruments. The model also provides a correction factor, which is a function of the C-factor, Poisson's ratio and boundary layer thickness of the specimen, for shrinkage strain measured using the bonded-disc method. For the resin composite examined, the boundary layers have a combined thickness that is ∼11.5% of the specimen's diameter. The theory provides a physical and mechanical basis for the C-factor using principles of engineering mechanics. The correction factor it provides allows the linear shrinkage strain of a resin composite to be obtained more accurately from the bonded-disc method. Published by Elsevier Ltd.

  19. SU-F-BRE-01: A Rapid Method to Determine An Upper Limit On a Radiation Detector's Correction Factor During the QA of IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamio, Y; Bouchard, H

    2014-06-15

    Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by (1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or (2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple and fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots has 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector were found to have corrections under 1% over a broad range of field modulations (0 - 0.12 for MF and 0 - 0.5 for UI). Other detectors have been shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.

  20. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

    A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the correction factors for beam quality produced from the difference in the sizes of the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution; in the method, we modeled the variable and inhomogeneous energy fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving a developed system of equations on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors (k_{Q_msr,Q}^{f_smf,f_ref}) were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1×1 cm{sup 2}, k_{Q_msr,Q}^{f_smf,f_ref} was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. The k_{Q_msr,Q}^{f_smf,f_ref} values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the k_{Q_msr,Q}^{f_smf,f_ref} values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers as well. We devised a method for determining k_{Q_msr,Q}^{f_smf,f_ref} from both the measurement of the output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not applicable in clinical settings.

  1. Chromatographic background drift correction coupled with parallel factor analysis to resolve coelution problems in three-dimensional chromatographic data: quantification of eleven antibiotics in tap water samples by high-performance liquid chromatography coupled with a diode array detector.

    PubMed

    Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin

    2013-08-09

    Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
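
    A minimal sketch of the projection idea, assuming the background drift is spanned by a few known background spectra (the published algorithm's details may differ):

```python
import numpy as np

def orthogonal_background_projection(X, B):
    """Project spectra onto the orthogonal complement of the background
    spectral subspace, removing background-drift contributions before
    PARAFAC decomposition.

    X: (n_samples, n_wavelengths) measured spectra
    B: (n_backgrounds, n_wavelengths) background spectra (e.g. blank runs)
    """
    Bt = B.T                      # columns span the background subspace
    P = Bt @ np.linalg.pinv(Bt)   # orthogonal projector onto span(B)
    return X - X @ P              # component of X orthogonal to span(B)
```

    After projection, each corrected spectrum is orthogonal to every background spectrum, so the drift no longer consumes PARAFAC factors.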

  2. Calculation of Coincidence Summing Correction Factors for an HPGe detector using GEANT4.

    PubMed

    Giubrone, G; Ortiz, J; Gallardo, S; Martorell, S; Bas, M C

    2016-07-01

The aim of this paper was to calculate the True Coincidence Summing Correction Factors (TSCFs) for an HPGe coaxial detector in order to correct the summing effect caused by the presence of (88)Y and (60)Co in a multigamma source used to obtain a calibration efficiency curve. Results were obtained for three volumetric sources using the Monte Carlo toolkit GEANT4. The first part of this paper deals with modeling the detector in order to obtain a simulated full-energy peak efficiency curve. A quantitative comparison between the measured and simulated values was made across the entire energy range under study. The True Summing Correction Factors were calculated for (88)Y and (60)Co using the full peak efficiencies obtained with GEANT4. This methodology was subsequently applied to (134)Cs, which presents a complex decay scheme. Copyright © 2016 Elsevier Ltd. All rights reserved.
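
    For intuition, the textbook summing-out correction for a simple two-gamma cascade fits in a few lines (this first-order formula is an illustration only; the paper computes the factors with GEANT4 for realistic decay schemes and volumetric sources):

```python
def summing_out_correction(total_eff_coincident):
    """First-order true-coincidence summing-out correction for the
    full-energy peak of a gamma emitted in cascade with a second photon:

        TSCF = 1 / (1 - eps_t)

    where eps_t is the *total* detection efficiency of the coincident
    gamma.  Counts lost from the peak by summing are restored by
    multiplying the observed peak area by this factor.
    """
    if not 0.0 <= total_eff_coincident < 1.0:
        raise ValueError("total efficiency must lie in [0, 1)")
    return 1.0 / (1.0 - total_eff_coincident)
```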

  3. Correction of the spectral calibration of the Joint European Torus core light detecting and ranging Thomson scattering diagnostic using ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawke, J.; Scannell, R.; Maslov, M.

    2013-10-15

This work isolated the cause of the observed discrepancy between the electron temperature (T_e) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray-light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray-light filters' transmission functions by variations in the incidence angles of the collected photons impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray-tracing models for the calibration and operational states of the diagnostic. The application of these correction factors resulted in an increase in the observed T_e, and in the partial if not complete removal of the observed discrepancy in measured T_e between the JET core LIDAR TS diagnostic, the High Resolution Thomson Scattering diagnostic, and the Electron Cyclotron Emission diagnostics.

  4. Scaling methane oxidation: From laboratory incubation experiments to landfill cover field conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abichou, Tarek, E-mail: abichou@eng.fsu.edu; Mahieu, Koenraad; Chanton, Jeff

    2011-05-15

Evaluating field-scale methane oxidation in landfill cover soils using numerical models is gaining interest in the solid waste industry, as research has made it clear that methane oxidation in the field is a complex function of climatic conditions, soil type, cover design, and the incoming flux of landfill gas from the waste mass. Numerical models can account for these parameters as they change with time and space under field conditions. In this study, we developed temperature and water-content correction factors for methane oxidation parameters. We also introduced a possible correction to account for the different soil structure under field conditions. These parameters were defined in laboratory incubation experiments performed on homogenized soil specimens and were used to predict the actual methane oxidation rates to be expected under field conditions. Water-content and temperature correction factors were obtained for the methane oxidation rate parameter to be used when modeling methane oxidation in the field. To predict in situ measured rates of methane oxidation with the model, it was necessary to set the half-saturation constant of methane and oxygen, K_m, to 5%, approximately five times larger than laboratory-measured values. We hypothesize that this discrepancy reflects differences in soil structure between homogenized soil conditions in the lab and the actual aggregated soil structure in the field. When all of these correction factors were re-introduced into the oxidation module of our model, it was able to reproduce surface emissions (as measured by static flux chambers) and percent oxidation (as measured by stable isotope techniques) within the range measured in the field.
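
    A hedged sketch of how such correction factors enter a rate law; the dual-substrate dependence on O2 and the paper's exact functional forms are omitted, and all names are illustrative:

```python
def ch4_oxidation_rate(c_ch4, v_max, k_m=0.05, f_temp=1.0, f_water=1.0):
    """Michaelis-Menten CH4 oxidation rate with multiplicative temperature
    and water-content correction factors.

    c_ch4:  CH4 concentration (volume fraction, e.g. 0.05 = 5%)
    v_max:  maximum oxidation rate from lab incubations
    k_m:    half-saturation constant; ~5% was needed to match field data,
            about five times the lab-derived value (per the abstract)
    f_temp, f_water: dimensionless correction factors (1.0 = lab reference)
    """
    return v_max * c_ch4 / (k_m + c_ch4) * f_temp * f_water
```

    At c_ch4 = k_m the uncorrected rate is v_max / 2, which is the defining property of the half-saturation constant.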

  5. Method of Calculating the Correction Factors for Cable Dimensioning in Smart Grids

    NASA Astrophysics Data System (ADS)

    Simutkin, M.; Tuzikova, V.; Tlusty, J.; Tulsky, V.; Muller, Z.

    2017-04-01

One of the main causes of overloading of electrical equipment by higher-harmonic currents is the large increase in the number of non-linear electricity consumers. Non-sinusoidal voltages and currents affect the operation of electrical equipment, reducing its lifetime, increasing voltage and power losses in the network, and reducing its capacity. Existing standards that limit the emission of higher-harmonic currents cannot guarantee that interference stays at a safe level in the power grid. The article presents a method for determining a correction factor to the long-term allowable current of a cable that accounts for this influence. Using mathematical models in the Elcut software, the thermal processes in the cable under non-sinusoidal current flow were described. The theoretical principles, methods, and mathematical models developed in the article allow the correction factor to be calculated, accounting for the effect of higher harmonics in the current spectrum, for network equipment under any type of non-linear load.
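
    As a hedged illustration of the idea (not the paper's Elcut-based finite-element method), an equal-losses argument yields a derating factor from the harmonic spectrum and the per-harmonic AC resistance:

```python
import math

def harmonic_derating_factor(harmonic_fractions, resistance_ratios):
    """Correction (derating) factor for the long-term allowable cable
    current under a non-sinusoidal load, from an equal-losses argument:
    keep total conductor losses equal to those of the rated sinusoidal
    current.

    harmonic_fractions: {h: I_h / I_rms}; the squares should sum to 1
    resistance_ratios:  {h: R_ac(h) / R_ac(1)}, rising with h due to
                        skin and proximity effects
    """
    loss_ratio = sum(c ** 2 * resistance_ratios[h]
                     for h, c in harmonic_fractions.items())
    return 1.0 / math.sqrt(loss_ratio)
```

    A purely sinusoidal load gives a factor of 1.0; harmonic content with elevated AC resistance pushes the factor below 1, reducing the allowable current.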

  6. [Baseline correction of spectrum for the inversion of chlorophyll-a concentration in the turbidity water].

    PubMed

    Wei, Yu-Chun; Wang, Guo-Xiang; Cheng, Chun-Mei; Zhang, Jing; Sun, Xiao-Peng

    2012-09-01

Suspended particulate matter is the main factor affecting remote-sensing inversion of chlorophyll-a concentration (Chl-a) in turbid water. Based on the optical properties of suspended material in water, the present paper proposes a linear baseline correction method to weaken the suspended-particle contribution to the above-water surface spectrum. The linear baseline is defined as the straight line connecting the reflectance at 450 and 750 nm, and baseline correction subtracts this baseline from the spectral reflectance. Analysis of in situ field data from Meiliangwan, Taihu Lake, in April 2011 and March 2010 shows that linear baseline correction of the spectrum can improve the inversion precision of Chl-a and produce better model diagnostics. For the March 2010 data, the RMSE of the band-ratio model built from the original spectrum is 4.11 mg·m(-3), while that built from the baseline-corrected spectrum is 3.58 mg·m(-3). Meanwhile, the residual distribution and homoscedasticity of the model built from the baseline-corrected spectrum are clearly improved. The model RMSE for April 2011 shows a similar result. The authors suggest using linear baseline correction as the spectral processing method to improve Chl-a inversion accuracy in turbid water without algal bloom.
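
    The described baseline correction is straightforward to implement; a sketch with NumPy (array names are illustrative):

```python
import numpy as np

def linear_baseline_correction(wavelengths, reflectance, lo=450.0, hi=750.0):
    """Subtract the straight line joining the reflectance values at 450 and
    750 nm from the spectrum, suppressing the broadband suspended-particle
    contribution (per the abstract above).

    wavelengths, reflectance: 1-D arrays of equal length
    """
    r_lo = np.interp(lo, wavelengths, reflectance)
    r_hi = np.interp(hi, wavelengths, reflectance)
    baseline = r_lo + (wavelengths - lo) * (r_hi - r_lo) / (hi - lo)
    return reflectance - baseline
```

    By construction the corrected reflectance is zero at both anchor wavelengths.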

  7. Effect of Multiple Scattering on the Compton Recoil Current Generated in an EMP, Revisited

    DOE PAGES

    Farmer, William A.; Friedman, Alex

    2015-06-18

Multiple scattering has historically been treated in EMP modeling through the obliquity factor. The validity of this approach is examined here. A simplified model problem, which correctly captures cyclotron motion, Doppler shifting due to the electron motion, and multiple scattering, is first considered. The simplified problem is solved in three ways: with the obliquity factor, Monte-Carlo, and Fokker-Planck finite-difference methods. Because of the Doppler effect, skewness occurs in the distribution. It is demonstrated that the obliquity factor does not correctly capture this skewness, but the Monte-Carlo and Fokker-Planck finite-difference approaches do. The obliquity factor and Fokker-Planck finite-difference approaches are then compared in a fuller treatment, which includes the initial Klein-Nishina distribution of the electrons and the momentum dependence of both drag and scattering. It is found that, in general, the obliquity factor is adequate for most situations. However, as the gamma energy increases and the Klein-Nishina distribution becomes more peaked in the forward direction, skewness in the distribution causes greater disagreement between the obliquity factor and a more accurate model of multiple scattering.

  8. Multiple robustness in factorized likelihood models.

    PubMed

    Molina, J; Rotnitzky, A; Sued, M; Robins, J M

    2017-09-01

    We consider inference under a nonparametric or semiparametric model with likelihood that factorizes as the product of two or more variation-independent factors. We are interested in a finite-dimensional parameter that depends on only one of the likelihood factors and whose estimation requires the auxiliary estimation of one or several nuisance functions. We investigate general structures conducive to the construction of so-called multiply robust estimating functions, whose computation requires postulating several dimension-reducing models but which have mean zero at the true parameter value provided one of these models is correct.

  9. Evaluation of thermal network correction program using test temperature data

    NASA Technical Reports Server (NTRS)

    Ishimoto, T.; Fink, L. C.

    1972-01-01

    An evaluation process to determine the accuracy of a computer program for thermal network correction is discussed. The evaluation is required since factors such as inaccuracies of temperatures, insufficient number of temperature points over a specified time period, lack of one-to-one correlation between temperature sensor and nodal locations, and incomplete temperature measurements are not present in the computer-generated information. The mathematical models used in the evaluation are those that describe a physical system composed of both a conventional and a heat pipe platform. A description of the models used, the results of the evaluation of the thermal network correction, and input instructions for the thermal network correction program are presented.

  10. Refined Use of Satellite Aerosol Optical Depth Snapshots to Constrain Biomass Burning Emissions in the GOCART Model

    NASA Technical Reports Server (NTRS)

    Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James

    2017-01-01

Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide the location and strength of burning aerosol sources. Our previous work (Petrenko et al., 2012) shows that satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD and, effectively, the assumed source strength. We now refine the satellite-snapshot method and investigate whether applying simple multiplicative emission correction factors to the widely used Global Fire Emission Database version 3 (GFEDv3) inventory can achieve regional-scale consistency between MODIS AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model. The model and satellite AOD are compared over a set of more than 900 BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. The AOD comparison presented here shows that regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. Additional analysis including a small-fire emission correction shows the complementary nature of correcting for source strength and adding missing sources, and also indicates that in some regions other factors may be significant in explaining model-satellite discrepancies. This work sets the stage for a larger intercomparison within the Aerosol Inter-comparisons between Observations and Models (AeroCom) multi-model biomass burning experiment. We discuss some of the possible factors affecting the remaining discrepancies between model simulations and observations, but await comparisons with other AeroCom models to draw further conclusions.
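
    The multiplicative correction idea can be sketched as a simple ratio of above-background AOD (the study's plume selection and regional averaging are more involved; names and sample values are illustrative):

```python
def emission_correction_factor(aod_observed, aod_modeled, aod_background=0.0):
    """Multiplicative correction factor for a fire's emission strength,
    from the ratio of observed (satellite) to modeled above-background
    AOD near the smoke plume.  The smoke contribution is taken as the
    AOD in excess of the local background in both datasets.
    """
    smoke_obs = aod_observed - aod_background
    smoke_mod = aod_modeled - aod_background
    if smoke_mod <= 0.0:
        raise ValueError("modeled smoke AOD must be positive")
    return smoke_obs / smoke_mod
```

    Scaling the inventory's source strength by this factor brings the simulated plume AOD in line with the satellite snapshot.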

  11. Estimating changes in mean body temperature for humans during exercise using core and skin temperatures is inaccurate even with a correction factor.

    PubMed

    Jay, Ollie; Reardon, Francis D; Webb, Paul; Ducharme, Michel B; Ramsay, Tim; Nettlefold, Lindsay; Kenny, Glen P

    2007-08-01

Changes in mean body temperature (DeltaT(b)) estimated by the traditional two-compartment model of "core" and "shell" temperatures and an adjusted two-compartment model incorporating a correction factor were compared with values derived by whole body calorimetry. Sixty participants (31 men, 29 women) cycled at 40% of peak O(2) consumption for 60 or 90 min in the Snellen calorimeter at 24 or 30 degrees C. The core compartment was represented by esophageal, rectal (T(re)), and aural canal temperature, and the shell compartment was represented by a 12-point mean skin temperature (T(sk)). Using T(re) and conventional core-to-shell weightings (X) of 0.66, 0.79, and 0.90, mean DeltaT(b) estimation error (with 95% confidence interval limits in parentheses) for the traditional model was -95.2% (-83.0, -107.3) to -76.6% (-72.8, -80.5) after 10 min and -47.2% (-40.9, -53.5) to -22.6% (-14.5, -30.7) after 90 min. Using T(re), X = 0.80, and a correction factor (X(0)) of 0.40, mean DeltaT(b) estimation error for the adjusted model was +9.5% (+16.9, +2.1) to -0.3% (+11.9, -12.5) after 10 min and +15.0% (+27.2, +2.8) to -13.7% (-4.2, -23.3) after 90 min. Quadratic analyses of calorimetry DeltaT(b) data were subsequently used to derive best-fitting values of X for both models and X(0) for the adjusted model for each measure of core temperature. The best-fitting model at any time point or condition accounted for only 20% of the variation observed in DeltaT(b) for the traditional model and 56% for the adjusted model. In conclusion, throughout exercise the estimation of DeltaT(b) using any measure of core temperature together with mean skin temperature, irrespective of weighting, is inaccurate even with a correction factor customized for the specific conditions.
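
    The two models can be sketched as follows. The traditional weighted sum is standard; the exact way the correction factor X0 enters the adjusted model is assumed here (as an additive term) purely for illustration:

```python
def delta_tb_traditional(d_core, d_skin, x=0.8):
    """Traditional two-compartment estimate:
    dTb = x * dTcore + (1 - x) * dTskin, with core-to-shell weighting x."""
    return x * d_core + (1.0 - x) * d_skin

def delta_tb_adjusted(d_core, d_skin, x=0.8, x0=0.4):
    """Adjusted model with correction factor X0.  NOTE: the additive
    placement of x0 is an assumption for illustration; the paper's
    definition may differ."""
    return x * d_core + (1.0 - x) * d_skin + x0
```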

  12. Improving global estimates of syphilis in pregnancy by diagnostic test type: A systematic review and meta-analysis.

    PubMed

    Ham, D Cal; Lin, Carol; Newman, Lori; Wijesooriya, N Saman; Kamb, Mary

    2015-06-01

"Probable active syphilis" is defined as seroreactivity in both non-treponemal and treponemal tests. A correction factor of 65%, namely the proportion of pregnant women reactive in one syphilis test type who were likely reactive in the second, was applied to syphilis seropositivity data reported to WHO for global estimates of syphilis during pregnancy. The aim was to identify more accurate correction factors based on the test type reported. We searched Medline using: "Syphilis [Mesh] and Pregnancy [Mesh]", "Syphilis [Mesh] and Prenatal Diagnosis [Mesh]", and "Syphilis [Mesh] and Antenatal [Keyword]". Eligible studies must have reported results for pregnant or puerperal women for both non-treponemal and treponemal serology. We manually calculated the crude percent estimates of subjects with both reactive treponemal and reactive non-treponemal tests among subjects with reactive treponemal tests and among subjects with reactive non-treponemal tests. We summarized the percent estimates using random effects models. Countries reporting both reactive non-treponemal and reactive treponemal testing required no correction factor. Countries reporting non-treponemal testing or treponemal testing alone required correction factors of 52.2% and 53.6%, respectively. Countries not reporting test type required a correction factor of 68.6%. Future estimates should adjust reported maternal syphilis seropositivity by test type to ensure accuracy. Published by Elsevier Ireland Ltd.

  13. SU-E-T-123: Anomalous Altitude Effect in Permanent Implant Brachytherapy Seeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watt, E; Spencer, DP; Meyer, T

    Purpose: Permanent seed implant brachytherapy procedures require measurement of the air kerma strength of seeds prior to implant, typically using a well-type ionization chamber. Previous measurements (Griffin et al., 2005; Bohm et al., 2005) of several low-energy seeds using the air-communicating HDR 1000 Plus chamber have demonstrated that the standard temperature-pressure correction factor, P_TP, may overcompensate for air-density changes induced by altitude variations by up to 18%. The purpose of this work is to present empirical correction factors, measured with the HDR 1000 Plus chamber, for two clinically used seeds (IsoAid ADVANTAGE™ (103)Pd and Nucletron selectSeed (125)I) for which empirical altitude correction factors do not yet exist in the literature. Methods: An in-house constructed pressure vessel containing the HDR 1000 Plus well chamber and a digital barometer/thermometer was pressurized or evacuated, as appropriate, to a variety of pressures from 725 to 1075 mbar. Current measurements, corrected with P_TP, were acquired for each seed at these pressures and normalized to the reading at 'standard' pressure (1013.25 mbar). Results: Measurements in this study have shown that P_TP can overcompensate in the corrected current reading by up to 20% for the IsoAid (103)Pd seed and 17% for the Nucletron (125)I seed. Compared to literature correction factors for other seed models, the correction factors in this study diverge by up to 2.6% for iodine (with silver) and 3.0% for palladium, indicating the need for seed-specific factors. Conclusion: The use of seed-specific altitude correction factors can reduce uncertainty in the determination of air kerma strength. The empirical correction factors determined in this work can be applied in clinical quality-assurance measurements of air kerma strength for two previously unpublished seed designs (IsoAid ADVANTAGE™ (103)Pd and Nucletron selectSeed (125)I) with the HDR 1000 Plus well chamber.
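
    For reference, the standard air-density correction whose altitude behaviour the study probes is (reference conditions of 22 °C and 101.325 kPa assumed):

```python
def p_tp(temp_c, pressure_kpa, ref_temp_c=22.0, ref_pressure_kpa=101.325):
    """Standard temperature-pressure correction for an air-communicating
    ionization chamber:

        P_TP = (273.2 + T) / (273.2 + T0) * (P0 / P)

    The abstract's point is that for low-energy brachytherapy seeds this
    ideal-gas correction can overcompensate by up to ~20%, so empirical
    seed-specific factors are needed on top of it.
    """
    return ((273.2 + temp_c) / (273.2 + ref_temp_c)
            * ref_pressure_kpa / pressure_kpa)
```

    At lower ambient pressure (higher altitude) P_TP exceeds 1, scaling the reading up, which is where the observed overcompensation for low-energy photon emitters arises.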

  14. 1/ f noise from the laws of thermodynamics for finite-size fluctuations.

    PubMed

    Chamberlin, Ralph V; Nasir, Derek M

    2014-07-01

    Computer simulations of the Ising model exhibit white noise if thermal fluctuations are governed by Boltzmann's factor alone; whereas we find that the same model exhibits 1/f noise if Boltzmann's factor is extended to include local alignment entropy to all orders. We show that this nonlinear correction maintains maximum entropy during equilibrium fluctuations. Indeed, as with the usual way to resolve Gibbs' paradox that avoids entropy reduction during reversible processes, the correction yields the statistics of indistinguishable particles. The correction also ensures conservation of energy if an instantaneous contribution from local entropy is included. Thus, a common mechanism for 1/f noise comes from assuming that finite-size fluctuations strictly obey the laws of thermodynamics, even in small parts of a large system. Empirical evidence for the model comes from its ability to match the measured temperature dependence of the spectral-density exponents in several metals and to show non-Gaussian fluctuations characteristic of nanoscale systems.

  15. Sandmeier model based topographic correction to lunar spectral profiler (SP) data from KAGUYA satellite.

    PubMed

    Chen, Sheng-Bo; Wang, Jing-Ran; Guo, Peng-Ju; Wang, Ming-Chang

    2014-09-01

The Moon may be considered the frontier base for deep space exploration. Spectral analysis is one of the key techniques for determining the rock and mineral compositions of the lunar surface. But lunar topographic relief is more pronounced than that of the Earth, so topographic correction of lunar spectral data is necessary before they are used to retrieve compositions. In the present paper, a lunar Sandmeier model is proposed that considers the radiance effect of macro and ambient topographic relief, and a reflectance correction model is derived from it. The Spectral Profiler (SP) data from the KAGUYA satellite over the Sinus Iridum quadrangle are taken as an example, and digital elevation data from the Lunar Orbiter Laser Altimeter are used to calculate the slope, aspect, incidence and emergence angles, and the terrain-viewing factor for the topographic correction. The lunar surface reflectance from the SP data was then corrected with the proposed model after the direct component of irradiance on a horizontal surface was derived. As a result, high spectral reflectance on sun-facing slopes is decreased and low spectral reflectance on slopes facing away from the sun is compensated. The statistical histogram of reflectance-corrected pixel numbers presents a Gaussian distribution. Therefore, the model is robust for correcting the lunar topographic effect and estimating lunar surface reflectance.
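
    A much-reduced sketch of the direct-beam part of such a correction (the Sandmeier model additionally treats diffuse irradiance and the terrain-viewing factor, which are omitted here):

```python
import math

def cosine_topographic_correction(reflectance, solar_zenith_deg,
                                  slope_deg, aspect_deg, solar_azimuth_deg):
    """Scale reflectance by cos(solar zenith) / cos(local incidence angle)
    on a tilted surface (direct-beam term only).

    The local incidence angle i on a slope follows from spherical geometry:
        cos i = cos(tz) cos(s) + sin(tz) sin(s) cos(az_sun - az_slope)
    """
    tz = math.radians(solar_zenith_deg)
    sl = math.radians(slope_deg)
    rel_az = math.radians(solar_azimuth_deg - aspect_deg)
    cos_i = (math.cos(tz) * math.cos(sl)
             + math.sin(tz) * math.sin(sl) * math.cos(rel_az))
    if cos_i <= 0.0:
        raise ValueError("surface not illuminated by the direct beam")
    return reflectance * math.cos(tz) / cos_i
```

    As in the abstract, sun-facing slopes (cos i > cos tz) are corrected downward and slopes tilted away from the sun are corrected upward.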

  16. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.

  17. The Kroll-Lee-Zumino Model and Pion Form Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dominguez, C. A.; Loewe, M.

    2010-08-04

At the one-loop level, we make use of the renormalizable Abelian quantum field theory model of Kroll, Lee, and Zumino (KLZ) to compute the vertex corrections to the tree-level, Vector Meson Dominance (VMD) electromagnetic pion form factor. This result, together with the one-loop vacuum polarization contribution, implies an electromagnetic pion form factor in outstanding agreement with data over the whole range of accessible momentum transfers in the space-like region. The time-like form factor, which reproduces the Gounaris-Sakurai formula at and near the rho-meson peak, remains unaffected by the vertex correction at order O(g^2). We also use the KLZ model to compute the pion scalar radius at the one-loop level, finding S = 0.40 fm^2. From this value we find the chiral perturbation theory low-energy constant l_4 = 3.4.

  18. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    PubMed

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Revisions to some parameters used in stochastic-method simulations of ground motion

    USGS Publications Warehouse

    Boore, David; Thompson, Eric M.

    2015-01-01

    The stochastic method of ground‐motion simulation specifies the amplitude spectrum as a function of magnitude (M) and distance (R). The manner in which the amplitude spectrum varies with M and R depends on physical‐based parameters that are often constrained by recorded motions for a particular region (e.g., stress parameter, geometrical spreading, quality factor, and crustal amplifications), which we refer to as the seismological model. The remaining ingredient for the stochastic method is the ground‐motion duration. Although the duration obviously affects the character of the ground motion in the time domain, it also significantly affects the response of a single‐degree‐of‐freedom oscillator. Recently published updates to the stochastic method include a new generalized double‐corner‐frequency source model, a new finite‐fault correction, a new parameterization of duration, and a new duration model for active crustal regions. In this article, we augment these updates with a new crustal amplification model and a new duration model for stable continental regions. Random‐vibration theory (RVT) provides a computationally efficient method to compute the peak oscillator response directly from the ground‐motion amplitude spectrum and duration. Because the correction factor used to account for the nonstationarity of the ground motion depends on the ground‐motion amplitude spectrum and duration, we also present new RVT correction factors for both active and stable regions.

  20. The photon fluence non-uniformity correction for air kerma near Cs-137 brachytherapy sources.

    PubMed

    Rodríguez, M L; deAlmeida, C E

    2004-05-07

    The use of brachytherapy sources in radiation oncology requires their proper calibration to guarantee the correctness of the dose delivered to the treatment volume of a patient. One of the elements to take into account in the dose calculation formalism is the non-uniformity of the photon fluence due to the beam divergence that causes a steep dose gradient near the source. The correction factors for this phenomenon have been usually evaluated by the two theories available, both of which were conceived only for point sources. This work presents the Monte Carlo assessment of the non-uniformity correction factors for a Cs-137 linear source and a Farmer-type ionization chamber. The results have clearly demonstrated that for linear sources there are some important differences among the values obtained from different calculation models, especially at short distances from the source. The use of experimental values for each specific source geometry is recommended in order to assess the non-uniformity factors for linear sources in clinical situations that require special dose calculations or when the correctness of treatment planning software is verified during the acceptance tests.

  1. On-road heavy-duty diesel particulate matter emissions modeled using chassis dynamometer data.

    PubMed

    Kear, Tom; Niemeier, D A

    2006-12-15

    This study presents a model, derived from chassis dynamometer test data, for factors (operational correction factors, or OCFs) that correct (g/mi) heavy-duty diesel particle emission rates measured on standard test cycles for real-world conditions. Using a random effects mixed regression model with data from 531 tests of 34 heavy-duty vehicles from the Coordinating Research Council's E55/E59 research project, we specify a model with covariates that characterize high power transient driving, time spent idling, and average speed. Gram per mile particle emissions rates were negatively correlated with high power transient driving, average speed, and time idling. The new model is capable of predicting relative changes in g/mi on-road heavy-duty diesel particle emission rates for real-world driving conditions that are not reflected in the driving cycles used to test heavy-duty vehicles.

  2. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed by water to graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, k_fl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general the fluence correction factors are found to be close to unity and the analytical and Monte Carlo codes give consistent values when considering the differences in secondary particle transport. When considering only protons the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths predominantly due to the contributions from alpha particles and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by k_fl = 0.9964 + 0.0024·z_w-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by k_fl = 0.9947 + 0.0024·z_w-eq with a relative standard uncertainty of 0.3%.
These results are of direct relevance to graphite calorimetry in low-energy protons, but given that the fluence correction factor is almost solely influenced by non-elastic nuclear interactions, the results are also relevant for plastic phantoms consisting of carbon, oxygen and hydrogen atoms, as well as for soft tissues.
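The two linear fits quoted in the abstract can be evaluated directly; a minimal sketch, using the published coefficients (zw-eq is the water-equivalent depth; function names are illustrative):

```python
# Evaluate the two published linear fits for the fluence correction factor
# k_fl as a function of water-equivalent depth z_w_eq (coefficients taken
# verbatim from the abstract; relative standard uncertainties 0.2%/0.3%).

def kfl_from_fluence(z_w_eq):
    """k_fl derived from fluence distributions differential in energy."""
    return 0.9964 + 0.0024 * z_w_eq

def kfl_from_dose_ratio(z_w_eq):
    """k_fl derived from a ratio of calculated doses."""
    return 0.9947 + 0.0024 * z_w_eq

for z in (0.0, 1.0, 2.0, 3.0):
    print(z, round(kfl_from_fluence(z), 4), round(kfl_from_dose_ratio(z), 4))
```

Both fits are below unity at the surface and cross unity with depth, consistent with the alpha-particle contribution described in the abstract.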

  3. Student Understanding of the Boltzmann Factor

    ERIC Educational Resources Information Center

    Smith, Trevor I.; Mountcastle, Donald B.; Thompson, John R.

    2015-01-01

    We present results of our investigation into student understanding of the physical significance and utility of the Boltzmann factor in several simple models. We identify various justifications, both correct and incorrect, that students use when answering written questions that require application of the Boltzmann factor. Results from written data…

  4. Output factor determination based on Monte Carlo simulation for small cone field in 10-MV photon beam.

    PubMed

    Fukata, Kyohei; Sugimoto, Satoru; Kurokawa, Chie; Saito, Akito; Inoue, Tatsuya; Sasai, Keisuke

    2018-06-01

The difficulty of measuring the output factor (OPF) in a small field has been frequently discussed in recent publications. This study aimed to determine the OPF in small fields formed by a 10-MV photon beam and stereotactic conical collimators (cones). The OPF was measured with two diode detectors (SFD, EDGE detector) and one micro-ionization chamber (PinPoint 3D chamber) in a water phantom. A Monte Carlo simulation using a simplified detector model was performed to obtain correction factors for the detector measurements. An OPF difference of about 12% was observed between the EDGE detector and the PinPoint 3D chamber at the smallest field (7.5 mm diameter). By applying the Monte Carlo-based correction factors to the measurements, the maximum discrepancy among the three detectors was reduced to within 3%. The results indicate that determination of the OPF in a small field should be performed carefully; in particular, detector choice and application of an appropriate correction factor are very important.
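The correction described above amounts to scaling the ratio of detector readings by a detector-specific factor. A minimal sketch, with illustrative numbers (not values from the study):

```python
# Hedged sketch: a small-field output factor (OPF) is the detector reading
# in the test field over the reading in a reference field, scaled by a
# Monte Carlo-derived detector correction factor for each field.
def corrected_opf(m_field, m_ref, k_field, k_ref=1.0):
    """OPF = (M_field / M_ref) * (k_field / k_ref)."""
    return (m_field / m_ref) * (k_field / k_ref)

# e.g. a diode over-responding in the smallest cone gets a factor k < 1
print(corrected_opf(0.56, 1.00, 0.95))
```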

  5. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  6. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  7. The Impact of Individual and Institutional Factors on Turnover Intent Among Taiwanese Correctional Staff.

    PubMed

    Lai, Yung-Lien

    2017-01-01

The existing literature on turnover intent among correctional staff, conducted in Western societies, focuses on the impact of individual-level factors; the possible effects of institutional contexts have been largely overlooked. Moreover, the relationships of various multidimensional conceptualizations of both job satisfaction and organizational commitment to turnover intent are still largely unknown. Using data collected by a self-reported survey of 676 custody staff employed in 22 Taiwanese correctional facilities in April 2011, the present study expands upon theoretical models developed in Western societies and examines the effects of both individual and institutional factors on turnover intent simultaneously. Results from the use of the hierarchical linear modeling (HLM) statistical method indicate that, at the individual level, supervisory versus non-supervisory status, job stress, job dangerousness, job satisfaction, and organizational commitment consistently produce a significant association with turnover intent after controlling for personal characteristics. Specifically, three distinct forms of organizational commitment demonstrated an inverse impact on turnover intent. Among institutional-level variables, custody staff from larger facilities reported a higher likelihood of thinking about quitting their job. © The Author(s) 2015.

  8. Topside correction of IRI by global modeling of ionospheric scale height using COSMIC radio occultation data

    NASA Astrophysics Data System (ADS)

    Wu, M. J.; Guo, P.; Fu, N. F.; Xu, T. L.; Xu, X. S.; Jin, H. L.; Hu, X. G.

    2016-06-01

The ionospheric scale height is one of the most significant ionospheric parameters; it contains information about ion and electron temperatures and dynamics in the upper ionosphere. In this paper, an empirical orthogonal function (EOF) analysis method is applied to process all the ionospheric radio occultations of GPS/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) from 2007 to 2011 to reconstruct a global ionospheric scale height model. This monthly median model has a spatial resolution of 5° in geomagnetic latitude (-87.5° ~ 87.5°) and a temporal resolution of 2 h in local time. The EOF analysis preserves the characteristics of the scale height quite well in its geomagnetic latitudinal, annual, seasonal, and diurnal variations. In comparison with COSMIC measurements from 2012, the reconstructed model shows reasonable accuracy. In order to improve the topside model of the International Reference Ionosphere (IRI), we adopt the scale height model in the Bent topside model by applying a scale factor q as an additional constraint. With the factor q acting in the exponential profile of the topside ionosphere, the IRI scale height is forced to equal the precise COSMIC measurements. In this way, the IRI topside profile can be improved to better match realistic density profiles. An internal quality check of this approach is carried out by comparing COSMIC measurements with IRI with and without the correction, respectively. In general, the initial IRI model overestimates the topside electron density to some extent, and with the correction introduced by the COSMIC scale height model, the deviation of vertical total electron content (VTEC) between them is reduced. Furthermore, independent validation against Global Ionospheric Maps VTEC implies a reasonable improvement in the IRI VTEC with the topside model correction.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackenzie, Alistair, E-mail: alistairmackenzie@nhs.net; Dance, David R.; Young, Kenneth C.

Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic, at each spatial frequency, of the NPS against E. A quantum noise correction factor dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to differ from the beam quality of the image. The method was validated by adapting ASEh flat-field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. Results: The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response against mean photon energy was found.
This is due to the dominance of secondary quantum noise in CR. The use of the quantum noise correction factor reduced the difference between the model and the measured NPS to generally within 4%. The quantum noise correction improved the conversion of ASEh images to CRc images but made no difference for the conversion to CSI images. Conclusions: A practical method for estimating the NPS at any dose and over a range of beam qualities in mammography has been demonstrated. The noise model was incorporated into a methodology for converting an image to appear as if acquired on a different detector. The method can now be extended to work for a wide range of beam qualities and can be applied to the conversion of mammograms.
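The three-source decomposition described above can be sketched numerically: at one spatial frequency, the NPS is fitted as a quadratic in E, whose constant, linear, and quadratic coefficients correspond to electronic, quantum, and structure noise. The data below are synthetic, not from the study:

```python
import numpy as np

# Synthetic NPS values at a single spatial frequency for several absorbed
# energies per unit area E; the true decomposition is 0.5 (electronic) +
# 0.3*E (quantum) + 0.01*E^2 (structure).
E = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
nps = 0.5 + 0.3 * E + 0.01 * E**2

# Quadratic fit recovers the three noise contributions.
c2, c1, c0 = np.polyfit(E, nps, 2)   # structure, quantum, electronic
print(c0, c1, c2)
```

In the paper's method this fit is repeated at every spatial frequency, and the quantum (linear) term is then scaled by the beam-quality-dependent correction factor.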

  10. Ion penetration depth in the plant cell wall

    NASA Astrophysics Data System (ADS)

    Yu, L. D.; Vilaithong, T.; Phanchaisri, B.; Apavatjrut, P.; Anuntalabhochai, S.; Evans, P.; Brown, I. G.

    2003-05-01

This study investigates the depth of ion penetration in plant cell wall material. Based on the biological structure of the plant cell wall, a physical model is proposed which assumes that the wall is composed of randomly oriented layers of cylindrical microfibrils made from cellulose molecules (C6H12O6). With this model, we have determined numerical factors for ion implantation in the plant cell wall that correct values calculated from conventional ion implantation programs. Using these correction factors, it is possible to apply common ion implantation programs to estimate the ion penetration depth in the cell for bioengineering purposes. These estimates are compared with measured data from experiments, and good agreement is achieved.

  11. Correcting pervasive errors in RNA crystallography through enumerative structure prediction.

    PubMed

    Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju

    2013-01-01

    Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.

  12. A study of ionospheric grid modification technique for BDS/GPS receiver

    NASA Astrophysics Data System (ADS)

    Liu, Xuelin; Li, Meina; Zhang, Lei

    2017-07-01

For a single-frequency GPS receiver, ionospheric delay is an important factor affecting positioning performance. There are many kinds of ionospheric correction methods; common models include the Bent, IRI, Klobuchar, and NeQuick models. The US Global Positioning System (GPS) uses the Klobuchar coefficients transmitted in the satellite signal to correct the ionospheric delay error for a single-frequency receiver, but this model can only reduce the ionospheric error by about 50% in the mid-latitudes. In the BeiDou system, the accuracy of the broadcast ionospheric correction is higher. Therefore, this paper proposes a method that uses BeiDou grid ionospheric information to correct the GPS ionospheric delay in a BDS/GPS compatible positioning receiver. The principle of the ionospheric grid algorithm is introduced in detail, and the positioning accuracy of the GPS-only system and the BDS/GPS compatible positioning system is compared and analyzed using real measured data. The results show that the method can effectively improve the positioning accuracy of the receiver in a concise way.

  13. Spectral dependence on the correction factor of erythemal UV for cloud, aerosol, total ozone, and surface properties: A modeling study

    NASA Astrophysics Data System (ADS)

    Park, Sang Seo; Jung, Yeonjin; Lee, Yun Gon

    2016-07-01

    Radiative transfer model simulations were used to investigate the erythemal ultraviolet (EUV) correction factors by separating the UV-A and UV-B spectral ranges. The correction factor was defined as the ratio of EUV caused by changing the amounts and characteristics of the extinction and scattering materials. The EUV correction factors (CFEUV) for UV-A [CFEUV(A)] and UV-B [CFEUV(B)] were affected by changes in the total ozone, optical depths of aerosol and cloud, and the solar zenith angle. The differences between CFEUV(A) and CFEUV(B) were also estimated as a function of solar zenith angle, the optical depths of aerosol and cloud, and total ozone. The differences between CFEUV(A) and CFEUV(B) ranged from -5.0% to 25.0% for aerosols, and from -9.5% to 2.0% for clouds in all simulations for different solar zenith angles and optical depths of aerosol and cloud. The rate of decline of CFEUV per unit optical depth between UV-A and UV-B differed by up to 20% for the same aerosol and cloud conditions. For total ozone, the variation in CFEUV(A) was negligible compared with that in CFEUV(B) because of the effective spectral range of the ozone absorption band. In addition, the sensitivity of the CFEUVs due to changes in surface conditions (i.e., surface albedo and surface altitude) was also estimated by using the model in this study. For changes in surface albedo, the sensitivity of the CFEUVs was 2.9%-4.1% per 0.1 albedo change, depending on the amount of aerosols or clouds. For changes in surface altitude, the sensitivity of CFEUV(B) was twice that of CFEUV(A), because the Rayleigh optical depth increased significantly at shorter wavelengths.

  14. Mosquito population dynamics from cellular automata-based simulation

    NASA Astrophysics Data System (ADS)

    Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning

    2016-02-01

In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consists of two stages: demography and dispersal dynamics. For the demography simulation, we follow an existing model of the mosquito life cycle. For dispersal of the vector, we use a cellular automata-based model. In the simulation, each individual vector can move to other grid cells via a random walk. Our model can also represent an immunity factor for each grid cell. We ran simulations to evaluate the model's correctness and conclude that the model behaves correctly. However, the model still needs realistic parameter values in order to match real data.
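The dispersal stage described above can be sketched as a random walk on a bounded grid; this is a minimal illustration (not the authors' code), with a hypothetical per-cell "immunity" value interpreted as the probability that a vector entering that cell is removed:

```python
import random

def disperse(positions, size, immunity, rng):
    """One cellular-automaton dispersal step: each vector takes one random-
    walk move on a size x size grid, then survives with probability
    1 - immunity of its new cell (immunity defaults to 0)."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]
    survivors = []
    for x, y in positions:
        dx, dy = rng.choice(moves)
        x = min(max(x + dx, 0), size - 1)   # clamp to stay on the grid
        y = min(max(y + dy, 0), size - 1)
        if rng.random() >= immunity.get((x, y), 0.0):
            survivors.append((x, y))
    return survivors

rng = random.Random(0)
population = [(5, 5)] * 100        # all vectors start at the grid center
for _ in range(10):
    population = disperse(population, 11, {}, rng)
print(len(population))
```

With an empty immunity map every vector survives, so ten steps only spread the population outward from the center.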

  15. Monte Carlo modeling of fluorescence in semi-infinite turbid media

    NASA Astrophysics Data System (ADS)

    Ong, Yi Hong; Finlay, Jarod C.; Zhu, Timothy C.

    2018-02-01

The incident field size and the interplay of absorption and scattering can influence the in-vivo light fluence rate distribution and complicate the absolute quantification of fluorophore concentration in-vivo. In this study, we use Monte Carlo simulations to evaluate the effect of incident beam radius and optical properties on the fluorescence signal collected by an isotropic detector placed on the tissue surface. The optical properties at the excitation and emission wavelengths are assumed to be identical. We compute correction factors to correct the fluorescence intensity for variations due to incident field size and optical properties. The correction factors are fitted to a four-parameter empirical correction function, and the changes in each parameter are compared for various beam radii over a range of physiologically relevant tissue optical properties (μa = 0.1-1 cm-1, μs' = 5-40 cm-1).

  16. Experimental evaluation of a mathematical model for predicting transfer efficiency of a high volume-low pressure air spray gun.

    PubMed

    Tan, Y M; Flynn, M R

    2000-10-01

The transfer efficiency of a spray-painting gun is defined as the amount of coating applied to the workpiece divided by the amount sprayed. Characterizing this transfer process allows accurate estimation of the overspray generation rate, which is important for determining a spray painter's exposure to airborne contaminants. This study presents an experimental evaluation of a mathematical model for predicting the transfer efficiency of a high volume-low pressure spray gun. The effects of gun-to-surface distance and nozzle pressure on the agreement between measured and predicted transfer efficiency were examined. Transfer efficiency was determined in wind tunnel studies, using non-volatile vacuum pump oil in place of commercial paint, at nine gun-to-surface distances and four nozzle pressure levels. The mathematical model successfully predicts transfer efficiency within the uncertainty limits. The least-squares regression between measured and predicted transfer efficiency has a slope of 0.83 and an intercept of 0.12 (R2 = 0.98). Two correction factors were determined to improve the mathematical model. At the higher nozzle pressure settings, 6.5 psig and 5.5 psig, the correction factor is a function of both gun-to-surface distance and nozzle pressure level. At the lower nozzle pressures, 4 psig and 2.75 psig, gun-to-surface distance slightly influences the correction factor, while nozzle pressure has no discernible effect.
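The reported regression can be read as a first-order correction of the model's prediction; a minimal sketch using only the published slope and intercept (the study's actual correction factors additionally depend on distance and pressure):

```python
# Map a model-predicted transfer efficiency onto the measured scale using
# the reported least-squares fit: measured = 0.83 * predicted + 0.12
# (R^2 = 0.98). Illustrative use of the abstract's numbers only.
def corrected_transfer_efficiency(predicted):
    return 0.83 * predicted + 0.12

print(corrected_transfer_efficiency(0.5))
```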

  17. Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Presented here are comparisons between MC results, experimental electron-probe microanalysis (EPMA) measurements, and phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both the experimental data and the phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductors and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors from the certified compositions in the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; the derived x-ray intensities have a built-in atomic number correction and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.

  18. The accuracy of parent-reported height and weight for 6-12 year old U.S. children.

    PubMed

    Wright, Davene R; Glanz, Karen; Colburn, Trina; Robson, Shannon M; Saelens, Brian E

    2018-02-12

Previous studies have examined correlations between BMI calculated using parent-reported and directly measured child height and weight. The objective of this study was to validate correction factors for parent-reported child measurements. Concordance between parent-reported and investigator-measured child height, weight, and BMI (kg/m2) among participants in the Neighborhood Impact on Kids Study (n = 616) was examined using the Lin coefficient, where a value of ±1.0 indicates perfect concordance and a value of zero denotes non-concordance. A correction model for parent-reported height, weight, and BMI based on commonly collected demographic information was developed using 75% of the sample. This model was used to estimate corrected measures for the remaining 25% of the sample, and concordance between corrected parent-reported and investigator-measured values was assessed. The accuracy of corrected values in classifying children as overweight/obese was assessed by sensitivity and specificity. Concordance between parent-reported and measured height, weight, and BMI was low (0.007, -0.039, and -0.005, respectively). Concordance in the corrected test samples improved to 0.752 for height, 0.616 for weight, and 0.227 for BMI. Sensitivity of corrected parent-reported measures for predicting overweight and obesity among children in the test sample decreased from 42.8% to 25.6%, while specificity improved from 79.5% to 88.6%. Correction factors improved concordance for height and weight but did not improve the sensitivity of parent-reported measures for detecting child overweight and obesity. Future research should use larger and more nationally representative samples that allow researchers to fully explore demographic variance in correction coefficients.
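The train-on-75%/apply-to-25% design above can be sketched with a simple regression correction. This is a toy illustration with synthetic data and a single predictor (the study's model also used demographic covariates):

```python
import numpy as np

# Synthetic example: parents systematically over-report height by ~2 cm
# with noise; a linear correction fitted on 75% of the sample is applied
# to the held-out 25%. All numbers are illustrative.
rng = np.random.default_rng(0)
measured = rng.uniform(110.0, 160.0, 200)            # investigator-measured, cm
reported = measured + rng.normal(2.0, 3.0, 200)      # parent-reported, cm

train, test = slice(0, 150), slice(150, 200)         # 75% / 25% split
slope, intercept = np.polyfit(reported[train], measured[train], 1)
corrected = slope * reported[test] + intercept

bias_before = float(np.mean(reported[test] - measured[test]))
bias_after = float(np.mean(corrected - measured[test]))
print(bias_before, bias_after)
```

As in the paper, such a correction can shrink the systematic bias while still leaving classification (e.g. overweight cut-offs) imperfect, since random error is not removed.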

  19. SU-E-T-552: Monte Carlo Calculation of Correction Factors for a Free-Air Ionization Chamber in Support of a National Air-Kerma Standard for Electronic Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Bergstrom, P

    2015-06-15

Purpose: To use Monte Carlo radiation transport methods to calculate correction factors for a free-air ionization chamber in support of a national air-kerma standard for low-energy, miniature x-ray sources used for electronic brachytherapy (eBx). Methods: NIST is establishing a calibration service for well-type ionization chambers used to characterize the strength of eBx sources prior to clinical use. The calibration approach involves establishing the well-chamber's response to an eBx source whose air-kerma rate at a 50 cm distance is determined through a primary measurement performed using the Lamperti free-air ionization chamber. However, the free-air chamber measurements of charge or current can only be related to the reference air-kerma standard after applying several corrections, some of which are best determined via Monte Carlo simulation. To this end, a detailed geometric model of the Lamperti chamber was developed in the EGSnrc code based on the engineering drawings of the instrument. The egs-fac user code in EGSnrc was then used to calculate energy-dependent correction factors which account for missing or undesired ionization arising from effects such as: (1) attenuation and scatter of the x-rays in air; (2) primary electrons escaping the charge collection region; (3) lack of charged particle equilibrium; (4) atomic fluorescence and bremsstrahlung radiation. Results: Energy-dependent correction factors were calculated assuming a monoenergetic point source with the photon energy ranging from 2 keV to 60 keV in 2 keV increments. Sufficient photon histories were simulated so that the Monte Carlo statistical uncertainty of the correction factors was less than 0.01%. The correction factors for a specific eBx source will be determined by integrating these tabulated results over its measured x-ray spectrum.
Conclusion: The correction factors calculated in this work are important for establishing a national standard for eBx, which will help ensure that dose is accurately and consistently delivered to patients.« less
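The final step described above, folding tabulated monoenergetic factors into a single source-specific factor, can be sketched as a spectrum-weighted average. All numeric values here are illustrative, not the NIST tabulation:

```python
import numpy as np

# Tabulated correction factors k(E) on the abstract's 2-60 keV, 2 keV grid
# (made-up values), combined with a measured spectrum given as discrete
# energies and relative fluence weights.
energies = np.arange(2, 62, 2)                      # keV grid, 2..60
k_table = 1.0 + 0.001 * (energies / 60.0)           # illustrative k(E)

spec_E = np.array([20.0, 30.0, 40.0])               # spectrum energies, keV
spec_w = np.array([0.2, 0.5, 0.3])                  # relative fluence weights

# Interpolate k(E) at the spectrum energies and take the weighted mean.
k_eff = float(np.sum(np.interp(spec_E, energies, k_table) * spec_w) / spec_w.sum())
print(k_eff)
```

A real implementation would weight by the air-kerma spectrum rather than raw fluence; the structure of the calculation is the same.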

  20. Rigidice Model of Secondary Frost Heave

    DOT National Transportation Integrated Search

    1995-05-01

    A revised version of an earlier attempt to numerically solve Miller's equations for the RIGIDICE model of frost heave is presented that corrects earlier mistakes and incorporates recent improvements in the scaling factors of ground freezing. The new ...

  1. Modeling bias and variation in the stochastic processes of small RNA sequencing

    PubMed Central

    Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-01-01

The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
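The linear-quadratic mean-variance relation mentioned above (var = a·mean + b·mean², the familiar negative-binomial-like form) can be illustrated by fitting a and b from replicate counts. This is a synthetic sketch, not the paper's GAMLSS fit:

```python
import numpy as np

# Simulate replicate "counts" at several mean levels with variance
# var = a*mean + b*mean**2, then recover a and b by least squares.
rng = np.random.default_rng(1)
a_true, b_true = 1.0, 0.05
means = np.array([10.0, 50.0, 200.0, 1000.0])
samples = [rng.normal(m, np.sqrt(a_true * m + b_true * m**2), 5000)
           for m in means]

emp_mean = np.array([s.mean() for s in samples])
emp_var = np.array([s.var() for s in samples])

# Solve var = a*mean + b*mean^2 in the least-squares sense.
A = np.column_stack([emp_mean, emp_mean**2])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, emp_var, rcond=None)
print(a_fit, b_fit)
```

The quadratic (overdispersion) term b is what distinguishes this model from pure Poisson noise, for which var = mean.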

  2. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.

  3. Corrected goodness-of-fit test in covariance structure analysis.

    PubMed

    Hayakawa, Kazuhiko

    2018-05-17

    Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. A groundwater data assimilation application study in the Heihe mid-reach

    NASA Astrophysics Data System (ADS)

    Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.

    2017-12-01

The present work focuses on modelling groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. To optimise water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations were corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) were tested against each other. Results show a high dependency of the Ensemble Kalman Filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. For this groundwater system with constant or slowly changing parameter values, asynchronous updating of parameter values once every five years (with data of the past five years) combined with synchronous updating of the groundwater levels is better suited than synchronous updating of both groundwater levels and parameters at every time step with a damping factor. The filter is not able to correct time lags of signals.
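A minimal sketch of the Ensemble Kalman Filter analysis step with the damping factor mentioned above (this is the generic stochastic-EnKF update, not the authors' Matlab code; all names are illustrative):

```python
import numpy as np

def enkf_update(ens, obs, obs_std, H, damping=0.5, rng=None):
    """One stochastic EnKF analysis step with a damped increment.

    ens: (n_state, n_ens) ensemble; H: (n_obs, n_state) observation
    operator; obs: (n_obs,) observations; damping in (0, 1] scales the
    update to mitigate filter inbreeding.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n_obs, n_ens = H.shape[0], ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)           # ensemble anomalies
    HA = H @ A
    P_hh = HA @ HA.T / (n_ens - 1) + obs_std**2 * np.eye(n_obs)
    K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_hh)  # Kalman gain
    perturbed_obs = obs[:, None] + rng.normal(0.0, obs_std, (n_obs, n_ens))
    return ens + damping * (K @ (perturbed_obs - H @ ens))

# One scalar "groundwater level" observed directly: the ensemble mean is
# pulled toward the observation, by half the full increment (damping=0.5).
rng = np.random.default_rng(1)
ens0 = rng.normal(0.0, 1.0, (1, 50))
ens1 = enkf_update(ens0, np.array([5.0]), 0.1, np.array([[1.0]]), damping=0.5)
print(ens0.mean(), ens1.mean())
```

Covariance inflation and spatial localization would enter as, respectively, a scaling of A and an element-wise taper on the gain; both are omitted here for brevity.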

  5. Uncertainty of relative sensitivity factors in glow discharge mass spectrometry

    NASA Astrophysics Data System (ADS)

    Meija, Juris; Methven, Brad; Sturgeon, Ralph E.

    2017-10-01

    The concept of the relative sensitivity factors required for the correction of the measured ion beam ratios in pin-cell glow discharge mass spectrometry is examined in detail. We propose a data-driven model for predicting the relative response factors, which relies on a non-linear least squares adjustment and analyte/matrix interchangeability phenomena. The model provides a self-consistent set of response factors for any analyte/matrix combination of any element that appears as either an analyte or matrix in at least one known response factor.
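The analyte/matrix interchangeability idea above can be illustrated with the simplest self-consistent model, RSF(analyte, matrix) = f[analyte] / f[matrix], fitted to known factors by least squares in log space. The elements and RSF values below are illustrative, not measured data:

```python
import numpy as np

elements = ["Fe", "Cu", "Ni"]
idx = {e: i for i, e in enumerate(elements)}
# (analyte, matrix, RSF) triples; chosen self-consistent: 1.2/0.8 = 1.5
known = [("Fe", "Cu", 1.2), ("Ni", "Cu", 0.8), ("Fe", "Ni", 1.5)]

# log RSF(a, m) = log f[a] - log f[m]  ->  linear system in log f
A = np.zeros((len(known), len(elements)))
b = np.zeros(len(known))
for row, (a, m, rsf) in enumerate(known):
    A[row, idx[a]] = 1.0
    A[row, idx[m]] = -1.0
    b[row] = np.log(rsf)

# The factors are defined only up to a common scale; fix f[Cu] = 1.
A = np.vstack([A, np.eye(len(elements))[idx["Cu"]]])
b = np.append(b, 0.0)

logf, *_ = np.linalg.lstsq(A, b, rcond=None)
f = np.exp(logf)
print({e: round(float(f[idx[e]]), 4) for e in elements})
```

Once the per-element factors are fitted, the model predicts the RSF for any analyte/matrix pair, including combinations never measured directly (here, RSF(Ni, Fe) = f[Ni]/f[Fe]).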

  6. Underwater and Dive Station Work-Site Noise Surveys

    DTIC Science & Technology

    2008-03-14

dB(A) octave band noise measurements, dB(A) correction factors, dB(A) levels, MK-21 diving helmet attenuation correction factors, and overall in-helmet dB(A) levels.

  7. SU-E-T-17: A Mathematical Model for PinPoint Chamber Correction in Measuring Small Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T; Zhang, Y; Li, X

    2014-06-01

    Purpose: For small field dosimetry, such as measuring the cone output factor for stereotactic radiosurgery, ion chambers often underestimate the dose, due to both the volume averaging effect and the lack of electron equilibrium. The purpose of this work is to develop a mathematical model, specifically for the PinPoint chamber, to calculate the correction factors for different types of small fields, including single cone-based circular fields and non-standard composite fields. Methods: A PTW 0.015 cc PinPoint chamber was used in the study. Its response in a given field was modeled as the total contribution of many small beamlets, each with a different response factor depending on the relative strength, the radial distance to the chamber axis, and the beam angle. To obtain these factors, 12 cone-shaped circular fields (5 mm, 7.5 mm, 10 mm, 12.5 mm, 15 mm, 20 mm, 25 mm, 30 mm, 35 mm, 40 mm, 50 mm, 60 mm) were irradiated and measured with the PinPoint chamber. For each field size, hundreds of readings were recorded, one for every 2 mm chamber shift in the horizontal plane. These readings were then compared with the theoretical doses obtained with Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to determine the beamlet response factors. After the parameter fitting, the established mathematical model was validated with the same MC code for other, non-circular fields. Results: The optimization algorithm used for parameter fitting was stable, and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber readings matched the Monte Carlo calculations for all tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurement of small fields. The current model is applicable only when the beam axis is perpendicular to the chamber axis. It can be applied to non-standard composite fields. 
Further validation with other types of detectors is being conducted.
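The penalized least-squares fit described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the beamlet weight matrix layout, the first-difference smoothness penalty, and all names are assumptions.

```python
import numpy as np

def fit_response_factors(A, readings, lam=1e-3):
    """Penalized least squares: minimize ||A r - m||^2 + lam * ||D r||^2.

    A        -- (n_measurements, n_beamlets) matrix of beamlet weights
                (relative strength of each beamlet at each chamber position)
    readings -- (n_measurements,) chamber readings m
    lam      -- penalty weight; the ||D r||^2 term favors response factors
                that vary smoothly between neighboring beamlets
    """
    n = A.shape[1]
    # First-difference operator: penalizes jumps between adjacent beamlets
    D = (np.eye(n) - np.eye(n, k=1))[:-1]
    lhs = A.T @ A + lam * (D.T @ D)
    rhs = A.T @ readings
    return np.linalg.solve(lhs, rhs)
```

With a small `lam` and well-conditioned data this reduces to ordinary least squares; increasing `lam` trades fit accuracy for spatial smoothness of the recovered factors.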

  8. High Energy Phenomenology - Proceedings of the Workshop

    NASA Astrophysics Data System (ADS)

    Pérez, Miguel A.; Huerta, Rodrigo

    1992-06-01

    The Table of Contents for the full book PDF is as follows: * Preface * Radiative Corrections in the Electroweak Standard Model * Introduction * The Electroweak Standard Model and its Renormalization * Basic Properties of the Standard Model * Renormalization of the Standard Model * Calculation of Radiative Corrections * One-Loop Integrals * Corrected Matrix Elements and Cross Sections * Photonic Corrections * Physical Applications and Results * Parameter Relations in Higher Orders * Decay Widths * Z Physics * W-Pair Production * Higgs Production in e+e- Annihilation * Conclusion * Appendix: Feynman Rules * References * Hadron Collider Physics * Introduction * e+ e- Annihilation * The Standard Model * The Drell-Yan Process in Hadronic Collisions * The Structure Functions * Hadronic Z Production * Hadronic W Production * The Transverse Mass * Quark Decays of W's * Weak Interactions * Neutrino Scattering * Weak Neutral Currents * The Standard Model * Symmetries and Lagrangians * Spontaneous Symmetry Breaking * The Standard Model Again * Experimental Situation * Appendix * References * Lectures on Heavy Quark Effective Theory * Introduction * Motivation * Physical Intuition * The Heavy Quark Effective Theory * The Effective Lagrangian and its Feynman Rules * What is an Effective Theory? 
* The Effective Theory Beyond Tree Level * External Currents * Leading-Logs or No Leading-Logs; A digression * Sample Calculations * Symmetries * Flavor-SU(N) * Spin-SU(2) * Spectrum * Strong Transitions * Covariant Representation of States * Meson Decay Constants * Preliminaries * Formal Derivation: Green Functions * Quick and Dirty Derivation: States in the HQET * Vector Meson Decay Constant * Corrections * Form Factors in B̄ → D e ν and B̄ → D* e ν * Preliminaries * Form Factors in the HQET * Form Factors in order α_s * 1/M_Q * The Correcting Lagrangian * The Corrected Currents * Corrections of order m_c/m_b * Corrections of order Λ̄/m_c and Λ̄/m_c * Conclusions and More * Inclusive Semileptonic Decay Rates * B̄ → π e ν̄ and B̄ → π e ν̄ * Rare B̄ Decays * e⁺e⁻ → B B̄ * Λ_b → Λ_c D_s vs Λ_b → Λ_c D*_s * Factorization * A Last Word (or Two) * References * An Overview of Nonleptonic Decays of B, D, K Mesons and CP-Noninvariance * Generic Ways to Study Nonleptonic Decays and CP-Noninvariance * The Quark-Diagram Scheme * Invariants of the CKM and the Universal Decay-Amplitude CP-Noninvariance Factor X_cp * Implications of Measuring Partial-Decay-Rate Asymmetries in B± Decays and in Neutral B Decays such as B⁰, B̄⁰ → K_s J/Ψ * Nonleptonic Decays of D Mesons: From the CKM Non- and Singly-Suppressed Decays to the Predictions of Doubly-Suppressed Decays * Charm Meson D Decays into Vector and Pseudoscalar Bosons, D → VP * Charm Meson Decays into Pseudoscalar-Pseudoscalar Mesons, D → PP * Charm Meson Decays into Vector-Vector Mesons, D → VV * Nonleptonic Decays of B Mesons * The CKM Non-Suppressed Decays * Interesting Features in the Rare B Meson Decays * CP-Noninvariance in K Meson Decays * Implications of Measurement of Re(ɛ'/ɛ) * Other Important Searches for Decay-Amplitude CP Noninvariance in Strange 
Particles * Some Generic Properties of Decay-Amplitude CP-Noninvariance * References * Top Quark Physics * Introduction * The Top Quark Exists * Upper Limit on Mt * Other Constraints on Mt * Production of Top * Hadron Colliders * SM Top Decays * Detecting SM Tops - Signatures * Model-Independent Lower Limit on Mt * Determining the Charge of a New Heavy Quark * When the Top Quark is Detected * Top Decays - A Window to New Physics? * - Decay to Supersymmetric Partners * - Decay to Charged Higgs Bosons * - Flavor-Changing Neutral Current Decays * - Other possibilities * New Information Once Top is Observed * Studying the Top Decay Couplings * The Top Quark at NLC * Measuring Mt - How Well? * Sharper Predictions for Many Observables * Measuring Vts, Vtd, Vtb and Γ(t → bW) * Top Polarization Predictions - A New Observable * Testing QCD Polarization Predictions * Correlation of Top Spin Direction with Final b, l+ Directions and Top Mass Measurements * Measuring P_±^t * General Top Couplings * One Loop Corrections to Top Decay * Decay Helicity Amplitudes * New Sources of CP Violation at the Weak Scale? * The Effect of Top Loops on Higgs Masses * Is t → Wb a Background for Studying TeV WW Interactions? * Predictions for Mt * Final Remarks * References * High Precision Radiative Corrections in the Semileptonic Decays of Hyperons * On the Decay W± → P±γ * The Decay H0 → γγ and Physics Beyond the Standard Model * Neutrino Masses and Double Beta Decay * Neutrino Oscillations in a Medium: Analytic Calculation of Nonadiabatic Transitions * Gauge-Invariant Perturbation Theory Near a Gauge Resonance * Lower Dimensional Divergences in Gauge Theories * Strange Stars: Which is the Ground State of QCD at Finite Baryon Number? * Experimental Signatures of the SU(5)c Color Model * Generalized Supersymmetric Quantum Mechanics * Chern-Simons Theories in 2 + 1 Dimensions * List of participants

  9. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. This continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods. However, few studies have applied these methods to LIBS, particularly for qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum, exploiting its smoothness. In a background correction simulation, the spline interpolation method achieved the largest signal-to-background ratio (SBR), ahead of polynomial fitting, Lorentz fitting, and the model-free method. All of these background correction methods yield larger SBR values than before correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still achieved a large SBR value, whereas polynomial fitting and the model-free method obtained low SBR values. All of the background correction methods improved the quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient before background correction is 0.9776. 
Moreover, the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting, and the model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
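As a hedged sketch (not the authors' code), spline-based background estimation can be implemented by fitting a cubic spline through anchor points chosen in emission-free regions of the spectrum and subtracting it; the anchor-selection strategy here is an assumption.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def estimate_background(wavelength, intensity, anchor_idx):
    """Fit a cubic spline through baseline anchor points (assumed to lie
    away from emission lines) and evaluate it over the whole spectrum."""
    spline = CubicSpline(wavelength[anchor_idx], intensity[anchor_idx])
    return spline(wavelength)

def correct_background(wavelength, intensity, anchor_idx):
    """Return the background-corrected spectrum."""
    return intensity - estimate_background(wavelength, intensity, anchor_idx)
```

On a synthetic spectrum (smooth baseline plus an isolated emission line), subtracting the spline leaves the line intensity essentially unchanged while flattening the baseline, which is what drives the SBR improvement reported above.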

  10. Linear optics measurements and corrections using an AC dipole in RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, G.; Bai, M.; Yang, L.

    2010-05-23

    We report recent experimental results on linear optics measurements and corrections using an AC dipole. In the RHIC 2009 run, the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them using the trim quadrupoles. The measured phase beatings were reduced by 30% and 40%, respectively, in two dedicated experiments. In the RHIC 2010 run, the AC dipole was used to measure β* and the chromatic β function. For the 0.65 m β* lattice, we observed a factor of 3 discrepancy between the model and the measured chromatic β function in the Yellow ring.
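The SVD correction idea can be sketched as a truncated-pseudoinverse least-squares solve; the response matrix, its dimensions, and the mode count below are generic illustrations, not RHIC's actual implementation.

```python
import numpy as np

def svd_correction(R, beating, n_modes):
    """Find trim-quadrupole strength changes dk that cancel the measured
    phase beating, assuming the linear model beating ~ R @ dk.

    R       -- (n_bpm_phases, n_trim_quads) response matrix
    beating -- measured phase-beating vector
    n_modes -- number of leading singular modes kept (truncation
               suppresses amplification of measurement noise)
    """
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_modes] = 1.0 / s[:n_modes]      # keep only the leading modes
    return -(Vt.T * s_inv) @ (U.T @ beating)
```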

  11. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    PubMed

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
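The sampling-error component of the taxonomy can be illustrated with the standard reliability of an aggregated group mean; this is a textbook formula consistent with the article's setup, not its full latent-variable model.

```python
def group_mean_reliability(icc, n_per_group):
    """Reliability of the observed group mean as an indicator of the L2
    construct when only sampling error is considered:
    ICC / (ICC + (1 - ICC) / n)."""
    return icc / (icc + (1.0 - icc) / n_per_group)
```

With a small intraclass correlation and few L1 individuals per group the aggregate is an unreliable L2 indicator, which is why uncorrected contextual-effect estimates can be substantially biased.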

  12. Identification of Terrestrial Reflectance From Remote Sensing

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Nolf, Scott R.; Stacy, Kathryn (Technical Monitor)

    2000-01-01

    Correcting for atmospheric effects is an essential part of surface-reflectance recovery from radiance measurements. Model-based atmospheric correction techniques enable an accurate identification and classification of terrestrial reflectances from multi-spectral imagery. Successful and efficient removal of atmospheric effects from remote-sensing data is a key factor in the success of Earth observation missions. This report assesses the performance, robustness and sensitivity of two atmospheric-correction and reflectance-recovery techniques as part of an end-to-end simulation of hyper-spectral acquisition, identification and classification.
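A minimal sketch of model-based atmospheric correction, assuming a simplified Lambertian radiative-transfer equation (path radiance plus a transmitted surface term; adjacency effects and spherical-albedo coupling are ignored, and all parameter names are illustrative):

```python
import math

def surface_reflectance(L_sensor, L_path, E_ground, T_up):
    """Invert L_sensor = L_path + T_up * rho * E_ground / pi for rho.

    L_sensor -- at-sensor radiance
    L_path   -- atmospheric path radiance (from a radiative transfer model)
    E_ground -- total irradiance reaching the surface
    T_up     -- upward (surface-to-sensor) transmittance
    """
    return math.pi * (L_sensor - L_path) / (T_up * E_ground)
```

In a full correction chain the atmospheric terms are computed per band by a radiative transfer code, and the recovered reflectance spectra feed the identification and classification stages.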

  13. An advanced method to assess the diet of free-ranging large carnivores based on scats.

    PubMed

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P; Jago, Mark; Hofer, Heribert

    2012-01-01

    The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method, but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass, because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between the consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies, which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model in which the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factors 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
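Fitting correction factor 1 with a saturating exponential can be sketched as below. The specific functional form a·(1 − e^(−b·m)) is an assumption standing in for the authors' regression; it merely has the stated property of an asymptote at large prey mass.

```python
import numpy as np
from scipy.optimize import curve_fit

def mass_per_scat(prey_mass, a, b):
    """Consumed prey mass needed to excrete one scat; rises with prey body
    mass and approaches the asymptote a for large prey."""
    return a * (1.0 - np.exp(-b * prey_mass))

def fit_correction_factor_1(prey_mass, observed):
    """Estimate (a, b) from feeding-experiment data."""
    popt, _ = curve_fit(mass_per_scat, prey_mass, observed,
                        p0=(observed.max(), 0.01))
    return popt
```

Unlike a linear regression, this form cannot predict ever-growing mass per scat for very large prey, which is the biological point made in the abstract.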

  14. An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats

    PubMed Central

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert

    2012-01-01

    Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method, but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass, because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between the consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies, which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model in which the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factors 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373

  15. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.

    PubMed

    Czarnecki, D; Zink, K

    2013-04-21

    The application of small photon fields in modern radiotherapy requires the determination of total scatter factors S_cp or field factors Ω^(f_clin,f_msr)_(Q_clin,Q_msr) with high precision. Both quantities require knowledge of the field-size- and detector-dependent correction factor k^(f_clin,f_msr)_(Q_clin,Q_msr). The aim of this study is the determination of the correction factor k^(f_clin,f_msr)_(Q_clin,Q_msr) for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k^(f_clin,f_msr)_(Q_clin,Q_msr); this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k^(f_clin,f_msr)_(Q_clin,Q_msr) is of the order of 1.2 at a field size of 1 × 1 cm² for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates no more than 2% from unity for field sizes between 10 × 10 and 1 × 1 cm², but below this field size there is a steep decrease of k^(f_clin,f_msr)_(Q_clin,Q_msr) below unity, i.e. a strong overestimation of dose. 
Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm², i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k^(f_clin,f_msr)_(Q_clin,Q_msr) is closest to unity in small fields and largely independent of the electron beam spot size.
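The correction factor studied here relates detector response ratios between the clinical field and the machine-specific reference (msr) field; a minimal sketch of the definition used in such Monte Carlo studies (function names are illustrative):

```python
def output_correction_factor(dw_clin, ddet_clin, dw_msr, ddet_msr):
    """k^{fclin,fmsr}_{Qclin,Qmsr}: (dose to water / dose to detector)
    in the clinical small field divided by the same ratio in the msr
    (e.g. 10 x 10 cm^2) reference field."""
    return (dw_clin / ddet_clin) / (dw_msr / ddet_msr)

def field_factor(detector_reading_ratio, k):
    """Field factor Omega = measured detector reading ratio x k."""
    return detector_reading_ratio * k
```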

  16. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, S; Ahmad, S; Chen, Y

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088-5097, 2008) was commissioned for our Mevion S250 proton therapy system. This is a correction-based model that multiplies correction factors (d/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter effects. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P−M]/M × 100%). Results: GACF was required because of up to 3.5% output variation with gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03 ± 0.98% (mean ± SD), and the differences for all measurements fell within ±3%. Conclusion: The model can be used clinically for the compact passively scattered proton therapy system. 
However, great care should be taken when the field size is less than 5 × 5 cm², where a direct output measurement is required due to substantial output changes caused by irregular block shapes.
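The structure of such a correction-based output model is easy to sketch: the predicted output is a product of factors, each interpolated from commissioning tables. Only a few 1D factors are shown, the 2D OCR/RSF interpolations are omitted, and every table value below is invented for illustration; this is not the commissioned model itself.

```python
import numpy as np

def predicted_output(range_cm, mod_cm, gantry_deg, iso_shift_cm, tables,
                     ref_dist_cm=100.0):
    """Output (cGy/MU) as a product of interpolated correction factors,
    loosely following ROF x SOBPF x RSF x ... x ISF x GACF."""
    sobpf = np.interp(mod_cm, *tables["sobpf"])     # modulation width factor
    rsf = np.interp(range_cm, *tables["rsf"])       # range factor (1D here)
    gacf = np.interp(gantry_deg, *tables["gacf"])   # gantry angle factor
    isf = (ref_dist_cm / (ref_dist_cm + iso_shift_cm)) ** 2  # inverse square
    return tables["rof"] * sobpf * rsf * gacf * isf
```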

  17. Effect of PVA fiber content on creep property of fiber reinforced high-strength concrete columns

    NASA Astrophysics Data System (ADS)

    Xu, Zongnan; Wang, Tao; Wang, Weilun

    2018-04-01

    The effect of PVA (polyvinyl alcohol) fiber content on the creep property of fiber-reinforced high-strength concrete columns was investigated. A correction factor for PVA fiber content was proposed, and the ACI209 creep prediction model was modified accordingly. With the concrete strength held at grade C80 and the PVA fiber content varied (volume fractions of 0%, 0.25%, 0.5%, and 1%), creep experiments on PVA fiber reinforced concrete columns were carried out, and the creep coefficient of each specimen was calculated to characterize the creep property. The influence of PVA fiber content on creep was analyzed based on the measured creep coefficients and the predictions of several frequently used creep models. The proposed correction factor for PVA fiber content was then used to modify the ACI209 creep prediction model.
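The ACI 209 time-development form of the creep coefficient, with a hypothetical multiplicative factor for PVA fiber content, can be sketched as follows. The multiplicative form of `k_pva` is an assumption; the paper's fitted factor values are not reproduced here.

```python
def creep_coefficient(t_days, phi_ultimate=2.35, k_pva=1.0):
    """ACI 209 time development: phi(t) = t^0.6 / (10 + t^0.6) * phi_u,
    scaled by an assumed fiber-content correction factor k_pva.
    2.35 is ACI 209's default ultimate creep coefficient."""
    growth = t_days ** 0.6 / (10.0 + t_days ** 0.6)
    return k_pva * growth * phi_ultimate
```

The correction factor then plays the same role as ACI 209's other multiplicative adjustments (humidity, member size, slump, and so on).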

  18. NONROAD2008a Installation and Updates

    EPA Pesticide Factsheets

    NONROAD2008 is the overall set of modeling files including the core model, default data files, graphical user interface (GUI), and reporting utility. NONROAD2008a is essentially the same, but with one correction to the NOx emission factor data file.

  19. Design of exchange-correlation functionals through the correlation factor approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlíková Přecechtělová, Jana, E-mail: j.precechtelova@gmail.com, E-mail: Matthias.Ernzerhof@UMontreal.ca; Institut für Chemie, Theoretische Chemie / Quantenchemie, Sekr. C7, Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin; Bahmann, Hilke

    A correlation factor model is developed in which the spherically averaged exchange-correlation hole of Kohn-Sham theory is factorized into an exchange hole model and a correlation factor. The exchange hole model reproduces the exact exchange energy per particle. The correlation factor is constructed in such a manner that the exchange-correlation energy correctly reduces to exact exchange in the high-density and rapidly varying limits. Four different correlation factor models are presented which satisfy varying sets of physical constraints. Three models are free from empirical adjustments to experimental data, while one correlation factor model draws on one empirical parameter. The correlation factor models are derived in detail and the resulting exchange-correlation holes are analyzed. Furthermore, the exchange-correlation energies obtained from the correlation factor models are employed to calculate total energies, atomization energies, and barrier heights. It is shown that accurate, non-empirical functionals can be constructed building on exact exchange. Avenues for further improvements are outlined as well.
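In symbols, the factorization described above reads as follows (notation assumed, following standard hole-model conventions for the spherically averaged hole):

```latex
h_{xc}(\mathbf{r}, u) = h_{x}(\mathbf{r}, u)\, f_{c}(\mathbf{r}, u),
\qquad
E_{xc} = \frac{1}{2} \int \mathrm{d}\mathbf{r}\, n(\mathbf{r})
         \int_{0}^{\infty} \mathrm{d}u\, 4\pi u\, h_{xc}(\mathbf{r}, u),
```

where the constraint f_c → 1 in the high-density and rapidly varying limits makes h_xc → h_x, so that E_xc correctly reduces to exact exchange there.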

  20. Super-global distortion correction for a rotational C-arm x-ray image intensifier.

    PubMed

    Liu, R R; Rudin, S; Bednarek, D R

    1999-09-01

    Image intensifier (II) distortion changes as a function of C-arm rotation angle because the orientation of the II with respect to the Earth's or other stray magnetic fields changes. For cone-beam computed tomography (CT), distortion correction at all angles is essential. The new super-global distortion correction consists of a model that continuously corrects II distortion not only at each location in the image but for every rotational angle of the C arm. Calibration bead images were acquired with a standard C arm in 9 in. II mode. The super-global (SG) model is obtained from the single-plane global corrections of selected calibration images at a given sampling angle interval. The fifth-order single-plane global corrections yielded a residual rms error of 0.20 pixels, while the SG model yielded an rms error of 0.21 pixels, a negligibly small difference. We evaluated the dependence of the SG model's accuracy on various factors, such as the single-plane global fitting order, the SG order, and the angular sampling interval. We found that a good SG model can be obtained using a sixth-order SG polynomial fit based on the fifth-order single-plane global correction, and that a 10 degree sampling interval is sufficient. Thus, the SG model saves processing resources and storage space. The residual errors from the mechanical errors of the x-ray system were also investigated and found comparable with the SG residual error. Additionally, a single-plane global correction was done in the cylindrical coordinate system, and physical information about pincushion distortion and S distortion was observed and analyzed; however, this method is not recommended due to its lack of computational efficiency. In conclusion, the SG model provides an accurate, fast, and simple correction for rotational C-arm images, which may be used for cone-beam CT.
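A single-plane global correction of this kind amounts to a 2D polynomial least-squares fit from distorted to true bead positions; the sketch below is illustrative (grid, distortion form, and default order are assumptions), and the SG extension would additionally fit each polynomial coefficient as a function of gantry angle.

```python
import numpy as np

def design_matrix(x, y, order):
    """All 2D monomials x^i * y^j with total degree <= order."""
    return np.stack([x**i * y**j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)], axis=1)

def fit_global_correction(distorted, true_pts, order=5):
    """Least-squares polynomial map: distorted bead (x, y) -> true (x, y)."""
    A = design_matrix(distorted[:, 0], distorted[:, 1], order)
    coef, *_ = np.linalg.lstsq(A, true_pts, rcond=None)
    return coef

def apply_correction(coef, pts, order=5):
    """Map measured (distorted) points to corrected positions."""
    return design_matrix(pts[:, 0], pts[:, 1], order) @ coef
```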

  1. Effect of attenuation correction on image quality in emission tomography

    NASA Astrophysics Data System (ADS)

    Denisova, N. V.; Ondar, M. M.

    2017-10-01

    In this paper, mathematical modeling and computer simulations of myocardial perfusion SPECT imaging are performed. The main factors affecting the quality of reconstructed SPECT images are anatomical structures, the diastolic volume of the myocardium, and the attenuation of gamma rays. The purpose of the present work is to study the effect of attenuation correction on image quality in emission tomography. A basic 2D model describing a Tc-99m distribution in a transaxial slice of the thoracic part of a patient's body was designed. This model was used to construct four phantoms simulating various anatomical shapes: two male and two female patients with normal, obese, and slender physiques were included in the study. A data acquisition model that includes the effects of non-uniform attenuation, the collimator-detector response, and Poisson statistics was developed. The projection data were calculated for 60 views in accordance with the standard myocardial perfusion SPECT imaging protocol. Image reconstructions were performed using the OSEM algorithm, which is widely used in modern SPECT systems. Two types of patient examination procedures were simulated: SPECT without attenuation correction and SPECT/CT with attenuation correction. The obtained results indicate a significant effect of attenuation correction on SPECT image quality.
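OSEM's parent algorithm, MLEM, can be sketched in a few lines; this single-subset toy version is illustrative, not the authors' code. Attenuation correction enters through the system matrix A, whose elements are weighted by the attenuation along each ray (in the uncorrected case those weights are simply omitted).

```python
import numpy as np

def mlem(A, counts, n_iter=200):
    """Maximum-likelihood EM update: x <- x * A^T(y / (A x)) / s.

    A      -- (n_rays, n_voxels) system matrix; attenuation-corrected
              reconstruction builds attenuation weights into A
    counts -- measured projection counts y
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image s
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)        # forward projection
        x *= (A.T @ (counts / proj)) / np.maximum(sens, 1e-12)
    return x
```

OSEM accelerates this by cycling the same multiplicative update over subsets of the projection views.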

  2. A simple model for the critical mass of a nuclear weapon

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2018-07-01

    A probability-based model for estimating the critical mass of a fissile isotope is developed. The model requires introducing some concepts from nuclear physics and incorporating some approximations, but gives results correct to about a factor of two for uranium-235 and plutonium-239.

  3. i4OilSpill, an operational marine oil spill forecasting model for Bohai Sea

    NASA Astrophysics Data System (ADS)

    Yu, Fangjie; Yao, Fuxin; Zhao, Yang; Wang, Guansuo; Chen, Ge

    2016-10-01

    Oil spill models can effectively simulate the trajectories and fate of oil slicks, an essential element in contingency planning and in effective response strategies for oil spill accidents. However, when applied to offshore areas such as the Bohai Sea, the trajectories and fate of oil slicks are affected by regional-scale, time-varying factors that are assumed to be constant in most present models. In fact, these factors vary much more over time in offshore regions than in the deep sea, owing to offshore bathymetric and climatic characteristics. In this paper, the challenge of parameterizing these offshore factors is tackled. Remote sensing data of the region are used to analyze the modification of wind-induced drift factors, and a well-suited parameter correction mechanism for oil spill models is established. The novelty of the algorithm is the self-adaptive modification of the drift factors derived from remote sensing data for the targeted sea region, in contrast to the empirical constants used in present models. On this basis, a new regional oil spill model (i4OilSpill) for the Bohai Sea is developed, which simulates oil transformation and fate processes with an Eulerian-Lagrangian methodology. The forecasting accuracy of the proposed model is demonstrated by comparing model simulations with subsequent satellite observations of the Penglai 19-3 oil spill accident. The performance of the model's parameter correction mechanism is evaluated by comparison with actual oil slick positions extracted from ASAR images.
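The core drift update behind such Lagrangian oil-spill models can be sketched as below, with the wind drift factor as the regionally tuned parameter; 3% is only the conventional default that the abstract argues should not be kept fixed, and the function signature is an illustration.

```python
def drift_step(pos, current, wind, wind_factor=0.03, dt=600.0):
    """Advance one slick particle: velocity = surface current + alpha * wind.

    pos, current, wind -- (x, y) tuples, positions in meters, speeds in m/s
    wind_factor        -- drift factor alpha; in a self-adaptive scheme this
                          is updated from remote sensing rather than fixed
    dt                 -- time step in seconds
    """
    vx = current[0] + wind_factor * wind[0]
    vy = current[1] + wind_factor * wind[1]
    return (pos[0] + vx * dt, pos[1] + vy * dt)
```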

  4. Brain Substrates of Recovery from Misleading Influence

    PubMed Central

    Dudai, Yadin; Dolan, Raymond J.; Sharot, Tali

    2014-01-01

    Humans are strongly influenced by their environment, a dependence that can lead to errors in judgment. Although a rich literature describes how people are influenced by others, little is known regarding the factors that predict subsequent rectification of misleading influence. Using a mediation model in combination with brain imaging, we propose a model for the correction of misinformation. Specifically, our data suggest that amygdala modulation of hippocampal mnemonic representations, during the time of misleading social influence, is associated with reduced subsequent anterior–lateral prefrontal cortex activity that reflects correction. These findings illuminate the process by which erroneous beliefs are, or fail to be, rectified and highlight how past influence constrains subsequent correction. PMID:24899698

  5. Extended hybrid-space SENSE for EPI: Off-resonance and eddy current corrected joint interleaved blip-up/down reconstruction.

    PubMed

    Zahneisen, Benjamin; Aksoy, Murat; Maclaren, Julian; Wuerslin, Christian; Bammer, Roland

    2017-06-01

    Geometric distortions along the phase-encode direction caused by off-resonant spins are still a major issue in EPI-based functional and diffusion imaging. If the off-resonance map is known, it is possible to correct for distortions. Most correction methods operate as a post-processing step on the reconstructed magnitude images. Here, we present an algebraic reconstruction method (hybrid-space SENSE) that incorporates a physics-based model of off-resonances, phase inconsistencies between k-space segments, and T2*-decay during the acquisition. The method can be used to perform a joint reconstruction of interleaved acquisitions with normal (blip-up) and inverted (blip-down) phase-encode direction, which results in a reduced g-factor penalty. A joint blip-up/down simultaneous multislice (SMS) reconstruction for SMS factor 4 in combination with twofold in-plane acceleration leads to a factor of two decrease in maximum g-factor penalty while providing off-resonance and eddy-current corrected images. We provide an algebraic framework for reconstructing diffusion-weighted EPI data that, in addition to the general applicability of hybrid-space SENSE to 2D-EPI, SMS-EPI, and 3D-EPI with arbitrary k-space coverage along z, allows modeling of arbitrary spatio-temporal effects during the acquisition period, such as off-resonances, phase inconsistencies, and T2*-decay. The most immediate benefit is a reduction in g-factor penalty when an interleaved blip-up/down acquisition strategy is chosen, which facilitates eddy-current estimation and ensures no loss of k-space encoding in regions with strong off-resonance gradients. Copyright © 2017 Elsevier Inc. All rights reserved.
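An "algebraic reconstruction incorporating a physics-based model" amounts to solving a linear system whose encoding matrix includes coil sensitivities and per-sample off-resonance phase. The 1D toy below is a simplified illustration of that idea, not the published hybrid-space SENSE implementation; all shapes and the phase model are assumptions.

```python
import numpy as np

def encoding_matrix(n, coil_sens, b0_hz, t_acq):
    """Rows = k-space samples per coil, columns = image voxels.
    Each k-space sample accrues off-resonance phase exp(-2i*pi*b0*t) on
    top of the usual Fourier encoding; coil sensitivities weight voxels."""
    F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
    phase = np.exp(-2j * np.pi * np.outer(t_acq, b0_hz))
    return np.concatenate([(F * phase) * s for s in coil_sens], axis=0)

def joint_recon(E, samples):
    """Joint least-squares reconstruction over all coils/segments."""
    img, *_ = np.linalg.lstsq(E, samples, rcond=None)
    return img
```

Because the off-resonance physics sits inside the encoding matrix, distortion is handled during reconstruction rather than as a magnitude-image post-processing step.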

  6. Calabi-Yau structures on categories of matrix factorizations

    NASA Astrophysics Data System (ADS)

    Shklyarov, Dmytro

    2017-09-01

    Using tools of complex geometry, we construct explicit proper Calabi-Yau structures, that is, non-degenerate cyclic cocycles on differential graded categories of matrix factorizations of regular functions with isolated critical points. The formulas involve the Kapustin-Li trace and its higher corrections. From the physics perspective, our result yields explicit 'off-shell' models for categories of topological D-branes in B-twisted Landau-Ginzburg models.

  7. Dose-to-water conversion for the backscatter-shielded EPID: A frame-based method to correct for EPID energy response to MLC transmitted radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwan, Benjamin J., E-mail: benjamin.zwan@uon.edu.au; O’Connor, Daryl J.; King, Brian W.

    2014-08-15

    Purpose: To develop a frame-by-frame correction for the energy response of amorphous silicon electronic portal imaging devices (a-Si EPIDs) to radiation that has transmitted through the multileaf collimator (MLC) and to integrate this correction into the backscatter shielded EPID (BSS-EPID) dose-to-water conversion model. Methods: Individual EPID frames were acquired using a Varian frame grabber and iTools acquisition software, then processed using in-house software developed in MATLAB. For each EPID image frame, the region below the MLC leaves was identified and all pixels in this region were multiplied by a factor of 1.3 to correct for the under-response of the imager to MLC transmitted radiation. The corrected frames were then summed to form a corrected integrated EPID image. This correction was implemented as an initial step in the BSS-EPID dose-to-water conversion model, which was then used to compute dose planes in a water phantom for 35 IMRT fields. The calculated dose planes, with and without the proposed MLC transmission correction, were compared to measurements in solid water using a two-dimensional diode array. Results: It was observed that the integration of the MLC transmission correction into the BSS-EPID dose model improved agreement between modeled and measured dose planes. In particular, the MLC correction produced higher pass rates for almost all Head and Neck fields tested, yielding an average pass rate of 99.8% for 2%/2 mm criteria. A two-sample independent t-test and Fisher F-test were used to show that the MLC transmission correction resulted in a statistically significant reduction in the mean and the standard deviation of the gamma values, respectively, to give a more accurate and consistent dose-to-water conversion. Conclusions: The frame-by-frame MLC transmission response correction was shown to improve the accuracy and reduce the variability of the BSS-EPID dose-to-water conversion model. The correction may be applied as a preprocessing step in any pretreatment portal dosimetry calculation and has been shown to be beneficial for highly modulated IMRT fields.
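    The frame-level procedure described above (identify the sub-MLC region in each frame, scale it by the 1.3 under-response factor, then sum the frames) can be sketched as follows. Only the 1.3 factor comes from the abstract; the function name, array shapes, and mask input are illustrative assumptions, not the authors' code.

```python
import numpy as np

MLC_RESPONSE_FACTOR = 1.3  # under-response correction for MLC-transmitted radiation

def correct_and_integrate(frames, mlc_masks):
    """frames: list of 2D pixel arrays; mlc_masks: matching boolean arrays,
    True where a pixel lies below the MLC leaves in that frame."""
    corrected = []
    for frame, mask in zip(frames, mlc_masks):
        f = frame.astype(float)           # work on a float copy of the frame
        f[mask] *= MLC_RESPONSE_FACTOR    # boost pixels shielded by the MLC
        corrected.append(f)
    return np.sum(corrected, axis=0)      # corrected integrated EPID image
```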

  8. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attention in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily totals to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363
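    As a rough illustration of the sequential-window bias factor described above, one can compute a gauge-to-satellite ratio per 7-day block and multiply the CMORPH values in that block by it. Only the 7-day window length follows the study; the function name and the fallback factor of 1.0 for rain-free windows are assumptions.

```python
import numpy as np

def sw_bias_factor(gauge, cmorph, window=7):
    """Daily gauge and CMORPH series -> one multiplicative bias factor
    per sequential (non-overlapping) window of `window` days."""
    gauge = np.asarray(gauge, dtype=float)
    cmorph = np.asarray(cmorph, dtype=float)
    factors = []
    for i in range(len(gauge) // window):
        s = slice(i * window, (i + 1) * window)
        sat_total = cmorph[s].sum()
        # ratio of accumulated gauge to accumulated satellite rainfall;
        # leave the estimate unchanged when the satellite total is zero
        factors.append(gauge[s].sum() / sat_total if sat_total > 0 else 1.0)
    return np.array(factors)

# bias-corrected estimates: multiply each 7-day block of CMORPH by its factor
```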

  9. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    PubMed

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attention in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily totals to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  10. Identifying a key physical factor sensitive to the performance of Madden-Julian oscillation simulation in climate models

    NASA Astrophysics Data System (ADS)

    Kim, Go-Un; Seo, Kyong-Hwan

    2018-01-01

    A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined by using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The result shows that MJO skill is most sensitive to the vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the ZC strength can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous convection and large-scale circulation phase relationship. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve simulation of the MJO in climate models, we propose that this delay of circulation in response to convection needs to be corrected in the cumulus parameterization scheme.
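    The ZC diagnostic described above can be approximated numerically as the vertically integrated zonal wind convergence, ZC = -(1/g) ∫ (∂u/∂x) dp. The discrete trapezoidal form and variable names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def zonal_convergence(u, x, p_levels):
    """u: (n_levels, n_x) intraseasonal zonal wind (m/s) on pressure levels;
    x: zonal grid positions (m); p_levels: pressure levels (Pa).
    Returns the vertically integrated zonal wind convergence at each x."""
    dudx = np.gradient(u, x, axis=1)                 # du/dx on each level
    dp = np.diff(p_levels)[:, None]                  # layer thicknesses
    integrand = 0.5 * (dudx[1:] + dudx[:-1])         # trapezoidal rule
    return -np.sum(integrand * dp, axis=0) / G       # convergence = -divergence
```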

  11. Correction of autofluorescence intensity for epithelial scattering by optical coherence tomography: a phantom study

    NASA Astrophysics Data System (ADS)

    Pahlevaninezhad, H.; Lee, A. M. D.; Hyun, C.; Lam, S.; MacAulay, C.; Lane, P. M.

    2013-03-01

    In this paper, we conduct a phantom study for modeling the autofluorescence (AF) properties of tissue. A combined optical coherence tomography (OCT) and AF imaging system is proposed to measure the strength of the AF signal in terms of the scattering layer thickness and concentration. The combined AF-OCT system is capable of estimating the AF loss due to scattering in the epithelium using the thickness and scattering concentration calculated from the co-registered OCT images. We define a correction factor to account for scattering losses in the epithelium and calculate a scattering-corrected AF signal. We believe the scattering-corrected AF will reduce the diagnostic false-positive rate in the early detection of airway lesions due to confounding factors such as increased epithelial thickness and inflammation.

  12. Factors Influencing the Completion of the GED in a Federal Correctional Setting a Multiple Regression Correlation-Predictive Study

    ERIC Educational Resources Information Center

    Akers, Kimberly

    2013-01-01

    Correctional education's primary goal is to reduce recidivism and increase employment among ex-offenders. The Bureau of Prison's practical goal in its mandatory GED program is to maximize the number of inmates obtaining the GED in a given time period. The purpose of this research is to model the number of instructional hours an inmate requires to…

  13. Monte Carlo simulated corrections for beam commissioning measurements with circular and MLC shaped fields on the CyberKnife M6 System: a study including diode, microchamber, point scintillator, and synthetic microdiamond detectors.

    PubMed

    Francescon, P; Kilby, W; Noll, J M; Masi, L; Satariano, N; Russo, S

    2017-02-07

    Monte Carlo simulation was used to calculate correction factors for output factor (OF), percentage depth-dose (PDD), and off-axis ratio (OAR) measurements with the CyberKnife M6 System. These include the first such data for the InCise MLC. Simulated detectors include diodes, air-filled microchambers, a synthetic microdiamond detector, and a point scintillator. Individual perturbation factors were also evaluated. OF corrections show similar trends to previous studies. With a 5 mm fixed collimator the diode correction to convert a measured OF to the corresponding point dose ratio varies between -6.1% and -3.5% for the diode models evaluated, while in a 7.6 mm × 7.7 mm MLC field these are -4.5% to -1.8%. The corresponding microchamber corrections are +9.9% to +10.7% and +3.5% to +4.0%. The microdiamond corrections have a maximum of -1.4% for the 7.5 mm and 10 mm collimators. The scintillator corrections are <1% in all beams. Measured OF showed uncorrected inter-detector differences >15%, reducing to <3% after correction. PDD corrections at d > d_max were <2% for all detectors except the IBA Razor, for which a maximum 4% correction was observed at 300 mm depth. OAR corrections were smaller inside the field than outside. At the beam edge, microchamber OAR corrections were up to 15%, mainly caused by density perturbations, which blur the measured penumbra. With larger beams and depths, PTW and IBA diode corrections outside the beam were up to 20%, while the Edge detector needed smaller corrections, although these did vary with orientation. These effects are most noticeable for large field size and depth, where they are dominated by fluence and stopping power perturbations. The microdiamond OAR corrections were <3% outside the beam. This paper provides OF corrections that can be used for commissioning new CyberKnife M6 Systems and retrospectively checking estimated corrections used previously. We recommend that the PDD and OAR corrections be used to guide detector selection and inform the evaluation of results rather than to explicitly correct measurements.

  14. Monte Carlo simulated corrections for beam commissioning measurements with circular and MLC shaped fields on the CyberKnife M6 System: a study including diode, microchamber, point scintillator, and synthetic microdiamond detectors

    NASA Astrophysics Data System (ADS)

    Francescon, P.; Kilby, W.; Noll, J. M.; Masi, L.; Satariano, N.; Russo, S.

    2017-02-01

    Monte Carlo simulation was used to calculate correction factors for output factor (OF), percentage depth-dose (PDD), and off-axis ratio (OAR) measurements with the CyberKnife M6 System. These include the first such data for the InCise MLC. Simulated detectors include diodes, air-filled microchambers, a synthetic microdiamond detector, and a point scintillator. Individual perturbation factors were also evaluated. OF corrections show similar trends to previous studies. With a 5 mm fixed collimator the diode correction to convert a measured OF to the corresponding point dose ratio varies between -6.1% and -3.5% for the diode models evaluated, while in a 7.6 mm × 7.7 mm MLC field these are -4.5% to -1.8%. The corresponding microchamber corrections are +9.9% to +10.7% and +3.5% to +4.0%. The microdiamond corrections have a maximum of -1.4% for the 7.5 mm and 10 mm collimators. The scintillator corrections are <1% in all beams. Measured OF showed uncorrected inter-detector differences >15%, reducing to <3% after correction. PDD corrections at d > d_max were <2% for all detectors except the IBA Razor, for which a maximum 4% correction was observed at 300 mm depth. OAR corrections were smaller inside the field than outside. At the beam edge, microchamber OAR corrections were up to 15%, mainly caused by density perturbations, which blur the measured penumbra. With larger beams and depths, PTW and IBA diode corrections outside the beam were up to 20%, while the Edge detector needed smaller corrections, although these did vary with orientation. These effects are most noticeable for large field size and depth, where they are dominated by fluence and stopping power perturbations. The microdiamond OAR corrections were <3% outside the beam. This paper provides OF corrections that can be used for commissioning new CyberKnife M6 Systems and retrospectively checking estimated corrections used previously. We recommend that the PDD and OAR corrections be used to guide detector selection and inform the evaluation of results rather than to explicitly correct measurements.

  15. Measurement and modeling of out-of-field doses from various advanced post-mastectomy radiotherapy techniques

    NASA Astrophysics Data System (ADS)

    Yoon, Jihyung; Heins, David; Zhao, Xiaodong; Sanders, Mary; Zhang, Rui

    2017-12-01

    Increasingly advanced radiotherapy techniques have been adopted for post-mastectomy radiotherapy (PMRT). Patient dose reconstruction is challenging for these advanced techniques because they increase the low out-of-field dose area, while the accuracy of out-of-field dose calculations by current commercial treatment planning systems (TPSs) is poor. We aim to measure and model the out-of-field radiation doses from various advanced PMRT techniques. PMRT treatment plans for an anthropomorphic phantom were generated, including volumetric modulated arc therapy with standard and flattening-filter-free photon beams, mixed beam therapy, 4-field intensity modulated radiation therapy (IMRT), and tomotherapy. We measured doses in the phantom where the TPS calculated doses were lower than 5% of the prescription dose using thermoluminescent dosimeters (TLDs). The TLD measurements were corrected by two additional energy correction factors, namely the out-of-beam out-of-field (OBOF) correction factor K_OBOF and the in-beam out-of-field (IBOF) correction factor K_IBOF, which were determined by separate measurements using an ion chamber and TLD. A simple analytical model was developed to predict out-of-field dose as a function of distance from the field edge for each PMRT technique. The root mean square discrepancies between measured and calculated out-of-field doses were within 0.66 cGy/Gy for all techniques. The IBOF doses were highly scattered and should be evaluated case by case. One can easily combine the measured out-of-field dose here with the in-field dose calculated by the local TPS to reconstruct organ doses for a specific PMRT patient if the same treatment apparatus and technique were used.
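    A minimal sketch of how the two energy correction factors might be applied to a TLD reading, together with an assumed exponential fall-off standing in for the analytical out-of-field model. The abstract does not state the model's functional form, so the exponential and all names here are illustrative assumptions only.

```python
import math

def corrected_tld_dose(reading, in_beam, k_obof=1.0, k_ibof=1.0):
    """Apply K_IBOF to in-beam readings and K_OBOF to out-of-beam readings
    (both factors assumed to be supplied from separate measurements)."""
    return reading * (k_ibof if in_beam else k_obof)

def out_of_field_dose(distance_cm, d0, mu):
    """Assumed model: out-of-field dose (cGy/Gy) decaying exponentially
    with distance from the field edge; d0 and mu are fitted per technique."""
    return d0 * math.exp(-mu * distance_cm)
```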

  16. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet’s orbit under gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system’s parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters and the limits of their applicability are given.

  17. Correlation of full-scale drag predictions with flight measurements on the C-141A aircraft. Phase 2: Wind tunnel test, analysis, and prediction techniques. Volume 1: Drag predictions, wind tunnel data analysis and correlation

    NASA Technical Reports Server (NTRS)

    Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.

    1974-01-01

    The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275-scale model at Reynolds numbers up to 3.05 x 10^6 per mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat-plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.

  18. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    USGS Publications Warehouse

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John Y.

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
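    A LEAN-style correction can be sketched as a linear regression of the lidar DEM error (lidar minus RTK-GPS elevation) on NDVI at the survey points, with the fitted error then subtracted across the DEM. The linear form, coefficient layout, and function names are illustrative assumptions; the published method may use a richer model.

```python
import numpy as np

def fit_lean(lidar_at_gps, gps_z, ndvi_at_gps):
    """Fit DEM error = intercept + slope * NDVI at the RTK-GPS survey points."""
    err = np.asarray(lidar_at_gps, float) - np.asarray(gps_z, float)
    A = np.column_stack([np.ones_like(err), np.asarray(ndvi_at_gps, float)])
    coef, *_ = np.linalg.lstsq(A, err, rcond=None)  # least-squares fit
    return coef  # [intercept, NDVI slope]

def apply_lean(dem, ndvi, coef):
    """Subtract the NDVI-predicted vegetation error from the lidar DEM."""
    return dem - (coef[0] + coef[1] * ndvi)
```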

  19. Applications of multivariate modeling to neuroimaging group analysis: a comprehensive alternative to univariate general linear model.

    PubMed

    Chen, Gang; Adleman, Nancy E; Saad, Ziad S; Leibenluft, Ellen; Cox, Robert W

    2014-10-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance-covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse-Geisser and Huynh-Feldt) with MVT-WS. To validate the MVM methodology, we performed simulations to assess the controllability for false positives and power achievement. A real FMRI dataset was analyzed to demonstrate the capability of the MVM approach. The methodology has been implemented into an open source program 3dMVM in AFNI, and all the statistical tests can be performed through symbolic coding with variable names instead of the tedious process of dummy coding. Our data indicates that the severity of sphericity violation varies substantially across brain regions. 
The differences among various modeling methodologies were addressed through direct comparisons between the MVM approach and some of the GLM implementations in the field, and the following two issues were raised: a) the improper formulation of test statistics in some univariate GLM implementations when a within-subject factor is involved in a data structure with two or more factors, and b) the unjustified presumption of uniform sphericity violation and the practice of estimating the variance-covariance structure through pooling across brain regions. Published by Elsevier Inc.

  20. Dosimetry for Small and Nonstandard Fields

    NASA Astrophysics Data System (ADS)

    Junell, Stephanie L.

    The proposed small and non-standard field dosimetry protocol from the joint International Atomic Energy Agency (IAEA) and American Association of Physicists in Medicine working group introduces new reference field conditions for ionization chamber based reference dosimetry. Absorbed dose beam quality conversion factors (kQ factors) corresponding to this formalism were determined for three different models of ionization chambers: a Farmer-type ionization chamber, a thimble ionization chamber, and a small volume ionization chamber. Beam quality correction factor measurements were made in a specially developed cylindrical polymethyl methacrylate (PMMA) phantom and a water phantom using thermoluminescent dosimeters (TLDs) and alanine dosimeters to determine dose to water. The TLD system for absorbed dose to water determination in high energy photon and electron beams was fully characterized as part of this dissertation. The behavior of the beam quality correction factor was observed as it transfers the calibration coefficient from the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL) 60Co reference beam to the small field calibration conditions of the small field formalism. TLD-determined beam quality correction factors for the calibration conditions investigated ranged from 0.97 to 1.30 and had associated standard deviations from 1% to 3%. The alanine-determined beam quality correction factors ranged from 0.996 to 1.293. Volume averaging effects were observed with the Farmer-type ionization chamber in the small static field conditions. The proposed protocol's new composite-field reference condition demonstrated its potential to reduce or remove ionization chamber volume dependencies, but the measured beam quality correction factors were not equal to the standard CoP's kQ, indicating a change in beam quality in the new composite-field reference condition relative to the standard broad beam reference conditions. The TLD- and alanine-determined beam quality correction factors in the composite-field reference conditions were approximately 3% greater and differed by more than one standard deviation from the published TG-51 kQ values for all three chambers.

  1. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, some correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for the charge lost from the collecting volume and the ksc factor corrects for the scattering of photons into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation. These correction factors are calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
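    For illustration, ke and ksc enter the air-kerma evaluation as multiplicative factors on the charge-per-air-mass reading. The simplified formula and variable names below are assumptions (a full primary-standard evaluation includes further corrections); only the ke and ksc values come from the abstract.

```python
W_OVER_E = 33.97  # J/C, mean energy expended in air per unit charge

def air_kerma(charge_C, air_mass_kg, ke=1.0704, ksc=0.9982, other=1.0):
    """Simplified sketch: K_air = (Q / m_air) * (W/e) * ke * ksc * (other
    corrections, lumped into `other`). Returns kerma in Gy."""
    return (charge_C / air_mass_kg) * W_OVER_E * ke * ksc * other
```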

  2. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.

  3. Non-model-based correction of respiratory motion using beat-to-beat 3D spiral fat-selective imaging.

    PubMed

    Keegan, Jennifer; Gatehouse, Peter D; Yang, Guang-Zhong; Firmin, David N

    2007-09-01

    To demonstrate the feasibility of retrospective beat-to-beat correction of respiratory motion, without the need for a respiratory motion model. A high-resolution three-dimensional (3D) spiral black-blood scan of the right coronary artery (RCA) of six healthy volunteers was acquired over 160 cardiac cycles without respiratory gating. One spiral interleaf was acquired per cardiac cycle, prior to each of which a complete low-resolution fat-selective 3D spiral dataset was acquired. The respiratory motion (3D translation) on each cardiac cycle was determined by cross-correlating a region of interest (ROI) in the fat around the artery in the low-resolution datasets with that on a reference end-expiratory dataset. The measured translations were used to correct the raw data of the high-resolution spiral interleaves. Beat-to-beat correction provided consistently good results, with the image quality being better than that obtained with a fixed superior-inferior tracking factor of 0.6 and better than (N = 5) or equal to (N = 1) that achieved using a subject-specific retrospective 3D translation motion model. Non-model-based correction of respiratory motion using 3D spiral fat-selective imaging is feasible, and in this small group of volunteers produced better-quality images than a subject-specific retrospective 3D translation motion model. (c) 2007 Wiley-Liss, Inc.

  4. Continuous quantum error correction for non-Markovian decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089

    2007-08-15

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.

  5. Tensor algebra-based geometric methodology to codify central chirality on organic molecules.

    PubMed

    García-Jacas, C R; Marrero-Ponce, Y; Hernández-Ortega, T; Martinez-Mayorga, K; Cabrera-Leyva, L; Ledesma-Romero, J C; Aguilera-Fernández, I; Rodríguez-León, A R

    2017-06-01

    A novel mathematical procedure to codify chiral features of organic molecules in the QuBiLS-MIDAS framework is introduced. This procedure constitutes a generalization to that commonly used to date, where the values 1 and -1 (correction factor) are employed to weight the molecular vectors when each atom is labelled as R (rectus) or S (sinister) according to the Cahn-Ingold-Prelog rules. Therefore, values in the range [Formula: see text] with steps equal to 0.25 may be accounted for. The atoms labelled R or S can have negative and positive values assigned (e.g. -3 for an R atom and 1 for an S atom, or vice versa), opposed values (e.g. -3 for an R atom and 3 for an S atom, or vice versa), positive values (e.g. 3 for an R atom and 1 for an S atom) or negative values (e.g. -3 for an R atom and -1 for an S atom). These proposed Chiral QuBiLS-MIDAS 3D-MDs are real numbers, non-symmetric and reduced to 'classical' (non-chiral) QuBiLS-MIDAS 3D-MDs when symmetry is not codified (correction factor equal to zero). In this report, only the factors with opposed values were considered with the purpose of demonstrating the feasibility of this proposal. From QSAR modelling carried out on four chemical datasets (Cramer's steroids, fenoterol stereoisomer derivatives, N-alkylated 3-(3-hydroxyphenyl)-piperidines, and perindoprilat stereoisomers), it was demonstrated that the use of several correction factors contributes to the building of models with greater robustness and predictive ability than those reported in the literature, as well as with respect to the models exclusively developed with QuBiLS-MIDAS 3D-MDs based on the factor 1 | -1. In conclusion, it can be stated that this novel strategy constitutes a suitable alternative to computed chirality-based descriptors, contributing to the development of good models to predict properties depending on symmetry.

  6. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    NASA Astrophysics Data System (ADS)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) data are one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, currently available global topographic data suffer from limitations for application in 2-D hydraulic modeling, mainly vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation-corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM taking into account both vegetation height and the SRTM vegetation signal. Secondly, a newly released DEM that removes both vegetation bias and random errors (i.e. Multi-Error Removed DEM) is employed to overcome the limitation of height errors. Finally, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of raised segments in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error Removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four different DEMs, and favorable results have been obtained on the corrected DEM.
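
    Step (c) above, removing the positive bias of raised segments so that bed elevation never increases in the downstream direction, can be sketched with a simple monotonic clamp. This is a hypothetical minimal version: the paper's correction is based on bed slope, whereas this sketch only enforces a non-increasing long profile.

```python
def enforce_flow_connectivity(profile):
    """Clamp a river long-profile (listed upstream to downstream) so
    that each bed elevation is no higher than the one before it,
    removing raised segments that would block modelled flow."""
    corrected = []
    running_min = float("inf")
    for z in profile:
        running_min = min(running_min, z)
        corrected.append(running_min)
    return corrected
```

    For example, a raised 11 m segment between 9 m and 8 m neighbours is lowered to 9 m, restoring connectivity.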

  7. Data-driven sensitivity inference for Thomson scattering electron density measurement systems.

    PubMed

    Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro

    2017-01-01

    We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on 210 sets of experimental data, we evaluated the correction factor of the sensitivity and the noise amplitude for each channel. The correction factor varies by ≈10% and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of the spatial derivative inference was also demonstrated.

  8. Efficiency of single-particle engines

    NASA Astrophysics Data System (ADS)

    Proesmans, Karel; Driesen, Cedric; Cleuren, Bart; Van den Broeck, Christian

    2015-09-01

    We study the efficiency of a single-particle Szilard and Carnot engine. Within a first order correction to the quasistatic limit, the work distribution is found to be Gaussian and the correction factor to average work and efficiency only depends on the piston speed. The stochastic efficiency is studied for both models and the recent findings on efficiency fluctuations are confirmed numerically. Special features are revealed in the zero-temperature limit.

  9. Topographic Correction Module at Storm (TC@Storm)

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Cotar, K.; Veljanovski, T.; Pehani, P.; Ostir, K.

    2015-04-01

    Different solar positions in combination with terrain slope and aspect result in different illumination of inclined surfaces. As a consequence, retrieved satellite data cannot be accurately transformed to spectral reflectance, which should depend only on the land cover. Topographic correction should remove this effect and enable further automatic processing of higher-level products. The topographic correction module TC@STORM was developed within the SPACE-SI automatic near-real-time image processing chain STORM. It combines a physical approach with the standard Minnaert method. The total irradiance is modelled as a three-component irradiance: direct (dependent on incidence angle, sun zenith angle and slope), diffuse from the sky (dependent mainly on the sky-view factor), and diffuse reflected from the terrain (dependent on the sky-view factor and albedo). For the computation of diffuse irradiance from the sky we assume anisotropic brightness of the sky, and iteratively estimate a linear combination of 10 different models to provide the best results. Depending on the data resolution, we mask shadows based on radiometric (image) or geometric properties. The method was tested on RapidEye, Landsat 8, and PROBA-V data. Final results of the correction were evaluated and statistically validated for various topography settings and land cover classes. Images show great improvements in shaded areas.
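
    The direct component above depends on the angle between the sun and the local surface normal. A standard formula for the cosine of the solar incidence angle on a slope (a generic illustration, not code from TC@STORM itself) is:

```python
import math

def cos_incidence(sun_zenith, sun_azimuth, slope, aspect):
    """Cosine of the solar incidence angle on an inclined surface;
    all angles in radians. Reduces to cos(sun_zenith) on flat ground."""
    return (math.cos(sun_zenith) * math.cos(slope)
            + math.sin(sun_zenith) * math.sin(slope)
            * math.cos(sun_azimuth - aspect))
```

    Values less than or equal to zero indicate self-shadowed slopes, for which the direct irradiance component vanishes.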

  10. Effects of source shape on the numerical aperture factor with a geometrical-optics model.

    PubMed

    Wan, Der-Shen; Schmit, Joanna; Novak, Erik

    2004-04-01

    We study the effects of an extended light source on the calibration of an interference microscope, also referred to as an optical profiler. Theoretical and experimental numerical aperture (NA) factors for circular and linear light sources along with collimated laser illumination demonstrate that the shape of the light source or effective aperture cone is critical for a correct NA factor calculation. In practice, more-accurate results for the NA factor are obtained when a linear approximation to the filament light source shape is used in a geometric model. We show that previously measured and derived NA factors show some discrepancies because a circular rather than linear approximation to the filament source was used in the modeling.

  11. Aeroelastic modeling for the FIT team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    Some details of the aeroelastic modeling of the F/A-18 aircraft done for the Functional Integration Technology (FIT) team's research in integrated dynamics modeling and how these are combined with the FIT team's integrated dynamics model are described. Also described are mean axis corrections to elastic modes, the addition of nonlinear inertial coupling terms into the equations of motion, and the calculation of internal loads time histories using the integrated dynamics model in a batch simulation program. A video tape made of a loads time history animation was included as a part of the oral presentation. Also discussed is work done in one of the areas of unsteady aerodynamic modeling identified as needing improvement, specifically, in correction factor methodologies for improving the accuracy of stability derivatives calculated with a doublet lattice code.

  12. Space-based retrieval of NO2 over biomass burning regions: quantifying and reducing uncertainties

    NASA Astrophysics Data System (ADS)

    Bousserez, N.

    2014-10-01

    The accuracy of space-based nitrogen dioxide (NO2) retrievals from solar backscatter radiances critically depends on a priori knowledge of the vertical profiles of NO2 and aerosol optical properties. This information is used to calculate an air mass factor (AMF), which accounts for atmospheric scattering and is used to convert the measured line-of-sight "slant" columns into vertical columns. In this study we investigate the impact of biomass burning emissions on the AMF in order to quantify NO2 retrieval errors in the Ozone Monitoring Instrument (OMI) products over these sources. Sensitivity analyses are conducted using the Linearized Discrete Ordinate Radiative Transfer (LIDORT) model. The NO2 and aerosol profiles are obtained from a 3-D chemistry-transport model (GEOS-Chem), which uses the Fire Locating and Monitoring of Burning Emissions (FLAMBE) daily biomass burning emission inventory. Aircraft in situ data collected during two field campaigns, the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) and the Dust and Biomass-burning Experiment (DABEX), are used to evaluate the modeled aerosol optical properties and NO2 profiles over Canadian boreal fires and West African savanna fires, respectively. Over both domains, the effect of biomass burning emissions on the AMF through the modified NO2 shape factor can be as high as -60%. A sensitivity analysis also revealed that the effect of aerosol and shape factor perturbations on the AMF is very sensitive to surface reflectance and clouds. As an illustration, the aerosol correction can range from -20 to +100% for different surface reflectances, while the shape factor correction varies from -70 to -20%. 
Although previous studies have shown that in clear-sky conditions the effect of aerosols on the AMF was in part implicitly accounted for by the modified cloud parameters, here it is suggested that when clouds are present above a surface layer of scattering aerosols, an explicit aerosol correction would be beneficial to the NO2 retrieval. Finally, a new method that uses slant column information to correct for shape-factor-related AMF error over NOx emission sources is proposed, with possible application to near-real-time OMI retrievals.
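
    The slant-to-vertical conversion described above can be illustrated with the common formulation in which the AMF is the shape-factor-weighted mean of per-layer scattering weights. This is a generic sketch of the standard relationship, not the LIDORT/GEOS-Chem implementation; variable names are illustrative.

```python
def air_mass_factor(scattering_weights, shape_factor):
    """AMF as the average of per-layer scattering weights, weighted by
    the (not necessarily normalized) NO2 shape factor."""
    num = sum(w * s for w, s in zip(scattering_weights, shape_factor))
    return num / sum(shape_factor)

def vertical_column(slant_column, amf):
    """Convert a measured slant column to a vertical column."""
    return slant_column / amf
```

    Biomass burning perturbs the shape factor toward the surface, where scattering weights are typically smaller, which is how it can change the AMF and hence the retrieved vertical column.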

  13. Spectroscopic Factors from the Single Neutron Pickup ^64Zn(d,t)

    NASA Astrophysics Data System (ADS)

    Leach, Kyle; Garrett, P. E.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Towner, I. S.; Ball, G. C.; Faestermann, T.; Krücken, R.; Hertenberger, R.; Wirth, H.-F.

    2010-11-01

    A great deal of attention has recently been paid to high-precision superallowed β-decay Ft values. With the availability of extremely high-precision (<0.1%) experimental data, the precision of individual Ft values is now dominated by the ˜1% theoretical corrections. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga), where the isospin-symmetry-breaking (ISB) correction calculations become more difficult due to the truncated model space. Experimental spectroscopic factors for these nuclei are important for the identification of the relevant orbitals that should be included in the model space of the calculations. Motivated by this need, the single-nucleon transfer reaction ^64Zn(d,t)^63Zn was conducted at the Maier-Leibnitz-Laboratory (MLL) of TUM/LMU in Munich, Germany, using a 22 MeV polarized deuteron beam from the tandem Van de Graaff accelerator and the TUM/LMU Q3D magnetic spectrograph, with angular distributions from 10° to 60°. Results from this experiment will be presented and implications for calculations of ISB corrections in the superallowed β decay of ^62Ga will be discussed.

  14. Development of a Geomagnetic Storm Correction to the International Reference Ionosphere E-Region Electron Densities Using TIMED/SABER Observations

    NASA Technical Reports Server (NTRS)

    Mertens, C. J.; Xu, X.; Fernandez, J. R.; Bilitza, D.; Russell, J. M., III; Mlynczak, M. G.

    2009-01-01

    Auroral infrared emission observed from the TIMED/SABER broadband 4.3 micron channel is used to develop an empirical geomagnetic storm correction to the International Reference Ionosphere (IRI) E-region electron densities. The observation-based proxy used to develop the storm model is SABER-derived NO+(v) 4.3 micron volume emission rates (VER). A correction factor is defined as the ratio of storm-time NO+(v) 4.3 micron VER to a quiet-time climatological averaged NO+(v) 4.3 micron VER, which is linearly fit to available geomagnetic activity indices. The initial version of the E-region storm model, called STORM-E, is most applicable within the auroral oval region. The STORM-E predictions of E-region electron densities are compared to incoherent scatter radar electron density measurements during the Halloween 2003 storm events. Future STORM-E updates will extend the model outside the auroral oval.
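
    The correction factor defined above (the ratio of storm-time to quiet-time VER, linearly fit to geomagnetic activity indices) amounts to a plain least-squares fit. The sketch below is an illustrative reconstruction with made-up data handling, not the STORM-E code.

```python
import numpy as np

def fit_storm_correction(storm_ver, quiet_ver, activity_index):
    """Least-squares linear fit of the storm/quiet VER ratio against a
    geomagnetic activity index; returns (slope, intercept)."""
    ratio = np.asarray(storm_ver) / np.asarray(quiet_ver)
    slope, intercept = np.polyfit(activity_index, ratio, 1)
    return slope, intercept

def correction_factor(activity, slope, intercept):
    """Predicted storm-time correction factor for a given index value."""
    return slope * activity + intercept
```

    The predicted factor would then scale the quiet-time climatological E-region densities during storm conditions.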

  15. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and digitally repairing damaged film is now a mainstream approach. In this paper, we propose a sparse-color-correspondence-based technique to remove fading flicker from old films. Our method combines multiple frames to establish a simple correction model and involves three key steps. Firstly, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Secondly, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and experimental results show that it removes fading flicker efficiently.
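
    The second step, low-rank factorization of a matrix with missing entries, can be illustrated in its simplest (rank-1) form with masked alternating least squares. This is a generic sketch of the technique, not the paper's estimator.

```python
import numpy as np

def rank1_complete(M, W, iters=200):
    """Fit M ~ outer(u, v) on the observed entries (mask W, 1 = observed)
    by alternating least squares, then return the completed matrix."""
    m, n = M.shape
    u = np.ones(m)
    v = np.ones(n)
    for _ in range(iters):
        # closed-form row/column updates of the masked least-squares problem
        u = (W * M) @ v / np.maximum((W * (v**2)[None, :]).sum(1), 1e-12)
        v = (W * M).T @ u / np.maximum((W * (u**2)[:, None]).sum(0), 1e-12)
    return np.outer(u, v)
```

    On noiseless rank-1 data with a connected observation pattern, this recovers the missing entries exactly (up to numerical precision).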

  16. Factors influencing behavior and transferability of habitat models for a benthic stream fish

    Treesearch

    Kevin N. Leftwich; Paul L. Angermeier; C. Andrew Dolloff

    1997-01-01

    The authors examined the predictive power and transferability of habitat-based models by comparing associations of tangerine darter Percina aurantiaca and stream habitat at local and regional scales in North Fork Holston River (NFHR) and Little River, VA. The models correctly predicted the presence or absence of tangerine darters in NFHR for 64 percent (local model)...

  17. The Impact of Model Misspecification on Parameter Estimation and Item-Fit Assessment in Log-Linear Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver

    2012-01-01

    Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…

  18. Surface-wave amplitude analysis for array data with non-linear waveform fitting: Toward high-resolution attenuation models of the upper mantle

    NASA Astrophysics Data System (ADS)

    Hamada, K.; Yoshizawa, K.

    2013-12-01

    Anelastic attenuation of seismic waves provides us with valuable information on temperature and water content in the Earth's mantle. While seismic velocity models have been investigated by many researchers, anelastic attenuation (or Q) models have yet to be investigated in detail, mainly due to the intrinsic difficulties and uncertainties in the amplitude analysis of observed seismic waveforms. To increase the horizontal resolution of surface-wave attenuation models on a regional scale, we have developed a new method of fully non-linear waveform fitting to measure inter-station phase velocities and amplitude ratios simultaneously, using the Neighborhood Algorithm (NA) as a global optimizer. Model parameter space (perturbations of phase speed and amplitude ratio) is explored to fit two observed waveforms on a common great-circle path by perturbing both the phase and amplitude of the fundamental-mode surface waves. This method has been applied to observed waveform data of the USArray from 2007 to 2008, and a large number of inter-station amplitude and phase speed measurements have been collected in a period range from 20 to 200 seconds. We have constructed preliminary phase speed and attenuation models using the observed phase and amplitude data, with careful consideration of the effects of elastic focusing and station correction factors for amplitude data. The phase velocity models correlate well with conventional tomographic results in North America on a large scale; e.g., a significant slow velocity anomaly in volcanic regions in the western United States. The preliminary surface-wave attenuation results achieved a better variance reduction when the amplitude data were inverted for attenuation models in conjunction with corrections for receiver factors.
We have also taken into account the amplitude correction for elastic focusing based on geometrical ray theory, but its effects on the final model are somewhat limited, and our attenuation model shows anti-correlation with the phase velocity models; i.e., lower attenuation is found in slower velocity areas, which cannot readily be explained by temperature effects alone. Some former global-scale studies (e.g., Dalton et al., JGR, 2006) indicated that ray-theoretical focusing corrections on amplitude data tend to eliminate such anti-correlation of phase speed and attenuation, but this seems not to work sufficiently well for our regional-scale model, which is affected by stronger velocity gradients relative to global-scale models. Thus, the estimated elastic focusing effects based on ray theory may be underestimated in our regional-scale studies. More rigorous ways to estimate the focusing corrections, as well as data selection criteria for amplitude measurements, are required to achieve high-resolution attenuation models on regional scales in the future.

  19. An evaluation of the carbon balance technique for estimating emission factors and fuel consumption in forest fires

    Treesearch

    Nelson, Jr. Ralph M.

    1982-01-01

    Eighteen experimental fires were used to compare measured and calculated values for emission factors and fuel consumption to evaluate the carbon balance technique. The technique is based on a model for the emission factor of carbon dioxide, corrected for the production of other emissions, and which requires measurements of effluent concentrations and air volume in the...

  20. The power metric: a new statistically robust enrichment-type metric for virtual screening applications with early recovery capability.

    PubMed

    Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans

    2017-01-01

    A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the fraction of the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and ratio of the number of active compounds to the total number of compounds, and at the same time being sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
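
    From the definition above, the power metric at a given cutoff is TPR / (TPR + FPR). A direct sketch (illustrative, not the authors' reference implementation):

```python
def power_metric(scores, labels, cutoff):
    """Power metric TPR / (TPR + FPR) at a score cutoff; labels are
    truthy for actives. Assumes at least one active, one inactive,
    and a nonzero TPR + FPR at the chosen cutoff."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and not y)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and not y)
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return tpr / (tpr + fpr)
```

    For instance, with three of six compounds active and a cutoff that captures two actives and one inactive, TPR = 2/3 and FPR = 1/3, giving a power metric of 2/3.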

  1. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Factor F is also compared to other correction factors, i.e., F_ASTM and F_JIS.
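
    For context, such a geometric correction factor enters the standard four-probe formula multiplicatively: for an infinite thin sheet F = 1 and the sheet resistance is (π/ln 2)·V/I ≈ 4.532·V/I. The sketch below assumes F is already known (e.g., from the theory verified above) rather than computing it.

```python
import math

def sheet_resistance(voltage, current, F=1.0):
    """Four-probe sheet resistance (ohms per square) with geometric
    correction factor F; F = 1 recovers the infinite-sheet result
    (pi / ln 2) * V / I."""
    return F * (math.pi / math.log(2)) * voltage / current
```

    For a finite disk sample, F < 1 corrects the overestimate that the infinite-sheet formula would otherwise produce.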

  2. Source and Message Factors in Persuasion: A Reply to Stiff's Critique of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Petty, Richard E.; And Others

    1987-01-01

    Answers James Stiff's criticism of the Elaboration Likelihood Model (ELM) of persuasion. Corrects certain misperceptions of the ELM and criticizes Stiff's meta-analysis that compares ELM predictions with those derived from Kahneman's elastic capacity model. Argues that Stiff's presentation of the ELM and the conclusions he draws based on the data…

  3. Connecting dark matter annihilation to the vertex functions of Standard Model fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Jason; Light, Christopher, E-mail: jkumar@hawaii.edu, E-mail: lightc@hawaii.edu

    We consider scenarios in which dark matter is a Majorana fermion which couples to Standard Model fermions through the exchange of charged mediating particles. The matrix elements for various dark matter annihilation processes are then related to one-loop corrections to the fermion-photon vertex, where dark matter and the charged mediators run in the loop. In particular, in the limit where Standard Model fermion helicity mixing is suppressed, the cross section for dark matter annihilation to various final states is related to corrections to the Standard Model fermion charge form factor. These corrections can be extracted in a gauge-invariant manner from collider cross sections. Although current measurements from colliders are not precise enough to provide useful constraints on dark matter annihilation, improved measurements at future experiments, such as the International Linear Collider, could improve these constraints by several orders of magnitude, allowing them to surpass the limits obtainable by direct observation.

  4. Electrical resistivity of liquid lanthanides using charge hard sphere system

    NASA Astrophysics Data System (ADS)

    Sonvane, Y. A.; Thakor, P. B.; Jani, A. R.

    2013-06-01

    In the present paper, we have studied the electrical resistivity (ρ) of liquid lanthanides. To describe the structural information, the structure factor S(q) due to the charged hard sphere (CHS) reference system is used along with our newly constructed model potential. To see the influence of exchange and correlation effects on the electrical resistivity (ρ), we have used different local field correction functions, namely those of Hartree (H), Sarkar et al. (S) and Taylor (T). Lastly, we conclude that the proper choice of the model potential, along with the local field correction function, plays a vital role in the study of the electrical resistivity (ρ).

  5. [Risk factors negatively affecting on the formation of musculoskeletal system in children and adolescents in the present conditions].

    PubMed

    Mirskaya, N B

    2013-01-01

    Identifying risk factors affecting the formation of the musculoskeletal system (MSS) in children and adolescents is considered by the author as a necessary condition for the implementation of prevention, timely diagnosis and adequate correction of MSS disorders and diseases. The introduction into the educational process of a conceptual model, developed by the author for the first time, for the prevention and correction of MSS disorders and diseases in schoolchildren made it possible to reduce the prevalence of functional disorders and early forms of MSS diseases among students of a number of comprehensive schools in Moscow by 50%.

  6. Higgs radiation off top quarks at the Tevatron and the LHC.

    PubMed

    Beenakker, W; Dittmaier, S; Krämer, M; Plümper, B; Spira, M; Zerwas, P M

    2001-11-12

    Higgs bosons can be searched for in the channels pp̄/pp → tt̄H + X at the Fermilab Tevatron and the CERN Large Hadron Collider (LHC). We have calculated the QCD corrections to these processes in the standard model at next-to-leading order. The higher-order corrections reduce the renormalization and factorization scale dependence considerably and stabilize the theoretical predictions for the cross sections. At the central scale μ = (2m_t + M_H)/2 the properly defined K factors are slightly below unity for the Tevatron (K ≈ 0.8) and slightly above unity for the LHC (K ≈ 1.2).

  7. A Correction to the Stress-Strain Curve During Multistage Hot Deformation of 7150 Aluminum Alloy Using Instantaneous Friction Factors

    NASA Astrophysics Data System (ADS)

    Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui

    2018-04-01

    Multistage stress-strain curve correction based on an instantaneous friction factor was studied for axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated based on continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. Then, an instantaneous friction factor equation was fitted by mathematic analysis. After verification by comparing single-pass flow stress correction with traditional average friction factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and validated by multistage relative softening calculations. This research provides a broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and friction optimization in finite element analysis.

  8. Determination of the k_(Qclin,Qmsr)^(fclin,fmsr) correction factors for detectors used with an 800 MU/min CyberKnife(®) system equipped with fixed collimators and a study of detector response to small photon beams using a Monte Carlo method.

    PubMed

    Moignier, C; Huet, C; Makovicka, L

    2014-07-01

    In a previous work, output ratio (ORdet) measurements were performed for the 800 MU/min CyberKnife(®) at the Oscar Lambret Center (COL, France) using several commercially available detectors as well as using two passive dosimeters (EBT2 radiochromic film and micro-LiF TLD-700). The primary aim of the present work was to determine by Monte Carlo calculations the output factor in water (OFMC,w) and the [Formula: see text] correction factors. The secondary aim was to study the detector response in small beams using Monte Carlo simulation. The LINAC head of the CyberKnife(®) was modeled using the PENELOPE Monte Carlo code system. The primary electron beam was modeled using a monoenergetic source with a radial gaussian distribution. The model was adjusted by comparisons between calculated and measured lateral profiles and tissue-phantom ratios obtained with the largest field. In addition, the PTW 60016 and 60017 diodes, PTW 60003 diamond, and micro-LiF were modeled. Output ratios with modeled detectors (ORMC,det) and OFMC,w were calculated and compared to measurements, in order to validate the model for smallest fields and to calculate [Formula: see text] correction factors, respectively. For the study of the influence of detector characteristics on their response in small beams; first, the impact of the atomic composition and the mass density of silicon, LiF, and diamond materials were investigated; second, the material, the volume averaging, and the coating effects of detecting material on the detector responses were estimated. Finally, the influence of the size of silicon chip on diode response was investigated. Looking at measurement ratios (uncorrected output factors) compared to the OFMC,w, the PTW 60016, 60017 and Sun Nuclear EDGE diodes systematically over-responded (about +6% for the 5 mm field), whereas the PTW 31014 Pinpoint chamber systematically under-responded (about -12% for the 5 mm field). 
ORdet measured with the SFD diode and PTW 60003 diamond detectors were in good agreement with OFMC,w except for the 5 mm field size (about -7.5% for the diamond and +3% for the SFD). A good agreement with OFMC,w was obtained with the EBT2 film and micro-LiF dosimeters (deviation less than 1.4% for all fields investigated). [Formula: see text] correction factors for several detectors used in this work have been calculated. The impact of atomic composition on the dosimetric response of detectors was found to be insignificant, unlike the mass density and size of the detecting material. The results obtained with the passive dosimeters showed that they can be used for small beam OF measurements without correction factors. The study of detector response showed that ORdet is depending on the mass density, the volume averaging, and the coating effects of the detecting material. Each effect was quantified for the PTW 60016 and 60017 diodes, the micro-LiF, and the PTW 60003 diamond detectors. None of the active detectors used in this work can be recommended as a reference for small field dosimetry, but an improved diode detector with a smaller silicon chip coated with tissue-equivalent material is anticipated (by simulation) to be a reliable small field dosimetric detector in a nonequilibrium field.

  9. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    NASA Astrophysics Data System (ADS)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often rendering the data ineffective for analysis of the topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation-specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, computed from lidar waveform-derived features, tidal-datum-based elevation, distance from shoreline, and other lidar digital elevation model based variables using nonparametric regression, will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different nonparametric regression algorithms were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing the best regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for the correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image-based remote sensing data such as multi/hyperspectral imagery.
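
    The point-by-point correction idea can be sketched as a nonparametric regression from per-point features to elevation error. The study used TreeNet's stochastic gradient boosting; this dependency-free sketch swaps in a simple k-nearest-neighbour regressor, and all feature vectors and errors are invented:

    ```python
    # Point-by-point lidar elevation correction via nonparametric regression.
    # Stand-in technique: k-nearest-neighbour regression (the paper used
    # stochastic gradient boosting). Training features (waveform width,
    # amplitude, distance from shoreline) and errors are made-up toy values.
    import math

    def knn_predict(train_X, train_y, x, k=3):
        """Predict the elevation error at x as the mean of the k nearest training points."""
        dists = sorted((math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y))
        nearest = [yi for _, yi in dists[:k]]
        return sum(nearest) / len(nearest)

    # toy training set: (waveform_width_ns, amplitude, shore_dist_m) -> error (m)
    X = [(4.0, 90, 10), (6.5, 60, 40), (5.0, 75, 25), (8.0, 40, 80)]
    y = [0.05, 0.18, 0.10, 0.25]

    raw_elev = 1.32                      # uncorrected DEM elevation (m)
    err = knn_predict(X, y, (5.2, 70, 30), k=3)
    print(round(raw_elev - err, 3))      # corrected ground elevation (m)
    ```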

  10. Improving Dose Determination Accuracy in Nonstandard Fields of the Varian TrueBeam Accelerator

    NASA Astrophysics Data System (ADS)

    Hyun, Megan A.

    In recent years, the use of flattening-filter-free (FFF) linear accelerators in radiation-based cancer therapy has gained popularity, especially for hypofractionated treatments (high doses of radiation given in few sessions). However, significant challenges to accurate radiation dose determination remain. If physicists cannot accurately determine radiation dose in a clinical setting, cancer patients treated with these new machines will not receive safe, accurate, and effective treatment. In this study, an extensive characterization of two commonly used clinical radiation detectors (ionization chambers and diodes) and several potential reference detectors (thermoluminescent dosimeters, plastic scintillation detectors, and alanine pellets) was performed to investigate their use in these challenging, nonstandard fields. From this characterization, reference detectors were identified for multiple beam sizes, and correction factors were determined to improve dosimetric accuracy for ionization chambers and diodes. A validated computational (Monte Carlo) model of the TrueBeam(TM) accelerator, including FFF beam modes, was also used to calculate these correction factors, which compared favorably to measured results. Small-field corrections of up to 18% were shown to be necessary for clinical detectors such as microionization chambers. Because the impact of these large effects on treatment delivery is not well known, a treatment planning study was completed using actual hypofractionated brain, spine, and lung treatments delivered at the UW Carbone Cancer Center. This study demonstrated that improperly applying these detector correction factors can have a substantial impact on patient treatments. 
This thesis work has taken important steps toward improving the accuracy of FFF dosimetry through rigorous measurement- and Monte-Carlo-determined correction factors, the validation of an important published protocol (TG-51) for use with FFF reference fields, and a demonstration of the clinical significance of small-field correction factors. These results will facilitate the safe, accurate, and effective use of this treatment modality in the clinic.

  11. Eliminating bias in rainfall estimates from microwave links due to antenna wetting

    NASA Astrophysics Data System (ADS)

    Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch

    2014-05-01

    Commercial microwave links (MWLs) are point-to-point radio systems which are widely used in telecommunication networks. They operate at frequencies where the transmitted power is mainly disturbed by precipitation. Thus, signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising, since MWLs cover about 20% of the surface area. Unfortunately, MWL rainfall estimates are often positively biased due to additional attenuation caused by antenna wetting. To correct MWL observations a posteriori for the wet antenna effect (WAE), both empirically and physically based models have been suggested. However, it is challenging to calibrate these models, because the wet antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding, etc.) and on different climatic factors (temperature, dew point, wind velocity and direction, etc.). Instead, it seems straightforward to keep the antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding to model-based corrections in reducing the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85-km-long commercial dual-polarization microwave link at 38 GHz and 5 optical disdrometers. The MWL was operated without shielding from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, computed as the difference between the measured and theoretical attenuation. The theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded periods, the total bias caused by the WAE was 0.74 dB, which shielding reduced to 0.39 dB for the horizontal polarization (vertical: from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 
2013) was more effective: it reduced the bias of the unshielded periods to 0.07 dB for the horizontal polarization (vertical: 0.06 dB). Applying the same model-based correction to the shielded periods reduced the bias even further, to -0.03 dB and -0.01 dB, respectively. This indicates that additional attenuation could also be caused by other effects, such as reflection of sidelobes from wet surfaces and other environmental factors. Further, the model-based corrections do not capture the nature of the WAE exactly, but more likely provide only an empirical correction. This claim is supported by a detailed analysis of particular events, which reveals that the performance of both antenna shielding and model-based correction differs substantially from event to event. Further investigation based on direct observation of antenna wetting and other environmental variables is needed to identify the nature of the attenuation bias more precisely. Schleiss, M., J. Rieckermann, and A. Berne, 2013: Quantification and modeling of wet-antenna attenuation for commercial microwave links. IEEE Geosci. Remote Sens. Lett., doi:10.1109/LGRS.2012.2236074.
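
    The key quantity above, the attenuation attributable to antenna wetting, is the residual between the measured link attenuation and the theoretical rain attenuation derived from the disdrometer drop size distributions. A minimal sketch with invented dB values:

    ```python
    # Wet antenna attenuation (WAE) estimated as the mean residual between
    # measured MWL attenuation and theoretical (disdrometer-derived) rain
    # attenuation. All dB values below are illustrative, not the field data.

    def wet_antenna_bias(measured_db, theoretical_db):
        """Mean attenuation residual attributed to antenna wetting (dB)."""
        residuals = [m - t for m, t in zip(measured_db, theoretical_db)]
        return sum(residuals) / len(residuals)

    measured    = [3.1, 5.4, 2.2, 4.0]   # MWL attenuation during rain (dB)
    theoretical = [2.4, 4.6, 1.6, 3.3]   # rain attenuation from drop spectra (dB)
    print(round(wet_antenna_bias(measured, theoretical), 2))   # -> 0.7
    ```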

  12. LA-ICP-MS depth profile analysis of apatite: Protocol and implications for (U-Th)/He thermochronometry

    NASA Astrophysics Data System (ADS)

    Johnstone, Samuel; Hourigan, Jeremy; Gallagher, Christopher

    2013-05-01

    Heterogeneous concentrations of α-producing nuclides in apatite have been recognized through a variety of methods. The presence of zonation in apatite complicates both traditional α-ejection corrections and diffusive models, both of which operate under the assumption of homogeneous concentrations. In this work we develop a method for measuring radial concentration profiles of 238U and 232Th in apatite by laser ablation ICP-MS depth profiling. We then focus on one application of this method: removing the bias introduced by applying inappropriate α-ejection corrections. Formal treatment of laser ablation ICP-MS depth profile calibration for apatite includes construction and calibration of matrix-matched standards and quantification of rates of elemental fractionation. From this we conclude that matrix-matched standards provide more robust monitors of fractionation rate and concentration than doped silicate glass standards. We apply laser ablation ICP-MS depth profiling to apatites from three unknown populations and to small, intact crystals of Durango fluorapatite. Accurate and reproducible Durango apatite dates suggest that prolonged exposure to laser drilling does not affect cooling ages. Intracrystalline concentrations vary by at least a factor of 2 in the majority of the samples analyzed, but concentration variation exceeds 5x in only 5 grains, and 10x in only 1 of the 63 grains analyzed. Modeling of synthetic concentration profiles suggests that for concentration variations of 2x and 10x, the choice between homogeneous and zonation-dependent α-ejection corrections could lead to age biases of >5% and >20%, respectively. However, models based on measured concentration profiles generated biases exceeding 5% in only 13 of the 63 cases modeled. Application of zonation-dependent α-ejection corrections did not significantly reduce the age dispersion present in any of the populations studied. 
This suggests that factors beyond homogeneous α-ejection corrections are the dominant source of overdispersion in apatite (U-Th)/He cooling ages.
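
    The homogeneous α-ejection correction discussed above is usually expressed as a retention factor F_T; for a spherical grain the standard approximation is F_T = 1 - (3/4)(S/R) + (1/16)(S/R)^3, with S the α stopping distance and R the grain radius. A sketch with illustrative numbers (the age and dimensions are invented, not from the paper):

    ```python
    # Homogeneous alpha-ejection correction for an idealized spherical grain:
    # F_T = 1 - (3/4)*(S/R) + (1/16)*(S/R)**3, S = alpha stopping distance,
    # R = grain radius. Input values are illustrative placeholders.

    def ft_sphere(stop_um, radius_um):
        """Alpha retention factor F_T for a sphere (homogeneous parent distribution)."""
        q = stop_um / radius_um
        return 1 - 0.75 * q + (q ** 3) / 16

    raw_age = 30.0                  # measured (uncorrected) He age, Ma (assumed)
    ft = ft_sphere(20.0, 60.0)      # ~20 um stopping distance, 60 um radius
    print(round(raw_age / ft, 1))   # ejection-corrected age, Ma
    ```

    A zonation-dependent correction replaces the homogeneous F_T with one weighted by the measured radial parent concentration profile; the abstract's point is that the two can differ enough to bias ages by >5% when concentrations vary severalfold.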

  13. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method: layers which lie deeper within the specimen appear relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching, and/or other factors. Because of these effects, quantitative analysis of the images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve, computed by summing all pixel values in each section. The fitted curve is then used to calculate correction factors for each section, and a new compensated section series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels whose measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
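
    The parametric baseline described above (and improved on by the paper's robust method) can be sketched as a log-linear fit of an exponential decay to the section mean intensities, followed by per-section rescaling. The intensity stack below is synthetic:

    ```python
    # Depth intensity compensation baseline: fit I(z) = I0 * exp(-a*z) to the
    # section mean intensities via least squares on log(I), then multiply each
    # section by I0 / I_fit(z). Synthetic data, not the paper's robust variant.
    import math

    def decay_fit(means):
        """Least-squares fit of log(mean) = log(I0) - a*z; returns (I0, a)."""
        z = list(range(len(means)))
        logs = [math.log(m) for m in means]
        n = len(z)
        zbar, lbar = sum(z) / n, sum(logs) / n
        a = -sum((zi - zbar) * (li - lbar) for zi, li in zip(z, logs)) \
            / sum((zi - zbar) ** 2 for zi in z)
        i0 = math.exp(lbar + a * zbar)
        return i0, a

    means = [100.0, 81.0, 65.5, 53.1, 43.0]   # synthetic ~exp(-0.21*z) stack
    i0, a = decay_fit(means)
    factors = [i0 / (i0 * math.exp(-a * z)) for z in range(len(means))]
    print([round(f, 2) for f in factors])      # per-section correction factors
    ```

    The robust version the paper proposes differs in excluding pixels that deviate from this decay model from the fit, rather than fitting to raw section means.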

  14. Improved scatter correction with factor analysis for planar and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, the various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed by incorporating x-ray CT scans, accurate correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases image contrast. While a number of methods for scatter correction have been proposed in the past, in this work we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisitions of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) were performed using a dual-head gamma camera. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA yields two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was also applied for scatter correction. FA resulted in significant improvements in image accuracy over the DEW method for both planar and tomographic data sets. 
FA can be used as a user-independent approach for scatter correction in nuclear medicine.
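
    The DEW baseline that FA is compared against has a simple closed form: scatter in the photopeak window is estimated from a lower "scatter" window scaled by a multiplier k (k = 0.5 is the classic choice for equal window widths). A sketch with illustrative counts:

    ```python
    # Dual-energy-window (DEW) scatter correction, the comparison baseline:
    # primary ~ photopeak counts minus k times the lower-window counts.
    # Counts and the k = 0.5 default are illustrative.

    def dew_correct(photopeak_counts, scatter_counts, k=0.5):
        """Scatter-corrected photopeak counts, clamped at zero."""
        return max(photopeak_counts - k * scatter_counts, 0.0)

    print(dew_correct(1000.0, 400.0))   # -> 800.0
    ```

    The FA approach instead decomposes many sub-window images into photo-peak and scatter factors without a user-chosen multiplier, which is the sense in which it is user-independent.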

  15. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to show large deviations as field size decreases. No standard exists to resolve this difference in measurement. We observed differences of up to 14% between output factors measured with two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we decreased the deviation between the two detectors from 14.8% to 3.4%.
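
    The workflow described above, daisy-chaining through an intermediate field to the reference and then multiplying by a published correction factor, can be sketched as follows. All readings and the k value are invented placeholders, not the paper's measurements:

    ```python
    # Small-field output factor workflow: daisy-chain the cone reading to the
    # 10.4 cm reference field, then apply a published Monte Carlo correction
    # factor for that detector and cone size. Numbers are hypothetical.

    def daisy_chain(m_cone, m_interm, m_interm_ref, m_ref):
        """Output ratio chained through an intermediate field: detector A measures
        cone and intermediate field; detector B measures intermediate and reference."""
        return (m_cone / m_interm) * (m_interm_ref / m_ref)

    def corrected_of(output_ratio, k_mc):
        """Apply a published Monte Carlo derived correction factor."""
        return output_ratio * k_mc

    orr = daisy_chain(0.62, 0.95, 0.97, 1.00)   # hypothetical readings
    print(round(corrected_of(orr, 0.96), 3))    # hypothetical published k = 0.96
    ```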

  16. Ion recombination correction in carbon ion beams.

    PubMed

    Rossomme, S; Hopfgartner, J; Lee, N D; Delor, A; Thomas, R A S; Romano, F; Fukumura, A; Vynckier, S; Palmans, H

    2016-07-01

    In this work, ion recombination is studied as a function of energy and depth in carbon ion beams. Measurements were performed in three different passively scattered carbon ion beams with energies of 62 MeV/n, 135 MeV/n, and 290 MeV/n using various types of plane-parallel ionization chambers. Experimental results were compared with two analytical models for initial recombination: one generally used for photon beams, and another, developed by Jaffé, that takes into account the ionization density along the ion track. The effect on the ion recombination correction of varying the ionization chamber orientation with respect to the direction of the ion tracks was also investigated. The variation of the ion recombination correction factor as a function of depth was studied for a Markus ionization chamber in the 62 MeV/n nonmodulated carbon ion beam; this variation can be related to the depth distribution of linear energy transfer. Results show that the theory for photon beams is not applicable to carbon ion beams. On the other hand, by optimizing the value of the ionization density and the initial mean-square radius, good agreement is found between Jaffé's theory and the experimental results. As predicted by Jaffé's theory, the results confirm that ion recombination corrections strongly decrease with an increasing angle between the ion tracks and the electric field lines. For the Markus ionization chamber, the variation of the ion recombination correction factor with depth was modeled adequately by a sigmoid function, approximately constant in the plateau and strongly increasing in the Bragg peak region to values of up to 1.06. Except in the distal edge region, all experimental results are accurately described by Jaffé's theory. Experimental results confirm that ion recombination in the investigated carbon ion beams is dominated by initial recombination. 
Ion recombination corrections are found to be significant and cannot be neglected for reference dosimetry and for the determination of depth dose curves in carbon ion beams.
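
    The reported depth dependence, a correction factor that is roughly constant in the plateau and rises to about 1.06 in the Bragg peak, lends itself to the sigmoid parameterization mentioned above. A hedged sketch: the plateau/peak levels follow the abstract, but the midpoint and width parameters are invented:

    ```python
    # Sigmoid model of the ion recombination correction k_s versus depth:
    # ~1.00 in the plateau, saturating near 1.06 in the Bragg peak region.
    # d_mid and width are illustrative parameters, not fitted values.
    import math

    def ks_depth(d_mm, k_plateau=1.005, k_peak=1.06, d_mid=28.0, width=1.5):
        s = 1.0 / (1.0 + math.exp(-(d_mm - d_mid) / width))
        return k_plateau + (k_peak - k_plateau) * s

    print(round(ks_depth(5.0), 3))    # plateau region
    print(round(ks_depth(35.0), 3))   # near the Bragg peak
    ```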

  17. On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation

    NASA Astrophysics Data System (ADS)

    Ouwerkerk, Ronald; Bottomley, Paul A.

    2001-02-01

    Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 which ignores the effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and for flip angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, where T1* denotes the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange. 
Even in systems where metabolite concentrations change, accurate saturation corrections are possible without much loss in SNR.
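
    The single-exponential saturation factor underlying these corrections has the standard steady-state form f = sin(θ)(1 - E)/(1 - E cos(θ)) with E = exp(-TR/T1). A sketch with an assumed T1 and signal value (not the paper's data):

    ```python
    # Partial saturation correction from the single-exponential T1 model:
    # observed steady-state signal = fully relaxed signal * f, where
    # f = sin(theta) * (1 - E) / (1 - E*cos(theta)),  E = exp(-TR/T1).
    import math

    def saturation_factor(tr_s, t1_s, flip_deg):
        e = math.exp(-tr_s / t1_s)
        th = math.radians(flip_deg)
        return math.sin(th) * (1 - e) / (1 - e * math.cos(th))

    # e.g. a metabolite with T1 ~ 4 s (assumed), TR = 2 s, 60 degree flip
    f = saturation_factor(2.0, 4.0, 60.0)
    corrected = 100.0 / f   # observed signal of 100 (a.u.), saturation-corrected
    print(round(f, 3), round(corrected, 1))
    ```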

  18. The relationship between prior suicidal behavior and mortality among individuals in community corrections.

    PubMed

    Clark, C Brendan; Waesche, Matthew C; Hendricks, Peter S; McCullumsmith, Cheryl B; Redmond, Nicole; Katiyar, Nandan; Lawler, Robert Marsh; Cropsey, Karen L

    2013-01-01

    Individuals under community corrections supervision have multiple risk factors for mortality, including exposure to a criminal environment, drug use, social stress, and a lack of medical care, which predispose them to accidents, homicides, medical morbidities, and suicide. The literature suggests that prior suicidal behavior may be a particularly potent risk factor for mortality among individuals in the criminal justice system. This study sought to extend the link between history of a suicide attempt and future mortality to a community corrections population. Using an archival dataset (N = 18,260) collected from 2002 to 2007 of individuals monitored under community corrections supervision for an average of 217 days (SD = 268), we examined the association between a past history of a suicide attempt and mortality. A Cox proportional hazards model controlling for age, race, gender, and substance dependence indicated that a past history of a suicide attempt was independently associated with time to mortality and demonstrated the second greatest effect after gender. These data suggest the need for a greater focus on screening and preventive services, particularly for individuals with a history of suicidal behavior, to reduce the risk of mortality in community corrections populations.

  19. C5 nerve palsy after posterior reconstruction surgery: predictive risk factors of the incidence and critical range of correction for kyphosis.

    PubMed

    Kurakawa, Takuto; Miyamoto, Hiroshi; Kaneyama, Shuichi; Sumi, Masatoshi; Uno, Koki

    2016-07-01

    It has been reported that the incidence of post-operative segmental nerve palsy, such as C5 palsy, is higher in posterior reconstruction surgery than in conventional laminoplasty. Correction of kyphosis may be related to this complication. The aim of this study was to elucidate the risk factors for post-operative C5 palsy and the critical range of sagittal realignment in posterior instrumentation surgery. Eighty-eight patients (mean age 64.0 years) were involved. The diagnoses were: spondylosis with kyphosis in 33, rheumatoid arthritis in 27, athetoid cerebral palsy in 17, and other conditions in 11. The patients were divided into two groups: Group P, patients with post-operative C5 palsy, and Group NP, patients without C5 palsy. The correction angle of kyphosis and the pre-operative diameter of the C4/5 foramen on CT were compared between the two groups. Multivariate logistic regression analysis was used to determine the critical range of realignment and the risk factors affecting the incidence of post-operative C5 palsy. Seventeen (19.3%) of the 88 patients developed C5 palsy. The correction angle of kyphosis in Group P (15.7°) was significantly larger than that in Group NP (4.5°). In Group P, the pre-operative diameter of the intervertebral foramen at C4/5 (3.2 mm) was significantly smaller than in Group NP (4.1 mm). The multivariate analysis demonstrated that the risk factors were the correction angle and the pre-operative diameter of the C4/5 intervertebral foramen. The logistic regression model showed that a correction angle exceeding 20° was critical for developing the palsy when the C4/5 foraminal diameter was around 4.1 mm, and that the risk was higher when the C4/5 foraminal diameter was less than 2.7 mm, regardless of the correction. This study has indicated the risk factors for post-operative C5 palsy and the critical range of realignment of the cervical spine after posterior instrumented surgery.
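
    The reported thresholds can be restated as a simple screening rule. This is a deliberate simplification of the paper's fitted logistic regression (whose coefficients are not given in the abstract); only the 20°, 4.1 mm, and 2.7 mm cut points come from the study:

    ```python
    # Decision-rule restatement of the abstract's thresholds (a simplification,
    # not the fitted logistic model): correction > 20 degrees is critical when
    # the C4/5 foraminal diameter is around 4.1 mm or less, and any correction
    # is high-risk below 2.7 mm.

    def c5_palsy_high_risk(correction_deg, foramen_mm):
        if foramen_mm < 2.7:
            return True            # high risk regardless of correction angle
        if foramen_mm <= 4.1 and correction_deg > 20.0:
            return True            # large correction with a narrow foramen
        return False

    print(c5_palsy_high_risk(25.0, 3.5))   # -> True
    print(c5_palsy_high_risk(10.0, 4.5))   # -> False
    ```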

  20. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR): regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative to SEM. While it has a higher standard error bias than SEM, it has comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  1. Study of the Influence of Age in 18F-FDG PET Images Using a Data-Driven Approach and Its Evaluation in Alzheimer's Disease.

    PubMed

    Jiang, Jiehui; Sun, Yiwu; Zhou, Hucheng; Li, Shaoping; Huang, Zhemin; Wu, Ping; Shi, Kuangyu; Zuo, Chuantao; Neuroimaging Initiative, Alzheimer's Disease

    2018-01-01

    The 18F-FDG PET scan is one of the most frequently used neural imaging scans. However, the influence of age has proven to be the greatest interfering factor in many clinical dementia diagnoses when analyzing 18F-FDG PET images, since radiologists encounter difficulties when deciding whether the abnormalities in specific regions correlate with normal aging, disease, or both. In the present paper, the authors aimed to define specific brain regions and determine an age-correction mathematical model. A data-driven approach was used based on 255 healthy subjects. The inferior frontal gyrus, the left medial part and the left medial orbital part of the superior frontal gyrus, the right insula, the left anterior cingulate, the left median cingulate and paracingulate gyri, and the bilateral superior temporal gyri were found to have a strong negative correlation with age. For evaluation, the age-correction model was applied to 262 healthy subjects and 50 AD subjects selected from the ADNI database, and partial correlations between the mean SUVR and three clinical measures were computed before and after age correction. All correlation coefficients were significantly improved after the age correction. The proposed model was effective in the age correction of both healthy and AD subjects.
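
    An age correction of the kind described removes the fitted age trend from each subject's regional uptake so that residual differences reflect disease rather than normal aging. A generic linear sketch; the slope and reference age are invented, not the paper's fitted values:

    ```python
    # Generic linear age correction of a regional SUVR: subtract the fitted
    # age trend so values are referenced to a common age. The slope and
    # reference age below are hypothetical, not the study's fitted model.

    def age_correct_suvr(suvr, age, slope=-0.004, ref_age=65.0):
        """Return the SUVR adjusted to the reference age given a fitted age slope."""
        return suvr - slope * (age - ref_age)

    print(round(age_correct_suvr(1.10, 80.0), 3))   # older subject adjusted upward
    print(round(age_correct_suvr(1.10, 65.0), 3))   # at the reference age: unchanged
    ```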

  2. On Choosing a Rational Flight Trajectory to the Moon

    NASA Astrophysics Data System (ADS)

    Gordienko, E. S.; Khudorozhkov, P. A.

    2017-12-01

    The algorithm for choosing a trajectory of spacecraft flight to the Moon is discussed. The characteristic velocity values needed for correcting the flight trajectory and a braking maneuver are estimated using the Monte Carlo method. The profile of insertion and flight to a near-circular polar orbit with an altitude of 100 km of an artificial lunar satellite (ALS) is given. The case of two corrections applied during the flight and braking phases is considered. The flight to an ALS orbit is modeled in the geocentric geoequatorial nonrotating coordinate system with the influence of perturbations from the Earth, the Sun, and the Moon factored in. The characteristic correction costs corresponding to corrections performed at different time points are examined. Insertion phase errors, the errors of performing the needed corrections, and the errors of determining the flight trajectory parameters are taken into account.

  3. Distribution of Attenuation Factor Beneath the Japanese Islands

    NASA Astrophysics Data System (ADS)

    Fujihara, S.; Hashimoto, M.

    2001-12-01

    In this research, we estimated the distribution of the attenuation factor of seismic waves, which is closely related to the above-mentioned inelastic parameters. Velocity records of events from the Freesia network and the J-array network were used. The events were selected based on the following criteria: (a) events with JMA magnitudes from 3.8 to 5.0 and hypocentral distances from 20 km to 200 km, (b) events with JMA magnitudes from 5.1 to 6.8 and hypocentral distances from 200 km to 10_?, (c) all events deeper than 30 km with an S/N ratio greater than 2. After correcting for the instrument response, P-wave spectra were estimated. Following Boatwright (1991), the observed spectra were modeled by theoretical spectra assuming the relation Aij(f) = Si(f) Pij(f) Cj(f), where Aij(f), Si(f), Pij(f), and Cj(f) are the observed spectrum, source spectrum, propagation effect, and site effect, respectively. Brune's model (1970) was assumed for the source spectrum. No frequency dependence of the attenuation factor was assumed. The global standard velocity model (AK135) was used for ray tracing, and ellipticity corrections and station elevation corrections were applied. The block sizes are 50 km by 50 km laterally and increase vertically. As a result of the analysis, the attenuation structure beneath the Japanese Islands down to a depth of 180 km was reconstructed with relatively good resolution. Low-Q regions are clearly seen in central Hokkaido, western Hokkaido, the Tohoku region, the Hida region, the Izu region, and southern Kyushu. A relatively sharp decrease in Q associated with the asthenosphere can be seen below a depth of 70 km.
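
    In decompositions like Aij(f) = Si(f) Pij(f) Cj(f) with frequency-independent Q, the path term contributes a log-linear spectral decay, ln(A/S) = const - π f t*, where t* (travel time divided by Q) summarizes whole-path attenuation. A simplified sketch on synthetic noiseless spectra (not the study's inversion, which solves for Q block by block):

    ```python
    # t* estimation sketch: after dividing out the source spectrum, fit the
    # log-amplitude decay with frequency, ln(A_obs/A_source) = c - pi*f*t*.
    # Spectra below are synthetic; the real study inverts for Q per block.
    import math

    def tstar_fit(freqs, log_ratio):
        """Least-squares slope of log spectral ratio vs frequency, converted to t*."""
        n = len(freqs)
        fbar = sum(freqs) / n
        lbar = sum(log_ratio) / n
        slope = sum((f - fbar) * (l - lbar) for f, l in zip(freqs, log_ratio)) \
            / sum((f - fbar) ** 2 for f in freqs)
        return -slope / math.pi

    freqs = [2.0, 4.0, 6.0, 8.0, 10.0]                 # Hz
    true_tstar = 0.03                                  # s (synthetic value)
    logs = [-math.pi * f * true_tstar for f in freqs]  # noiseless decay
    print(round(tstar_fit(freqs, logs), 4))            # recovers 0.03
    ```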

  4. Long-term correction of canine hemophilia B by gene transfer of blood coagulation factor IX mediated by adeno-associated viral vector.

    PubMed

    Herzog, R W; Yang, E Y; Couto, L B; Hagstrom, J N; Elwell, D; Fields, P A; Burton, M; Bellinger, D A; Read, M S; Brinkhous, K M; Podsakoff, G M; Nichols, T C; Kurtzman, G J; High, K A

    1999-01-01

    Hemophilia B is a severe X-linked bleeding diathesis caused by the absence of functional blood coagulation factor IX, and is an excellent candidate for treatment of a genetic disease by gene therapy. Using an adeno-associated viral vector, we demonstrate sustained expression (>17 months) of factor IX in a large-animal model at levels that would have a therapeutic effect in humans (up to 70 ng/ml, adequate to achieve phenotypic correction, in an animal injected with 8.5x10(12) vector particles/kg). The five hemophilia B dogs treated showed stable, vector dose-dependent partial correction of the whole blood clotting time and, at higher doses, of the activated partial thromboplastin time. In contrast to other viral gene delivery systems, this minimally invasive procedure, consisting of a series of percutaneous intramuscular injections at a single timepoint, was not associated with local or systemic toxicity. Efficient gene transfer to muscle was shown by immunofluorescence staining and DNA analysis of biopsied tissue. Immune responses against factor IX were either absent or transient. These data provide strong support for the feasibility of the approach for therapy of human subjects.

  5. Coarse-grained modeling of polyethylene melts: Effect on dynamics

    DOE PAGES

    Peters, Brandon L.; Salerno, K. Michael; Agrawal, Anupriya; ...

    2017-05-23

    The distinctive viscoelastic behavior of polymers results from a coupled interplay of motion on multiple length and time scales. Capturing the broad time and length scales of polymer motion remains a challenge. Using polyethylene (PE) as a model macromolecule, we construct coarse-grained (CG) models of PE with three to six methyl groups per CG bead and probe two critical aspects of the technique: pressure corrections required after iterative Boltzmann inversion (IBI) to generate CG potentials that match the pressure of reference fully atomistic melt simulations, and the transferability of CG potentials across temperatures. While IBI produces nonbonded pair potentials that give excellent agreement between the atomistic and CG pair correlation functions, the resulting pressure for the CG models is large compared with the pressure of the atomistic system. We find that correcting the potential to match the reference pressure leads to nonbonded interactions with much deeper minima and slightly smaller effective bead diameter. However, simulations with potentials generated by IBI and pressure-corrected IBI result in similar mean-square displacements (MSDs) and stress autocorrelation functions G(t) for PE melts. While the time rescaling factor required to match CG and atomistic models is the same for pressure- and non-pressure-corrected CG models, it strongly depends on temperature. Furthermore, transferability was investigated by comparing the MSDs and stress autocorrelation functions for potentials developed at different temperatures.

  6. Coarse-grained modeling of polyethylene melts: Effect on dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, Brandon L.; Salerno, K. Michael; Agrawal, Anupriya

    The distinctive viscoelastic behavior of polymers results from a coupled interplay of motion on multiple length and time scales. Capturing the broad time and length scales of polymer motion remains a challenge. Using polyethylene (PE) as a model macromolecule, we construct coarse-grained (CG) models of PE with three to six methyl groups per CG bead and probe two critical aspects of the technique: pressure corrections required after iterative Boltzmann inversion (IBI) to generate CG potentials that match the pressure of reference fully atomistic melt simulations, and the transferability of CG potentials across temperatures. While IBI produces nonbonded pair potentials that give excellent agreement between the atomistic and CG pair correlation functions, the resulting pressure for the CG models is large compared with the pressure of the atomistic system. We find that correcting the potential to match the reference pressure leads to nonbonded interactions with much deeper minima and slightly smaller effective bead diameter. However, simulations with potentials generated by IBI and pressure-corrected IBI result in similar mean-square displacements (MSDs) and stress autocorrelation functions G(t) for PE melts. While the time rescaling factor required to match CG and atomistic models is the same for pressure- and non-pressure-corrected CG models, it strongly depends on temperature. Furthermore, transferability was investigated by comparing the MSDs and stress autocorrelation functions for potentials developed at different temperatures.

  7. Assessing climate change impacts on the rape stem weevil, Ceutorhynchus napi Gyll., based on bias- and non-bias-corrected regional climate change projections.

    PubMed

    Junk, J; Ulber, B; Vidal, S; Eickermann, M

    2015-11-01

    Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of the agricultural production is the infestation level by pests. Based on long-term field surveys and meteorological observations, a calibration of an existing model describing the migration of the pest insect Ceutorhynchus napi was possible. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the results derived from the bias-corrected output of the regional climate models showed satisfying results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.
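The two bias-correction methods named in the abstract, an additive (linear) shift for air temperature and empirical quantile mapping for precipitation, can be sketched as follows. This is a generic illustration of the techniques, not the authors' exact implementation:

```python
import numpy as np

def linear_correction(model_hist, obs_hist, model_future):
    """Additive (linear) bias correction for air temperature:
    shift projected values by the mean model-minus-observation offset
    computed over the historical period."""
    bias = np.mean(model_hist) - np.mean(obs_hist)
    return model_future - bias

def quantile_mapping(model_hist, obs_hist, model_future):
    """Empirical quantile mapping for precipitation: locate each
    projected value on the modeled historical CDF, then map that
    quantile onto the observed distribution."""
    n_m, n_o = len(model_hist), len(obs_hist)
    quantiles = np.interp(model_future, np.sort(model_hist),
                          np.linspace(0.0, 1.0, n_m))
    return np.interp(quantiles, np.linspace(0.0, 1.0, n_o),
                     np.sort(obs_hist))
```

A phenological model such as the one calibrated here would then be driven with the corrected series instead of the raw regional climate model output.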

  8. Assessing climate change impacts on the rape stem weevil, Ceutorhynchus napi Gyll., based on bias- and non-bias-corrected regional climate change projections

    NASA Astrophysics Data System (ADS)

    Junk, J.; Ulber, B.; Vidal, S.; Eickermann, M.

    2015-11-01

    Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of the agricultural production is the infestation level by pests. Based on long-term field surveys and meteorological observations, a calibration of an existing model describing the migration of the pest insect Ceutorhynchus napi was possible. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the results derived from the bias-corrected output of the regional climate models showed satisfying results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.

  9. A simple model for deep tissue attenuation correction and large organ analysis of Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Habte, Frezghi; Natarajan, Arutselvan; Paik, David S.; Gambhir, Sanjiv S.

    2014-03-01

    Cerenkov luminescence imaging (CLI) is an emerging, cost-effective modality that uses conventional small-animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs of high uptake such as kidney, spleen, thymus, and subcutaneous tumors in mouse models. However, CLI has limitations for deep tissue quantitative imaging, since the blue-weighted spectral characteristics of Cerenkov radiation are highly attenuated by mammalian tissue. Large organs such as the liver have also shown a higher signal due to the contribution of light emitted from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity with a priori estimated depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation. We placed radionuclide sources inside the phantom at different tissue depths and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and an Inveon microPET (Preclinical Solutions Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct CLI measurements. Using calibration factors obtained from the phantom study that convert the corrected CLI measurements to %ID/g, we obtained an average difference of less than 10% for spleen and less than 35% for liver compared to conventional PET measurements. Hence, the proposed model is capable of correcting the CLI signal to provide measurements comparable with PET data.

  10. The Accuracy and Correction of Fuel Consumption from Controller Area Network Broadcast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeffrey D; Wood, Eric W

    Fuel consumption (FC) has always been an important factor in vehicle cost. With the advent of electronically controlled engines, the controller area network (CAN) broadcasts information about engine and vehicle performance, including fuel use. However, the accuracy of the FC estimates is uncertain. In this study, the researchers first compared CAN-broadcasted FC against physically measured fuel use for three different types of trucks, which revealed the inaccuracies of CAN-broadcast fueling estimates. To match precise gravimetric fuel-scale measurements, polynomial models were developed to correct the CAN-broadcasted FC. Lastly, robustness testing of the correction models was performed. The training cycles in this section included a variety of drive characteristics, such as high speed, acceleration, idling, and deceleration. The mean relative differences were reduced noticeably.
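A polynomial correction of the kind described can be sketched with NumPy: fit a low-order polynomial mapping CAN readings onto scale measurements, then apply it to new CAN data. The paired readings below are invented for illustration and do not come from the study's trucks:

```python
import numpy as np

# Hypothetical paired samples over one drive cycle (arbitrary units):
# CAN-broadcast fuel use vs. gravimetric fuel-scale measurement.
can_fc   = np.array([2.1, 3.4, 5.0, 6.2, 8.1, 9.7])
scale_fc = np.array([2.0, 3.1, 4.6, 5.9, 7.9, 9.6])

# Fit a 2nd-order polynomial mapping CAN readings to scale readings.
coeffs  = np.polyfit(can_fc, scale_fc, deg=2)
correct = np.poly1d(coeffs)

corrected = correct(can_fc)
mean_rel_diff_before = np.mean(np.abs(can_fc - scale_fc) / scale_fc)
mean_rel_diff_after  = np.mean(np.abs(corrected - scale_fc) / scale_fc)
```

The robustness test in the abstract corresponds to evaluating `correct` on drive cycles held out of the fit.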

  11. On the impact of power corrections in the prediction of B → K*μ+μ− observables

    NASA Astrophysics Data System (ADS)

    Descotes-Genon, Sébastien; Hofer, Lars; Matias, Joaquim; Virto, Javier

    2014-12-01

    The recent LHCb angular analysis of the exclusive decay B → K*μ+μ− has indicated significant deviations from the Standard Model expectations. Accurate predictions can be achieved at large K*-meson recoil for an optimised set of observables designed to have no sensitivity to hadronic input in the heavy-quark limit at leading order in α_s. However, hadronic uncertainties reappear through non-perturbative Λ_QCD/m_b power corrections, which must be assessed precisely. In the framework of QCD factorisation we present a systematic method to include factorisable power corrections and point out that their impact on angular observables depends on the scheme chosen to define the soft form factors. Associated uncertainties are found to be under control, contrary to earlier claims in the literature. We also discuss the impact of possible non-factorisable power corrections, including an estimate of charm-loop effects. We provide results for angular observables at large recoil for two different sets of inputs for the form factors, spelling out the different sources of theoretical uncertainties. Finally, we comment on a recent proposal to explain the anomaly in B → K*μ+μ− observables through charm-resonance effects, and we propose strategies to test this proposal, identifying observables and kinematic regions where either the charm-loop model can be disentangled from New Physics effects or the two options leave different imprints.

  12. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: Permeability and diffusivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; Zhang, Lei; Kang, Qinjun

    Here, porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. We find that for the wide pressure range investigated, the correction factor is always greater than 1, indicating Knudsen diffusion always plays a role in shale gas transport mechanisms in the reconstructed shales. Specifically, we found that most of the values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed.

  13. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: Permeability and diffusivity

    DOE PAGES

    Chen, Li; Zhang, Lei; Kang, Qinjun; ...

    2015-01-28

    Here, porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. We find that for the wide pressure range investigated, the correction factor is always greater than 1, indicating Knudsen diffusion always plays a role in shale gas transport mechanisms in the reconstructed shales. Specifically, we found that most of the values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed.

  14. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: permeability and diffusivity

    PubMed Central

    Chen, Li; Zhang, Lei; Kang, Qinjun; Viswanathan, Hari S.; Yao, Jun; Tao, Wenquan

    2015-01-01

    Porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. For the wide pressure range investigated, the correction factor is always greater than 1, indicating Knudsen diffusion always plays a role in shale gas transport mechanisms in the reconstructed shales. Specifically, we found that most of the values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed. PMID:25627247
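The abstract does not specify which empirical correlation the LBM-derived correction factor was compared against. A widely used choice for a Knudsen-number-dependent apparent-permeability correction is the Beskok–Karniadakis form (with Civan's fit for the rarefaction coefficient), sketched here purely as an illustration:

```python
import math

def beskok_karniadakis(kn):
    """Apparent/intrinsic permeability ratio as a function of Knudsen
    number Kn (Beskok-Karniadakis correlation, rarefaction coefficient
    from Civan's fit). Returns the correction factor k_app / k_inf."""
    alpha = (128.0 / (15.0 * math.pi ** 2)) * math.atan(4.0 * kn ** 0.4)
    return (1.0 + alpha * kn) * (1.0 + 4.0 * kn / (1.0 + kn))

# The factor approaches 1 in the continuum limit and grows with Kn,
# consistent with the abstract's finding that it is always > 1.
factors = {kn: beskok_karniadakis(kn) for kn in (1e-4, 0.01, 0.1, 1.0)}
```

Kn between roughly 0.001 and 0.1 corresponds to the slip regime and 0.1 to 10 to the transition regime, the ranges where the abstract reports most values fall.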

  15. Propulsion of a fin whale (Balaenoptera physalus): why the fin whale is a fast swimmer.

    PubMed

    Bose, N; Lien, J

    1989-07-22

    Measurements of an immature fin whale (Balaenoptera physalus), which died as a result of entrapment in fishing gear near Frenchmans Cove, Newfoundland (47 degrees 9' N, 55 degrees 25' W), were made to obtain estimates of volume and surface area of the animal. Detailed measurements of the flukes, both planform and sections, were also obtained. A strip theory was developed to calculate the hydrodynamic performance of the whale's flukes as an oscillating propeller. This method is based on linear, two-dimensional, small-amplitude, unsteady hydrofoil theory with correction factors used to account for the effects of finite span and finite amplitude motion. These correction factors were developed from theoretical results of large-amplitude heaving motion and unsteady lifting-surface theory. A model that makes an estimate of the effects of viscous flow on propeller performance was superimposed on the potential-flow results. This model estimates the drag of the hydrofoil sections by assuming that the drag is similar to that of a hydrofoil section in steady flow. The performance characteristics of the flukes of the fin whale were estimated by using this method. The effects of the different correction factors, and of the frictional drag of the fluke sections, are emphasized. Frictional effects in particular were found to reduce the hydrodynamic efficiency of the flukes significantly. The results are discussed and compared with the known characteristics of fin-whale swimming.

  16. Analysis on trust influencing factors and trust model from multiple perspectives of online Auction

    NASA Astrophysics Data System (ADS)

    Yu, Wang

    2017-10-01

    Current reputation models lack research on online auction trading, so they cannot fully reflect users' reputation status and may raise problems of operability. To evaluate user trust in online auctions correctly, a trust computing model based on multiple influencing factors is established. It aims to overcome the inefficiency of current trust computing methods and the limitations of traditional theoretical trust models. The improved model comprehensively considers the trust evaluation factors of three types of participants, according to the different participation modes of online auctioneers, to improve the accuracy, effectiveness, and robustness of the trust degree. Experiments test the efficiency and performance of the model under different scales of malicious users, in environments like eBay and the Sporas model. Analysis of the experimental results shows that the proposed model makes up for the deficiencies of existing models and offers better feasibility.

  17. Systematics of first 2+ state g factors around mass 80

    NASA Astrophysics Data System (ADS)

    Mertzimekis, T. J.; Stuchbery, A. E.; Benczer-Koller, N.; Taylor, M. J.

    2003-11-01

    The systematics of the first 2+ state g factors in the mass 80 region are investigated in terms of an IBM-II analysis, a pairing-corrected geometrical model, and a shell-model approach. Subshell closure effects at N=38 and overall trends were examined using IBM-II. A large-space shell-model calculation was successful in describing the behavior for N=48 and N=50 nuclei, where single-particle features are prominent. A schematic truncated-space calculation was applied to the lighter isotopes. The variations of the effective boson g factors are discussed in connection with the role of F-spin breaking, and comparisons are made between the mass 80 and mass 180 regions.

  18. Spectroscopic Factors from the Single Neutron Pickup Reaction ^64Zn(d,t)

    NASA Astrophysics Data System (ADS)

    Leach, Kyle; Garrett, P. E.; Ball, G. C.; Bangay, J. C.; Bianco, L.; Demand, G. A.; Faestermann, T.; Finlay, P.; Green, K. L.; Hertenberger, R.; Krücken, R.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wirth, H.-F.; Wong, J.

    2009-10-01

    A great deal of attention has recently been paid towards high-precision superallowed β-decay Ft values. With the availability of extremely high-precision (<0.1%) experimental data, the precision of the individual Ft values is now dominated by the ~1% theoretical corrections^[1]. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga), where the isospin-symmetry-breaking (ISB) correction calculations become more difficult due to the truncated model space. Experimental spectroscopic factors for these nuclei are important for the identification of the relevant orbitals that should be included in the model space of the calculations. Motivated by this need, the single-nucleon transfer reaction ^64Zn(d,t)^63Zn was conducted at the Maier-Leibnitz-Laboratory (MLL) of TUM/LMU in Munich, Germany, using a 22 MeV polarized deuteron beam from the tandem Van de Graaff accelerator and the TUM/LMU Q3D magnetic spectrograph, with angular distributions from 10° to 60°. Results from this experiment will be presented and implications for calculations of ISB corrections in the superallowed β+ decay of ^62Ga will be discussed. ^[1] I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008).

  19. Monte Carlo study of Si diode response in electron beams.

    PubMed

    Wang, Lilie L W; Rogers, David W O

    2007-05-01

    Silicon semiconductor diodes measure almost the same depth-dose distributions in both photon and electron beams as those measured by ion chambers. A recent study in ion chamber dosimetry has suggested that the wall correction factor for a parallel-plate ion chamber in electron beams changes with depth by as much as 6%. To investigate diode detector response with respect to depth, a silicon diode model is constructed and the water/silicon dose ratio at various depths in electron beams is calculated using EGSnrc. The results indicate that, for this particular diode model, the diode response per unit water dose (or water/diode dose ratio) in both 6 and 18 MeV electron beams is flat within 2% versus depth, from near the phantom surface to the depth of R50 (with calculation uncertainty <0.3%). This suggests that there must be some other correction factors for ion chambers that counter-balance the large wall correction factor at depth in electron beams. In addition, the beam quality and field-size dependence of the diode model are also calculated. The results show that the water/diode dose ratio remains constant within 2% over the electron energy range from 6 to 18 MeV. The water/diode dose ratio does not depend on field size as long as the incident electron beam is broad and the electron energy is high. However, for a very small beam size (1 × 1 cm^2) and low electron energy (6 MeV), the water/diode dose ratio may decrease by more than 2% compared to that of a broad beam.

  20. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area ... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) uncorrected average of readings, −3 dB(A) distance correction factor, +2 dB(A) ground surface correction factor, giving an 87 dB(A) corrected reading.
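The regulation's worked example is a purely additive correction, which can be expressed directly:

```python
def corrected_reading(avg_db, distance_corr_db, ground_corr_db):
    """Apply additive noise-measurement correction factors, in dB(A)."""
    return avg_db + distance_corr_db + ground_corr_db

# Example from the regulation: 88 dB(A) uncorrected average,
# -3 dB(A) distance correction, +2 dB(A) ground surface correction.
reading = corrected_reading(88, -3, 2)  # 87 dB(A)
```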

  1. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area ... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) uncorrected average of readings, −3 dB(A) distance correction factor, +2 dB(A) ground surface correction factor, giving an 87 dB(A) corrected reading.

  2. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area ... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) uncorrected average of readings, −3 dB(A) distance correction factor, +2 dB(A) ground surface correction factor, giving an 87 dB(A) corrected reading.

  3. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

    MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain bias correction of the MODIS AOD using multiple factors including surface reflectivity at 2130 nm, sun-view geometrical factors, and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.

  4. New model for burnout prediction in channels of various cross-section

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobkov, V.P.; Kozina, N.V.; Vinogrado, V.N.

    1995-09-01

    The model developed to predict critical heat flux (CHF) in various channels is presented together with the results of a data analysis. The model is a realization of the relative method of CHF description, based on data for a round tube and on a system of correction factors. Results of the data description are presented for rectangular and triangular channels, annuli, and rod bundles.
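The relative method described can be sketched as a chain of multiplicative corrections applied to a round-tube CHF value. The factor names and numbers below are hypothetical, chosen only to illustrate the structure:

```python
def chf_channel(chf_round_tube, correction_factors):
    """Relative method: predict CHF in a non-circular channel from
    round-tube reference data and a product of correction factors
    (e.g. for geometry, gap, spacers -- illustrative names only)."""
    result = chf_round_tube
    for k in correction_factors:
        result *= k
    return result

# Hypothetical annulus: round-tube CHF of 2.5e6 W/m^2,
# geometry factor 0.9, gap factor 1.05.
chf = chf_channel(2.5e6, [0.9, 1.05])
```

With an empty factor list the method reduces to the round-tube prediction itself.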

  5. SU-F-T-69: Correction Model of NIPAM Gel and Presage for Electron and Proton PDD Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, C; Lin, C; Tu, P

    Purpose: The current standard equipment for proton PDD measurement is the multilayer parallel ion chamber, whose disadvantages are its expense and complex operation. NIPAM gel and Presage are options for PDD measurement, but because of their different stopping powers their results need to be corrected. This study aims to create a correction model for NIPAM-gel and Presage PDD measurement. Methods: Standard water-based PDD profiles for 6 MeV electrons, 12 MeV electrons, and 90 MeV protons were acquired. The electron PDD profile with 1 cm of NIPAM gel added on top of the water was measured, as was the electron PDD profile with an extra 1 cm of solid water (PTW RW3). The distance shifts among the standard, NIPAM-gel, and solid-water PDDs at R50% were compared, and a water equivalent thickness correction factor (WET) was calculated. The same process was repeated to obtain WETs for electrons with Presage, protons with NIPAM gel, and protons with Presage. PDD profiles of electrons and protons with NIPAM-gel and Presage columns were corrected with each WET, and the corrected profiles were compared with the standard profiles. Results: The WET for 12 MeV electrons with NIPAM gel was 1.135, and 1.034 for 12 MeV electrons with Presage. After correction, the PDD profiles matched the standard profile well in the fall-off range; the differences at R50% were 0.26 mm shallower and 0.39 mm deeper, respectively. The same WET was used to correct the 6 MeV electron profile, and energy independence of the electron WET was observed; the differences at R50% were 0.17 mm deeper for NIPAM gel and 0.54 mm deeper for Presage. The WET for 90 MeV protons with NIPAM gel was 1.056, with a difference at R50% of 0.37 mm deeper. A quenching effect at the Bragg peak was revealed, with the dose there underestimated by 27%. Conclusion: This correction model can be used to correct PDD profiles to within a 1 mm depth error. With this correction model, NIPAM gel and Presage can be practical for PDD profile measurement.
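One plausible reading of the WET correction, scaling a depth measured through the gel or Presage material into water-equivalent depth by the reported factor, can be sketched as follows. The interpretation (a simple multiplicative depth scaling) and the usage are assumptions for illustration, not the authors' stated procedure:

```python
import numpy as np

def correct_depth(measured_depth_cm, wet_factor):
    """Convert a depth measured through a dosimeter material into
    water-equivalent depth via the water equivalent thickness (WET)
    correction factor derived from the R50% shift."""
    return measured_depth_cm * wet_factor

# Using the abstract's reported factor for NIPAM gel with 12 MeV electrons:
depths_in_gel = np.array([1.0, 2.0, 3.0])          # cm of gel traversed
water_equiv = correct_depth(depths_in_gel, 1.135)  # cm of water
```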

  6. Patient-specific Distraction Regimen to Avoid Growth-rod Failure.

    PubMed

    Agarwal, Aakash; Jayaswal, Arvind; Goel, Vijay K; Agarwal, Anand K

    2018-02-15

    A finite element study to establish the relationship between a patient's curve flexibility (determined using curve correction under gravity) in juvenile idiopathic scoliosis and the required distraction frequency to avoid growth rod fracture, as a function of time. To perform a parametric analysis using a juvenile scoliotic spine model (single mid-thoracic curve with the apex at the eighth thoracic vertebra) and establish the relationship between curve flexibility (determined using curve correction under gravity) and the distraction interval that allows a higher factor of safety for the growth rods. Previous studies have shown that frequent distractions of smaller magnitude are less likely to result in rod failure. However, no methodology or chart has been provided to apply this knowledge to the individual patients who undergo the treatment. This study aims to fill that gap. The parametric study was performed by varying the material properties of the disc, hence altering the axial stiffness of the scoliotic spine model. The stresses on the rod were found to increase with increased axial stiffness of the spine, and this resulted in an increase of the required optimal frequency to achieve a factor of safety of two for the growth rods. A relationship between the percentage correction in Cobb's angle due to gravity alone and the required distraction interval for limiting the maximum von Mises stress on the growth rods to 255 MPa was established. The distraction interval required to limit the stresses to the selected nominal value decreases with increasing stiffness of the spine. Furthermore, the appropriate distraction interval decreases for each model as the spine becomes stiffer with time (autofusion). This points to the fact that the optimal distraction frequency is a time-dependent variable that must be achieved to keep the maximum von Mises stress under the specified factor of safety.
The current study demonstrates the possibility of translating fundamental information from finite element modeling to the clinical arena, for mitigating the occurrence of growth rod fracture, that is, establishing a relationship between optimal distraction interval and curve flexibility (determined using curve correction under gravity). N/A.

  7. Genetic Factors Affecting Late-Onset Alzheimer's Disease Susceptibility.

    PubMed

    Rezazadeh, Maryam; Khorrami, Aziz; Yeghaneh, Tarlan; Talebi, Mahnaz; Kiani, Seyed Jalal; Heshmati, Yaser; Gharesouran, Jalal

    2016-03-01

    Alzheimer's disease is considered a progressive brain disease in the older population. Late-onset Alzheimer's disease (LOAD), a multifactorial dementia, has a polygenic inheritance. Age, environment, and lifestyle, along with a growing number of genetic factors, have been reported as risk factors for LOAD. Our aim was to present results of LOAD association studies conducted in northwestern Iran, and we also explored possible interactions with apolipoprotein E (APOE) status. We re-evaluated the association of these markers in dominant, recessive, and additive models. In all, 160 LOAD and 163 healthy control subjects of Azeri Turkish ethnicity were studied. The Chi-square test with Yates' correction and Fisher's exact test were used for statistical analysis. A Bonferroni-corrected p value, based on the number of statistical tests, was considered significant. Our results confirmed that chemokine receptor type 2 (CCR2), estrogen receptor 1 (ESR1), toll-like receptor 2 (TLR2), tumor necrosis factor alpha (TNF α), APOE, bridging integrator 1 (BIN1), and phosphatidylinositol-binding clathrin assembly protein (PICALM) are LOAD susceptibility loci in Azeri Turk ancestry populations. Among them, variants of CCR2, ESR1, TNF α, and APOE revealed associations in three different genetic models. After adjusting for APOE, the association (both allelic and genotypic) with CCR2, BIN1, and ESRα (PvuII) was evident only among subjects without the APOE ε4, whereas the association with CCR5, without Bonferroni correction, was significant only among subjects carrying the APOE ε4 allele. This result is evidence of a synergistic and antagonistic effect of APOE on variant associations with LOAD.

  8. Implications of PSR J0737-3039B for the Galactic NS-NS binary merger rate

    NASA Astrophysics Data System (ADS)

    Kim, Chunglee; Perera, Benetge Bhakthi Pranama; McLaughlin, Maura A.

    2015-03-01

    The Double Pulsar (PSR J0737-3039) is the only neutron star-neutron star (NS-NS) binary in which both NSs have been detectable as radio pulsars. The Double Pulsar has been assumed to dominate the Galactic NS-NS binary merger rate R_g among all known systems, solely based on the properties of the first-born, recycled pulsar (PSR J0737-3039A, or A) with an assumed beaming correction factor of 6. In this work, we carefully correct observational biases for the second-born, non-recycled pulsar (PSR J0737-3039B, or B) and estimate the contribution from the Double Pulsar to R_g using constraints available from both A and B. Observational constraints from the B pulsar favour a small beaming correction factor for A (˜2), which is consistent with a bipolar model. Considering known NS-NS binaries with the best observational constraints, including both A and B, we obtain R_g=21_{-14}^{+28} Myr-1 at 95 per cent confidence from our reference model. We expect the detection rate of gravitational waves from NS-NS inspirals for the advanced ground-based gravitational-wave detectors to be 8^{+10}_{-5} yr-1 at 95 per cent confidence. Within several years, gravitational-wave detections relevant to NS-NS inspirals will provide us useful information to improve pulsar population models.
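    As a rough sketch of how a beaming correction factor enters this kind of estimate: the rate inferred from observed pulsars scales linearly with the assumed beaming factor. The reference rate below is a placeholder; only the factors 6 and ~2 come from the abstract.

```python
def scaled_rate(rate_ref: float, f_beam_ref: float, f_beam_new: float) -> float:
    """Rescale a pulsar-based merger-rate estimate to a different beaming factor."""
    return rate_ref * f_beam_new / f_beam_ref

# Moving from the historical assumption (f_b = 6) to the value favoured by the
# B-pulsar constraints (f_b ~ 2) cuts the A-based contribution threefold.
print(scaled_rate(30.0, 6.0, 2.0))  # 10.0
```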

  9. Factors Leading to Persistent Postsurgical Pain in Adolescents Undergoing Spinal Fusion: An Integrative Literature Review.

    PubMed

    Perry, Mallory; Starkweather, Angela; Baumbauer, Kyle; Young, Erin

    Adolescent idiopathic scoliosis (AIS) is the most common spinal deformity among children and adolescents and the most frequent reason for corrective spinal fusion (SF). Of the children and adolescents who undergo SF, a significant number will experience persistent postoperative pain (PPP). This integrative literature review was conducted to identify and synthesize perioperative factors that may contribute to the risk of developing PPP. Articles that addressed PPP within the last 10 years and primary research on postoperative pain outcomes in adolescents after SF were selected for review. Fifteen articles that met the eligibility criteria were included. Preoperative pain intensity was the most significant factor identified in the development of PPP and increased postoperative pain. Social function and psychological factors also have a role in the development of PPP. There were no theoretical models or frameworks for evaluating PPP incidence in adolescents with AIS after SF. Perioperative factors such as preoperative pain, correction magnitude, pain coping, anxiety, and social functioning are vital to understanding a child's risk of PPP following SF. There is a need for theoretically based studies to assess PPP among children and adolescents with AIS after SF surgery. The Biobehavioral Pain Network (BPN) model was proposed to encompass the biological, social, and psychological domains that may be responsible for the incidence of PPP in children undergoing SF. Such a model can be used to systematically develop and evaluate personalized postoperative pain management strategies for this patient population. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Assessment of noise exposure for basketball sports referees.

    PubMed

    Masullo, Massimiliano; Lenzuni, Paolo; Maffei, Luigi; Nataletti, Pietro; Ciaburro, Giuseppe; Annesi, Diego; Moschetto, Antonio

    2016-01-01

    Dosimetric measurements carried out on basketball referees have shown that whistles not only generate very high peak sound pressure levels, but also play a relevant role in determining the overall exposure to noise of the exposed subjects. Because of the peculiar geometry determined by the mutual positions of the whistle, the microphone, and the ear, experimental data cannot be directly compared with existing occupational noise exposure and/or action limits. In this article, an original methodology, which allows experimental results to be reliably compared with the aforementioned limits, is presented. The methodology is based on the use of two correction factors to compensate for the effects of the position of the dosimeter microphone (fR) and of the sound source (fS). Correction factors were calculated by means of laboratory measurements for two models of whistles (Fox 40 Classic and Fox 40 Sonik) and for two head orientations (frontal and oblique). Results show that for peak sound pressure levels the values of fR and fS are in the range -8.3 to -4.6 dB and -6.0 to -1.7 dB, respectively. If one considers the Sound Exposure Levels (SEL) of whistle events, the same correction factors are in the range of -8.9 to -5.3 dB and -5.4 to -1.5 dB, respectively. The application of these correction factors shows that the corrected weekly noise exposure level for referees is 80.6 dB(A), which is slightly in excess of the lower action limit of the 2003/10/EC directive, and a few dB below the Recommended Exposure Limit (REL) proposed by the National Institute for Occupational Safety and Health (NIOSH). The corrected largest peak sound pressure level is 134.7 dB(C), which is comparable to the lower action limit of the 2003/10/EC directive, but again substantially lower than the ceiling limit of 140 dB(A) set by NIOSH.
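    The dB-domain corrections described in this record amount to a simple additive adjustment of the measured level, sketched below. The measured SEL and the chosen fR/fS values are illustrative assumptions; only the quoted correction-factor ranges come from the abstract.

```python
def corrected_level(measured_db: float, f_r: float, f_s: float) -> float:
    """Apply dB-domain correction factors: they add directly to the measured level."""
    return measured_db + f_r + f_s

# Example: a whistle SEL reading with mid-range corrections applied.
sel_measured = 120.0   # dB, illustrative reading
f_r, f_s = -7.0, -3.5  # within the SEL ranges quoted (-8.9..-5.3 and -5.4..-1.5 dB)
print(corrected_level(sel_measured, f_r, f_s))  # 109.5
```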

  11. NNLO QCD corrections to production of a spin-2 particle with nonuniversal couplings in the Drell-Yan process

    NASA Astrophysics Data System (ADS)

    Banerjee, Pulak; Dhani, Prasanna K.; Kumar, M. C.; Mathews, Prakash; Ravindran, V.

    2018-05-01

    We study the phenomenological impact of the interaction of spin-2 fields with those of the Standard Model in a model independent framework up to next-to-next-to-leading order in perturbative quantum chromodynamics. We use the invariant mass distribution of the pair of leptons produced at the Large Hadron Collider to demonstrate this. A minimal scenario in which the spin-2 fields couple to two gauge invariant operators with different coupling strengths has been considered. Because these operators are not conserved, they show very different ultraviolet behavior, broadening the search options for spin-2 particles at the colliders. We find that the higher order quantum corrections stabilize the predictions with respect to renormalization and factorization scales. We also find that the corrections are appreciable and need to be taken into account in such searches at the colliders.

  12. Comparison of Atmospheric Parameters Derived from In-Situ and Hyper-/Multispectral Remote Sensing Data of Beautiful Bavarian Lakes

    NASA Astrophysics Data System (ADS)

    Riedel, S.; Gege, P.; Schneider, M.; Pfug, B.; Oppelt, N.

    2016-08-01

    Atmospheric correction is a critical step and can be a limiting factor in the extraction of aquatic ecosystem parameters from remote sensing data of coastal and lake waters. Atmospheric correction models commonly in use for open ocean water and land surfaces can lead to large errors when applied to hyperspectral images taken from satellite or aircraft. The main problems arise from uncertainties in aerosol parameters and neglecting the adjacency effect, which originates from multiple scattering of upwelling radiance from the surrounding land. To better understand the challenges for developing an atmospheric correction model suitable for lakes, we compare atmospheric parameters derived from Sentinel-2A and airborne hyperspectral data (HySpex) of two Bavarian lakes (Klostersee, Lake Starnberg) with in-situ measurements performed with RAMSES and Ibsen spectrometer systems and a Microtops sun photometer.

  13. Inhaler technique maintenance: gaining an understanding from the patient's perspective.

    PubMed

    Ovchinikova, Ludmila; Smith, Lorraine; Bosnic-Anticevich, Sinthia

    2011-08-01

    The aim of this study was to determine the patient-, education-, and device-related factors that predict inhaler technique maintenance. Thirty-one community pharmacists were trained to deliver inhaler technique education to people with asthma. Pharmacists evaluated (based on published checklists), and where appropriate, delivered inhaler technique education to patients (participants) in the community pharmacy at baseline (Visit 1) and 1 month later (Visit 2). Data were collected on participant demographics, asthma history, current asthma control, history of inhaler technique education, and a range of psychosocial aspects of disease management (including adherence to medication, motivation for correct technique, beliefs regarding the importance of maintaining correct technique, and necessity and concern beliefs regarding preventer therapy). Stepwise backward logistic regression was used to identify the predictors of inhaler technique maintenance at 1 month. In total, 145 and 127 participants completed Visits 1 and 2, respectively. At baseline, 17% of patients (n = 24) demonstrated correct technique (score 11/11), which increased to 100% (n = 139) after remedial education by pharmacists. At follow-up, 61% (n = 77) of patients demonstrated correct technique. The predictors of inhaler technique maintenance based on the logistic regression model (χ²(3, N = 125) = 16.22, p = .001) were use of a dry powder inhaler over a pressurized metered-dose inhaler (OR 2.6), having better asthma control at baseline (OR 2.3), and being more motivated to practice correct inhaler technique (OR 1.2). Contrary to what is typically recommended in previous research, correct inhaler technique maintenance may involve more than repetition of instructions. This study found that past technique education factors had no bearing on technique maintenance, whereas patient psychosocial factors (motivation) did.
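    The reported odds ratios combine multiplicatively on the odds scale, as in any fitted logistic model; a sketch follows. The baseline odds value is an illustrative assumption, not a figure from the study.

```python
def maintained_probability(baseline_odds: float, *odds_ratios: float) -> float:
    """Multiply baseline odds by each predictor's odds ratio, then convert to probability."""
    odds = baseline_odds
    for ratio in odds_ratios:
        odds *= ratio
    return odds / (1.0 + odds)

# DPI user (OR 2.6), better baseline control (OR 2.3), higher motivation (OR 1.2):
print(round(maintained_probability(0.5, 2.6, 2.3, 1.2), 3))  # 0.782
```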

  14. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy.

    PubMed

    Gurka, Matthew J; Kuperminc, Michelle N; Busby, Marjorie G; Bennis, Jacey A; Grossberg, Richard I; Houlihan, Christine M; Stevenson, Richard D; Henderson, Richard C

    2010-02-01

    To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I-V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. Slaughter's equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat -9.6% [SD 6.2]; 95% confidence interval [CI] -11.0 to -8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2% [SD 4.8]; 95% CI -1.0 to 1.3) than existing equations. A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP.
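    The correction approach in this record can be sketched as an additive offset on a generic two-skinfold equation. The slope and intercept below are placeholders, not the published Slaughter or Gurka et al. coefficients; only the ~9.6 offset mirrors the mean underestimate reported above.

```python
def skinfold_percent_fat(triceps_mm: float, subscapular_mm: float,
                         slope: float = 0.7, intercept: float = 1.0) -> float:
    """Generic linear two-skinfold equation (placeholder coefficients)."""
    return slope * (triceps_mm + subscapular_mm) + intercept

def corrected_percent_fat(triceps_mm: float, subscapular_mm: float,
                          correction: float = 9.6) -> float:
    """Add an offset to counter the mean underestimate seen against DXA."""
    return skinfold_percent_fat(triceps_mm, subscapular_mm) + correction

print(round(corrected_percent_fat(10.0, 8.0), 1))  # 0.7*18 + 1.0 + 9.6 = 23.2
```

In the actual study the offset depends on sex, race, GMFCS level, size, and pubertal status rather than being a single constant.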

  15. Study on SOC wavelet analysis for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    Improving the prediction accuracy of SOC can reduce the complexity of conservative control strategies for the scheduling, optimization, and planning of LiFePO4 battery systems. Based on an analysis of the relationship between historical SOC data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A high-precision wavelet neural network prediction model implements the forecast step, while measured external stress data are used to update the parameter estimates in the model, implementing the correction step; this allows the forecast model to adapt as the operating point of the LiFePO4 battery varies under rated charge and discharge conditions. The test results show that the method yields a high-precision prediction model even when the input and output of the LiFePO4 battery change frequently.
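    A minimal sketch of such an estimation-correction loop, with a coulomb-counting stand-in for the wavelet-network forecast step; all numbers and the blend gain are illustrative assumptions, not the paper's model.

```python
def forecast(soc_prev: float, current_a: float, dt_h: float,
             capacity_ah: float = 100.0) -> float:
    """Forecast step: coulomb-counting stand-in for the wavelet-network predictor."""
    return soc_prev - current_a * dt_h / capacity_ah

def correct(soc_pred: float, soc_measured: float, gain: float = 0.3) -> float:
    """Correction step: blend the prediction toward the measurement-derived estimate."""
    return soc_pred + gain * (soc_measured - soc_pred)

soc = 0.9
soc = forecast(soc, current_a=20.0, dt_h=0.5)  # 0.9 - 0.1 = 0.8
soc = correct(soc, soc_measured=0.82)          # 0.8 + 0.3*(0.82 - 0.8)
print(round(soc, 3))  # 0.806
```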

  16. Study of the Factors Affecting the Mathematics Achievement of Turkish Students According to Data from the Programme for International Student Assessment (PISA) 2012

    ERIC Educational Resources Information Center

    Güzeller, Cem Oktay; Eser, Mehmet Taha; Aksu, Gökhan

    2016-01-01

    This study attempts to determine the factors affecting the mathematics achievement of students in Turkey based on data from the Programme for International Student Assessment 2012 and the correct classification ratio of the established model. The study used mathematics achievement as a dependent variable while sex, having a study room, preparation…

  17. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of these correction factors to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. This work presents important conclusions for the use of detector-specific beam correction factors in a treatment planning system. The use of these factors for total scatter factors has an important impact on monitor unit calculation. On the contrary, their use for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.

  18. Vitamin D and ferritin correlation with chronic neck pain using standard statistics and a novel artificial neural network prediction model.

    PubMed

    Eloqayli, Haytham; Al-Yousef, Ali; Jaradat, Raid

    2018-02-15

    Despite the high prevalence of chronic neck pain, there is limited consensus about the primary etiology, risk factors, diagnostic criteria, and therapeutic outcome. Here, we aimed to determine whether ferritin and vitamin D are modifiable risk factors associated with chronic neck pain, using standard statistics and an artificial neural network (ANN). Fifty-four patients with chronic neck pain treated between February 2016 and August 2016 in King Abdullah University Hospital and 54 age-matched controls undergoing outpatient or minor procedures were enrolled. Patient and control demographic parameters, height, weight, and a single measurement of serum vitamin D, vitamin B12, ferritin, calcium, phosphorus, and zinc were obtained. An ANN prediction model was developed. The statistical analysis reveals that patients with chronic neck pain have significantly lower serum vitamin D and ferritin (p-value <.05). 90% of patients with chronic neck pain were females. A Multilayer Feed-Forward Neural Network with Back Propagation (MFFNN) prediction model was developed and designed with vitamin D and ferritin as input variables and chronic neck pain (CNP) as output. The ANN model output results show that 92 out of 108 samples were correctly classified, an 85% classification accuracy. Although iron and vitamin D deficiency cannot be isolated as the sole risk factors for chronic neck pain, they should be considered two modifiable risk factors. The high prevalence of chronic neck pain, hypovitaminosis D, and low ferritin amongst women is of concern. Bioinformatics predictions with artificial neural networks can be of future benefit in classification and prediction models for chronic neck pain. We hope this initial work will encourage a future larger cohort study addressing vitamin D and iron correction as modifiable factors and the application of artificial intelligence models in clinical practice.
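    The reported accuracy follows directly from the confusion counts, and the forward pass of a small two-input feed-forward network like the MFFNN described can be sketched as below. The network weights are random placeholders, not the trained model.

```python
import math

# Accuracy quoted in the abstract: 92 of 108 samples correctly classified.
n_correct, n_total = 92, 108
print(f"{n_correct / n_total:.0%}")  # 85%

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def mffnn_forward(vit_d: float, ferritin: float, w_hidden, w_out) -> float:
    """One hidden layer: inputs (vitamin D, ferritin) -> CNP probability."""
    hidden = [sigmoid(w0 + w1 * vit_d + w2 * ferritin) for (w0, w1, w2) in w_hidden]
    z = w_out[0] + sum(w * h for w, h in zip(w_out[1:], hidden))
    return sigmoid(z)

# Placeholder weights for a 2-3-1 network; a trained model would fit these
# with back-propagation.
w_hidden = [(0.1, -0.02, 0.01), (-0.3, 0.05, -0.01), (0.0, 0.02, 0.02)]
w_out = (0.2, -0.5, 0.4, 0.3)
p = mffnn_forward(20.0, 15.0, w_hidden, w_out)
assert 0.0 < p < 1.0
```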

  19. Personalized pseudophakic model

    NASA Astrophysics Data System (ADS)

    Ribeiro, F.; Castanheira-Dinis, A.; Dias, J. M.

    2014-08-01

    With the aim of taking into account all optical aberrations, a personalized pseudophakic optical model was designed for refractive evaluation using ray tracing software. Starting with a generic model, all clinically measurable data were replaced by personalized measurements. Data from the corneal anterior and posterior surfaces were imported from a grid of elevation data obtained by topography, and a formula for the calculation of the intraocular lens (IOL) position was developed based on the lens equator. For the assessment of refractive error, a merit function was built that is minimized by approximating the Modulation Transfer Function values to diffraction-limit values at spatial frequencies up to the discrimination limits of the human eye, weighted according to the human contrast sensitivity function. The model was tested on the refractive evaluation of 50 pseudophakic eyes. The developed model shows good correlation with subjective evaluation of a pseudophakic population, and has the added advantage of being independent of corrective factors, allowing it to be immediately adaptable to new technological developments. In conclusion, this personalized model, which uses individual biometric values, allows for a precise refractive assessment and is a valuable tool for an accurate IOL power calculation, including in conditions to which population averages and the commonly used regression correction factors do not apply, thus achieving the goal of being both personalized and universally applicable.

  20. Large Eddy Simulation of a Film Cooling Technique with a Plenum

    NASA Astrophysics Data System (ADS)

    Dharmarathne, Suranga; Sridhar, Narendran; Araya, Guillermo; Castillo, Luciano; Parameswaran, Sivapathasund

    2012-11-01

    Factors that affect film cooling performance have been categorized into three main groups: (i) coolant & mainstream conditions, (ii) hole geometry & configuration, and (iii) airfoil geometry (Bogard et al., 2006). The present study focuses on the second group of factors, namely, the modeling of the coolant hole and the plenum. It is necessary to simulate the correct physics of the problem to achieve more realistic numerical results. In this regard, modeling of the cooling jet hole and the plenum chamber is highly important (Iourokina et al., 2006). Substituting artificial boundary conditions for a correct plenum design would yield unrealistic results (Iourokina et al., 2006). This study attempts to model the film cooling technique with a plenum using a Large Eddy Simulation. An incompressible coolant jet ejects to the surface of the plate at an angle of 30°, where it meets a compressible turbulent boundary layer that simulates the turbine inflow conditions. A dynamic multi-scale approach (Araya, 2011) is introduced to prescribe turbulent inflow conditions. Simulations are carried out for two different blowing ratios, and film cooling effectiveness is calculated for both cases. Results obtained from LES will be compared with experimental results.

  1. Measurement accuracy of FBG used as a surface-bonded strain sensor installed by adhesive.

    PubMed

    Xue, Guangzhe; Fang, Xinqiu; Hu, Xiukun; Gong, Libin

    2018-04-10

    Material and dimensional properties of surface-bonded fiber Bragg gratings (FBGs) can distort strain measurement, thereby lowering the measurement accuracy. To accurately assess measurement precision and correct the obtained strain, a new model, considering reinforcement effects on the adhesive and the measured object, is proposed in this study and is verified numerically to be sufficiently accurate. Meanwhile, a theoretical strain correction factor is obtained, which is shown to be significantly sensitive to the recoating material and bonding length, as suggested by numerical and experimental results. It is also concluded that a short grating length as well as a thin but large-area (preferably covering the whole FBG) adhesive can enhance the correction precision.
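    As a sketch of the correction factor's role in this record: the host-structure strain is recovered by dividing the FBG-indicated strain by the strain-transfer (correction) factor. The factor value and reading below are illustrative assumptions, not the paper's theoretical result.

```python
def corrected_strain(measured_microstrain: float, transfer_factor: float) -> float:
    """A transfer factor < 1 means the grating sees less strain than the host."""
    return measured_microstrain / transfer_factor

# If the bonded FBG indicates 450 microstrain and 90% of host strain transfers:
print(round(corrected_strain(450.0, 0.9), 1))  # 500.0
```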

  2. Advanced corrections for InSAR using GPS and numerical weather models

    NASA Astrophysics Data System (ADS)

    Foster, J. H.; Cossu, F.; Amelung, F.; Businger, S.; Cherubini, T.

    2016-12-01

    The complex spatial and temporal changes in the atmospheric propagation delay of the radar signal remain the single biggest factor limiting Interferometric Synthetic Aperture Radar's (InSAR) potential for hazard monitoring and mitigation. A new generation of InSAR systems is being built and launched, and optimizing the science and hazard applications of these systems requires advanced methodologies to mitigate tropospheric noise. We present preliminary results from an investigation into the application of GPS and numerical weather models for generating tropospheric correction fields. We use the Weather Research and Forecasting (WRF) model to generate a 900 m spatial resolution atmospheric model covering the Big Island of Hawaii and an even higher, 300 m resolution grid over Mauna Loa and Kilauea volcanoes. By comparing a range of approaches, from the simplest, using reanalyses based on typically available meteorological observations, through to the "kitchen-sink" approach of assimilating all relevant data sets into our custom analyses, we examine the impact of the additional data sets on the atmospheric models and their effectiveness in correcting InSAR data. We focus particularly on the assimilation of information from the more than 60 GPS sites on the island. We ingest zenith tropospheric delay estimates from these sites directly into the WRF analyses, and also perform double-difference tomography using the phase residuals from the GPS processing to robustly incorporate information on atmospheric heterogeneity from the GPS data into the models. We assess our performance through comparisons of our atmospheric models with external observations not ingested into the model, and through the effectiveness of the derived phase screens in reducing InSAR variance. 
This work will produce best-practice recommendations for the use of weather models for InSAR correction, and inform efforts to design a global strategy for the NISAR mission, for both low-latency and definitive atmospheric correction products.

  3. Decision Making Configurations: An Alternative to the Centralization/Decentralization Conceptualization.

    ERIC Educational Resources Information Center

    Cullen, John B.; Perrewe, Pamela L.

    1981-01-01

    Used factors identified in the literature as predictors of centralization/decentralization as potential discriminating variables among several decision making configurations in university affiliated professional schools. The model developed from multiple discriminant analysis had reasonable success in classifying correctly only the decentralized…

  4. Measuring and modeling the interaction among reward size, delay to reward, and satiation level on motivation in monkeys.

    PubMed

    Minamimoto, Takafumi; La Camera, Giancarlo; Richmond, Barry J

    2009-01-01

    Motivation is usually inferred from the likelihood or the intensity with which behavior is carried out. It is sensitive to external factors (e.g., the identity, amount, and timing of a rewarding outcome) and internal factors (e.g., hunger or thirst). We trained macaque monkeys to perform a nonchoice instrumental task (a sequential red-green color discrimination) while manipulating two external factors: reward size and delay-to-reward. We also inferred the state of one internal factor, level of satiation, by monitoring the accumulated reward. A visual cue indicated the forthcoming reward size and delay-to-reward in each trial. The fraction of trials completed correctly by the monkeys increased linearly with reward size and was hyperbolically discounted by delay-to-reward duration, relations that are similar to those found in free operant and choice tasks. The fraction of correct trials also decreased progressively as a function of the satiation level. Similar (albeit noisier) relations were obtained for reaction times. The combined effect of reward size, delay-to-reward, and satiation level on the proportion of correct trials is well described as a multiplication of the effects of the single factors when each factor is examined alone. These results provide a quantitative account of the interaction of external and internal factors on instrumental behavior, and allow us to extend the concept of subjective value of a rewarding outcome, usually confined to external factors, to account also for slow changes in the internal drive of the subject.
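    The multiplicative description above can be sketched as a product of a linear reward term, a hyperbolic delay discount, and a satiation term. Every coefficient below is an illustrative placeholder, not a fitted value from the study.

```python
def p_correct(reward: float, delay_s: float, satiation: float,
              a: float = 0.1, k: float = 0.5, c: float = 0.4,
              p_max: float = 0.95) -> float:
    """Correct-trial probability: linear in reward, hyperbolically discounted
    by delay, and reduced multiplicatively as satiation grows."""
    value = (a * reward) / (1.0 + k * delay_s) * (1.0 - c * satiation)
    return min(p_max, max(0.0, value))

print(p_correct(5.0, 2.0, 0.5))  # (0.5 / 2.0) * 0.8 = 0.2
```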

  5. Measuring and Modeling the Interaction Among Reward Size, Delay to Reward, and Satiation Level on Motivation in Monkeys

    PubMed Central

    Minamimoto, Takafumi; La Camera, Giancarlo; Richmond, Barry J.

    2009-01-01

    Motivation is usually inferred from the likelihood or the intensity with which behavior is carried out. It is sensitive to external factors (e.g., the identity, amount, and timing of a rewarding outcome) and internal factors (e.g., hunger or thirst). We trained macaque monkeys to perform a nonchoice instrumental task (a sequential red-green color discrimination) while manipulating two external factors: reward size and delay-to-reward. We also inferred the state of one internal factor, level of satiation, by monitoring the accumulated reward. A visual cue indicated the forthcoming reward size and delay-to-reward in each trial. The fraction of trials completed correctly by the monkeys increased linearly with reward size and was hyperbolically discounted by delay-to-reward duration, relations that are similar to those found in free operant and choice tasks. The fraction of correct trials also decreased progressively as a function of the satiation level. Similar (albeit noisier) relations were obtained for reaction times. The combined effect of reward size, delay-to-reward, and satiation level on the proportion of correct trials is well described as a multiplication of the effects of the single factors when each factor is examined alone. These results provide a quantitative account of the interaction of external and internal factors on instrumental behavior, and allow us to extend the concept of subjective value of a rewarding outcome, usually confined to external factors, to account also for slow changes in the internal drive of the subject. PMID:18987119

  6. Aspherical-atom modeling of coordination compounds by single-crystal X-ray diffraction allows the correct metal atom to be identified.

    PubMed

    Dittrich, Birger; Wandtke, Claudia M; Meents, Alke; Pröpper, Kevin; Mondal, Kartik Chandra; Samuel, Prinson P; Amin Sk, Nurul; Singh, Amit Pratap; Roesky, Herbert W; Sidhu, Navdeep

    2015-02-02

    Single-crystal X-ray diffraction (XRD) is often considered the gold standard in analytical chemistry, as it allows element identification as well as determination of atom connectivity and the solid-state structure of completely unknown samples. Element assignment is based on the number of electrons of an atom, so that a distinction of neighboring heavier elements in the periodic table by XRD is often difficult. A computationally efficient procedure for aspherical-atom least-squares refinement of conventional diffraction data of organometallic compounds is proposed. The iterative procedure is conceptually similar to Hirshfeld-atom refinement (Acta Crystallogr. Sect. A 2008, 64, 383-393; IUCrJ 2014, 1, 61-79), but it relies on tabulated invariom scattering factors (Acta Crystallogr. Sect. B 2013, 69, 91-104) and the Hansen/Coppens multipole model; disordered structures can be handled as well. Five linear-coordinate 3d metal complexes, for which the wrong element is found if standard independent-atom model scattering factors are relied upon, are studied, and it is shown that only aspherical-atom scattering factors allow a reliable assignment. The influence of anomalous dispersion in identifying the correct element is investigated and discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Chapter 37: Population Trends of the Marbled Murrelet Projected From Demographic Analyses

    Treesearch

    Steven B. Beissinger

    1995-01-01

    A demographic model of the Marbled Murrelet is developed to explore likely population trends and factors influencing them. The model was structured to use field data on juvenile ratios, collected near the end of the breeding season and corrected for date of census, to estimate fecundity. Survivorship was estimated for the murrelet based on comparative analyses of...

  8. No association between hyponatremia and rhabdomyolysis in rats.

    PubMed

    Peled, Michael; Dolkart, Oleg; Finn, Talya; Amar, Eyal; Zeltser, David

    2014-10-01

    Rhabdomyolysis is an uncommon complication of hyponatremia, reported previously only in case reports and small retrospective studies, and its underlying mechanism is controversial. Some studies support the hypothesis that the rapid correction of hyponatremia is responsible for rhabdomyolysis, whereas others emphasize the severity of the hyponatremia as a predisposing factor for rhabdomyolysis. To test the association between hyponatremia and rhabdomyolysis and to demonstrate a causal association. Hyponatremia was induced by administration of water and desmopressin acetate in rats during 3 days, followed by its rapid correction, using animal models established for the evaluation of central pontine myelinolysis. The plasma creatine phosphokinase levels, a marker for rhabdomyolysis, were monitored, and hematoxylin and eosin sections of the quadriceps and gastrocnemius muscles were evaluated for signs of rhabdomyolysis. The induction of hyponatremia and its correction were accompanied by the previously reported neurological sequelae, including signs of central pontine myelinolysis. However, no increase in plasma creatine phosphokinase levels was found, and histopathological examination of the quadriceps and gastrocnemius muscles revealed no sign of rhabdomyolysis. The present study, which is the first to test the association between hyponatremia and rhabdomyolysis in an animal model, does not support any causal association between hyponatremia and rhabdomyolysis. Thus, other factors might be necessary for an association between hyponatremia and rhabdomyolysis, such as genetic factors or convulsions that are known to be associated with both hyponatremia and rhabdomyolysis. Further research in this important physiologic and clinical question is needed. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. An Alternative Interpretation of the Relationship between the Inferred Open Solar Flux and the Interplanetary Magnetic Field

    NASA Technical Reports Server (NTRS)

    Riley, Pete

    2007-01-01

    Photospheric observations at the Wilcox Solar Observatory (WSO) represent an uninterrupted data set of 32 years and are therefore unique for modeling variations in the magnetic structure of the corona and inner heliosphere over three solar cycles. For many years, modelers have applied a latitudinal correction factor to these data, believing that it provided a better estimate of the line-of-sight magnetic field. Its application was defended by arguing that the computed open flux matched observations of the interplanetary magnetic field (IMF) significantly better than it did with the original WSO correction factor. However, no physically based argument could be made for its use. In this Letter we explore the implications of using the constant correction factor on the value and variation of the computed open solar flux and its relationship to the measured IMF. We find that it does not match the measured IMF at 1 AU except at and surrounding solar minimum. However, we argue that interplanetary coronal mass ejections (ICMEs) may provide sufficient additional magnetic flux that a remarkably good match is found between the sum of the computed open flux and inferred ICME flux and the measured flux at 1 AU. If further substantiated, the implications of this interpretation may be significant, including a better understanding of the structure and strength of the coronal field and in providing constraints for theories of field line transport in the corona, the modulation of galactic cosmic rays, and even possibly terrestrial climate effects.

  10. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.
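
    The paper's factor F is specific to a square probe array on a rectangular parallelepiped sample; as a minimal sketch of how such a geometric correction factor enters the measurement, the textbook collinear four-probe formulas (an illustrative assumption here, not the paper's geometry) can be written as:

```python
import math

def resistivity(voltage, current, probe_spacing_m, correction_factor):
    """Four-probe resistivity: rho = 2*pi*s * (V/I) * F, where F corrects
    for finite sample geometry (F -> 1 for a semi-infinite sample)."""
    return 2.0 * math.pi * probe_spacing_m * (voltage / current) * correction_factor

# For a collinear array on an infinite thin sheet, the sheet resistance is
# R_s = (pi / ln 2) * (V / I) ~= 4.532 * V / I.
F_INFINITE_SHEET = math.pi / math.log(2)
rs = F_INFINITE_SHEET * 2.0e-3 / 1.0e-3  # 2.0 mV across inner probes, 1 mA forced
```

    Finite sample dimensions and probe placement change F, which is exactly what a correction factor like the paper's F accounts for.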

  11. Sediment size fractionation and focusing in the equatorial Pacific: Effect on 230Th normalization and paleoflux measurements

    NASA Astrophysics Data System (ADS)

    Lyle, Mitchell; Marcantonio, Franco; Moore, Willard S.; Murray, Richard W.; Huh, Chih-An; Finney, Bruce P.; Murray, David W.; Mix, Alan C.

    2014-07-01

    We use flux, dissolution, and excess 230Th data from the Joint Global Ocean Flux Study and Manganese Nodule Project equatorial Pacific study Site C to assess the extent of sediment focusing in the equatorial Pacific. Measured mass accumulation rates (MAR) from sediment cores were compared to reconstructed MAR by multiplying the particulate rain caught in sediment traps by the 230Th focusing factor and subtracting measured dissolution. CaCO3 MAR is severely overestimated when the 230Th focusing factor correction is large but is estimated correctly when the focusing factor is small. In contrast, Al fluxes in the sediment fine fraction are well matched when the focusing correction is used. Since CaCO3 is primarily a coarse sediment component, we propose that there is significant sorting of fine and coarse sediments during lateral sediment transport by weak currents. Because CaCO3 does not move with 230Th, normalization typically overcorrects the CaCO3 MAR; and because CaCO3 is 80% of the total sediment, 230Th normalization overestimates lateral sediment flux. Fluxes of 230Th in particulate rain caught in sediment traps agree with the water column production-sorption model, except within 500 m of the bottom. Near the bottom, 230Th flux measurements are as much as 3 times higher than model predictions. There is also evidence for lateral near-bottom 230Th transport in the bottom nepheloid layer since 230Th fluxes caught by near-bottom sediment traps are higher than predicted by resuspension of surface sediments alone. Resuspension and nepheloid layer transport under weak currents need to be better understood in order to use 230Th within a quantitative model of lateral sediment transport.

  12. Global analysis of approaches for deriving total water storage changes from GRACE satellites and implications for groundwater storage change estimation

    NASA Astrophysics Data System (ADS)

    Long, D.; Scanlon, B. R.; Longuevergne, L.; Chen, X.

    2015-12-01

    Increasing interest in use of GRACE satellites and a variety of new products to monitor changes in total water storage (TWS) underscores the need to assess the reliability of output from different products. The objective of this study was to assess skills and uncertainties of different approaches for processing GRACE data to restore signal losses caused by spatial filtering based on analysis of 1°×1° grid scale data and basin scale data in 60 river basins globally. Results indicate that scaling factors from six land surface models (LSMs), including four models from GLDAS-1 (Noah 2.7, Mosaic, VIC, and CLM 2.0), CLM 4.0, and WGHM, are similar over most humid, sub-humid, and high-latitude regions but can differ by up to 100% over arid and semi-arid basins and areas with intensive irrigation. Large differences in TWS anomalies from three processing approaches (scaling factor, additive, and multiplicative corrections) were found in arid and semi-arid regions, areas with intensive irrigation, and relatively small basins (e.g., ≤ 200,000 km2). Furthermore, TWS anomaly products from gridded data with CLM4.0 scaling factors and the additive correction approach more closely agree with WGHM output than the multiplicative correction approach. Estimation of groundwater storage changes using GRACE satellites requires caution in selecting an appropriate approach for restoring TWS changes. A priori ground-based data used in forward modeling can provide a powerful tool for explaining the distribution of signal gains or losses caused by low-pass filtering in specific regions of interest and should be very useful for more reliable estimation of groundwater storage changes using GRACE satellites.
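
    The scaling-factor approach discussed above can be sketched as a least-squares regression of unfiltered model TWS on its filtered counterpart. The sketch below is a simplified stand-in (a 1-D Gaussian convolution plays the role of GRACE's spherical-harmonic truncation and smoothing, and the time series is synthetic); only the definition of the scaling factor as the k minimizing ||model − k·filtered_model||² follows the approach described here.

```python
import numpy as np

def gaussian_filter_1d(x, sigma):
    """Crude stand-in for GRACE spatial filtering: convolve with a Gaussian."""
    n = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-0.5 * (n / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(x, kernel, mode="same")

def scaling_factor(model_tws, model_tws_filtered):
    """Least-squares k minimizing ||model - k * filtered_model||^2."""
    return np.dot(model_tws_filtered, model_tws) / np.dot(
        model_tws_filtered, model_tws_filtered)

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 6 * np.pi, 500)) + 0.1 * rng.standard_normal(500)
filtered = gaussian_filter_1d(truth, sigma=10)   # filtering damps the signal
k = scaling_factor(truth, filtered)              # k > 1 restores the loss
restored = k * filtered
```

    Because k is derived from a land surface model, different models yield different k over regions where they disagree, which is the source of the up-to-100% differences reported above.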

  13. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2009-02-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.

  14. Calibration of 4π NaI(Tl) detectors with coincidence summing correction using new numerical procedure and ANGLE4 software

    NASA Astrophysics Data System (ADS)

    Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.

    2017-03-01

    The 4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section and an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency and a coincidence summing effect much more significant than for a single cylindrical or coaxial detector, especially in very low activity measurements. In the present work, the detection effective solid angle, together with the full-energy peak and total efficiencies of well-type detectors, was calculated by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors with these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results inconsistent as a whole. The efficiency calibration process and the coincidence summing corrections therefore appear jointly. The full-energy peak and total efficiencies from the two methods agree within a discrepancy of 10%. The discrepancy between the simulation, ANGLE4, and the measured full-energy peak efficiencies after correction for the coincidence summing effect was, on average, within 14%. Therefore, this technique can be readily applied in establishing the efficiency calibration curves of well-type detectors.

  15. Can small field diode correction factors be applied universally?

    PubMed

    Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R

    2014-09-01

    Diode detectors are commonly used in dosimetry but have been reported to over-respond in small fields, and correction factors for this over-response have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be applied universally over a range of irradiation conditions, including beams of different qualities. A mathematical relation for diode over-response as a function of field size was developed using previously published experimental data in which diodes were compared to an air core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found not to depend strongly on the type of linac, the method of collimation, or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.
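
    The workflow described above, fitting an over-response curve against field size and dividing diode readings by it, can be sketched as follows. The exponential functional form and the parameters are illustrative assumptions, not the fit reported in the paper:

```python
import math

def diode_over_response(field_width_mm, a=0.04, s0=8.0):
    """Hypothetical over-response curve: unity at large fields, rising as
    the field shrinks. The form and the parameters a, s0 are illustrative
    assumptions, not the published relation."""
    return 1.0 + a * math.exp(-field_width_mm / s0)

def corrected_dose(diode_reading, field_width_mm):
    """Divide the diode reading by its over-response to estimate true dose."""
    return diode_reading / diode_over_response(field_width_mm)
```

    A relation of this kind depends only on field size, which is consistent with the finding that the correction transports across linacs, collimation methods, and depths.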

  16. New insights into ocean tide loading corrections on tidal gravity data in Canary Islands

    NASA Astrophysics Data System (ADS)

    Arnoso, J.; Benavent, M.; Bos, M. S.; Montesinos, F. G.

    2009-04-01

    The Canary Islands are an interesting area for investigating ocean tide loading effects, owing to the complex coastline of the islands and the varying bathymetry. We assess here the quality of five recent global oceanic tidal models, GOT00.2, GOT4.7, FES2004, TPXO.7.1 and AG2006, by comparing their predicted ocean tide loading values with results from tidal gravity observations made on three islands, Lanzarote, Tenerife and El Hierro, for the four harmonic constituents O1, K1, M2 and S2. In order to improve the accuracy of the loading corrections on the gravity tide measurements, we have used the high-resolution regional oceanic model CIAM2 to supplement the global models considered here. This regional model was obtained by assimilating TOPEX/Poseidon altimetry at crossover and along-track points, together with tide gauge observations, into a hydrodynamic model. The model has a 5′×5′ resolution and covers the area between 26.5°N to 30.0°N and 19.0°W to 12.5°W. The gravity tide observing sites have been occupied by three different LaCoste&Romberg (LCR) spring gravimeters during different periods of observation. We consider here the most recent gravity tide observations, made with the LCR Graviton-EG1194 on El Hierro Island over a period of 6 months during 2008. For the Tenerife and Lanzarote sites we used observation periods of 6 months and 8 years with the LCR-G665 and LCR-G434 gravimeters, respectively. The last two sites were revisited in order to improve the previous tidal analysis results. The gravity ocean tide loading corrections based on the five global ocean tide models, supplemented with the regional model CIAM2, allowed us to revise the normalization factors (scale factor and phase lag) of both gravimeters. We also investigated the discrepancies of the corrected gravimetric factors with respect to the DDW elastic and inelastic non-hydrostatic body tide models (Dehant et al., 1999). The lowest values are found for the inelastic model in the case of the M2 and O1 waves at all three sites. However, the scatter between oceanic models seen in the final residual vectors does not clearly indicate whether the tidal observations are closer to the elastic or the inelastic body tide model. Finally, after computing misfits between the gravity tide observations and the ocean tide loading calculations, the level of agreement between the five global oceanic models is below 0.2 μGal (1 μGal = 10⁻⁸ m s⁻²), except for the solar harmonic K1, which reaches a large value reflecting thermal instability at the three sites, because the period of K1 is very close to that of S1. None of the five global models gives results that are clearly better than the others.

  17. Wellness and illness self-management skills in community corrections.

    PubMed

    Kelly, Patricia J; Ramaswamy, Megha; Chen, Hsiang-Feng; Denny, Donald

    2015-02-01

    Community corrections provide a readjustment venue for re-entry between incarceration and home for inmates in the US corrections system. Our goal was to determine how self-management skills, an important predictor of re-entry success, varied by demographic and risk factors. In this cross-sectional study, we analyzed responses of 675 clients from 57 community corrections programs run by the regional division of the Federal Bureau of Prisons. A self-administered survey collected data on self-management skills, demographics, and risk factors; significant associations were examined in four regression models, one for the overall self-management score and one for each of three self-management subscales: coping skills, goals, and drug use. Over one-quarter (27.2%; n = 146) of participants had a mental health history. White race, no mental health history, and high school education were associated with better overall self-management scores; mental health history and drug use in the past year were associated with lower coping scores; female gender and high school education were associated with better self-management goals; and female gender was associated with better self-management drug use scores. Self-management programs may need to be individualized for different groups of clients. Lower scores for those with less education suggest an area for targeted, nurse-led interventions.

  18. Baryon octet electromagnetic form factors in a confining NJL model

    NASA Astrophysics Data System (ADS)

    Carrillo-Serrano, Manuel E.; Bentz, Wolfgang; Cloët, Ian C.; Thomas, Anthony W.

    2016-08-01

    Electromagnetic form factors of the baryon octet are studied using a Nambu-Jona-Lasinio model which utilizes the proper-time regularization scheme to simulate aspects of colour confinement. In addition, the model also incorporates corrections to the dressed quarks from vector meson correlations in the t-channel and the pion cloud. Comparison with recent chiral extrapolations of lattice QCD results shows a remarkable level of consistency. For the charge radii we find the surprising result that r_E(p) < r_E(Σ+) and |r_E(n)| < |r_E(Ξ0)|, whereas the magnetic radii have a pattern largely consistent with a naive expectation based on the dressed quark masses.

  19. Violators of a child passenger safety law.

    PubMed

    Agran, Phyllis F; Anderson, Craig L; Winn, Diane G

    2004-07-01

    Nonuse of child car safety seats (CSSs) remains significant; in 2000, 47% of occupant fatalities among children <5 years of age involved unrestrained children. Nonusers and part-time users of CSSs represent small proportions of the US population that have not responded to intervention efforts. Our study examined the factors contributing to nonuse or part-time use of CSSs and the effects of exposure to a class for violators of the California Child Passenger Safety (CPS) law. Focus groups (in English and Spanish) were conducted with individuals cited for violation of the law (N = 24). A thematic analysis of notes made by an observer, supplemented by audiotapes of the sessions, was conducted. In addition, a study of the effects of exposure to a violator class on knowledge and correct CSS use was conducted among violators. Certified CPS technicians conducted the classes and interviews. Subjects were parents cited as the driver with a child of 20 to 40 pounds, between 12 and 47 months of age. One hundred subjects recruited from the class were compared with 50 subjects who did not attend a class. Follow-up home interviews, with inspection of CSS use, were conducted 3 months after payment of the fine and completion of all court requirements. Fisher's exact test was used for 2 × 2 tables, because some of the tables had small cell sizes. The Mann-Whitney rank sum test was used for child restraint use, knowledge, and correct use scales, because some of these variables were not normally distributed. Linear and logistic regression models were used to examine the effects of several variables on these parameters. Factors influencing CSS nonuse were 1) lifestyle factors, 2) transportation and trip circumstances, 3) nonparent or nondriver issues, 4) parenting style, 5) child's behavior, and 6) perceived risks of nonuse. Violator subjects were mostly Hispanic and female, with incomes of less than 30,000 dollars per year.
Those exposed to the class (citation and education group) scored 1 point higher on a knowledge test and had 1 more item correct on a CSS use instrument than did the group not exposed to the class (citation only group). In the logistic model, the citation and education group scored higher on the 2 items that were corrected by the instructor during the class. Our focus group study of CPS law violators revealed that multiple complex factors influence consistent use of a CSS. The interplay of the particular vehicle, the trip circumstances, and family/parent/child factors affected the use of a CSS at the time of parent citation. Addressing transportation issues and parenting skills in CPS programs is necessary. Among parents who had been ticketed for not restraining their children, exposure to a violator class demonstrated some benefit, compared with a fine alone. Correct CSS use improved most on items corrected by the instructor. Violator classes that include "hands-on" training show promise for improving rates of correct use of CSSs.

  20. How Cultural Assumptions May Affect Teaching, Learning, and Communication in the Nation's Prisons.

    ERIC Educational Resources Information Center

    Franklin, Godfrey; Platt, John S.

    1994-01-01

    A Multicultural Communications model includes factors that influence human interactions and communications among correctional staff and inmates. To be competent cross-cultural, intercultural, or multicultural communicators, educators must be aware of their knowledge of out-group members. (Author/JOW)

  1. Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI Attenuation Correction Using Deep Convolutional Neural Networks with Multiparametric MRI.

    PubMed

    Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z

    2018-05-01

    Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUVmax was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
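
    The error metric and reduction factors quoted above follow directly from per-lesion percent errors; only the arithmetic on the abstract's RMSE values is reproduced here (the example error list is illustrative):

```python
import math

def rmse(errors_percent):
    """Root-mean-squared error over per-lesion SUV percent errors."""
    return math.sqrt(sum(e * e for e in errors_percent) / len(errors_percent))

# Reduction factor for bone lesions, using the RMSE values quoted in the
# abstract (Dixon: 10.24%, ZeDD: 2.68%) -- about a factor of 4.
reduction_bone = 10.24 / 2.68
```
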

  2. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    Profile count method for estimating cell number in sectioned tissue applies a correction factor for double count (resulting from transection during sectioning) of count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units that are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The Neurotology and House Histological Temporal Bone Laboratory at University of California at Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that double-count correction factors for nucleus count unit (0.91) and nucleolus count unit (0.92) matched the published factors. We discovered that nuclei and, therefore, spiral ganglion cells were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
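
    The double-count correction factor formula referred to above is commonly the Abercrombie-style factor T/(T + d) for section thickness T and count-unit diameter d. As a sketch, an assumed nucleus diameter of about 2 µm in 20-µm sections reproduces the 0.91 figure, and folding the reported 6.3% undercount into the empirical nucleolus factor reproduces the recommended 0.98; the diameter value is an illustrative assumption, not a measurement from the paper:

```python
def abercrombie_factor(section_thickness_um, count_unit_diameter_um):
    """Double-count correction: a unit of diameter d is visible in sections
    spanning T + d of tissue, so profile counts overcount true cells by
    (T + d)/T; multiplying by T/(T + d) compensates."""
    t, d = section_thickness_um, count_unit_diameter_um
    return t / (t + d)

def combined_nucleolus_factor(double_count_factor, undercount_fraction):
    """Fold an undercount (cells whose nucleolus misses every section) into
    the double-count factor: divide by the fraction actually counted."""
    return double_count_factor / (1.0 - undercount_fraction)

# Assumed ~2-um nucleus in 20-um sections reproduces the 0.91 factor.
f_nucleus = abercrombie_factor(20.0, 2.0)             # ~0.909
# Empirical nucleolus factor (0.92) plus the 6.3% undercount gives ~0.98.
f_nucleolus = combined_nucleolus_factor(0.92, 0.063)  # ~0.982
```
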

  3. Characterization of an in vivo diode dosimetry system for clinical use

    PubMed Central

    Huang, Kai; Bice, William S.; Hidalgo‐Salvatierra, Oscar

    2003-01-01

    An in vivo dosimetry system that uses p‐type semiconductor diodes with buildup caps was characterized for clinical use on accelerators ranging in energy from 4 to 18 MV. The dose per pulse dependence was investigated. This was done by altering the source‐surface distance, field size, and wedge for photons. The off‐axis correction and effect of changing repetition rate were also investigated. A model was developed to fit the measured two‐dimensional diode correction factors. PACS number(s): 87.66.–a, 87.52.–g PMID:12777148

  4. Bias correction for rainrate retrievals from satellite passive microwave sensors

    NASA Technical Reports Server (NTRS)

    Short, David A.

    1990-01-01

    Rainrates retrieved from past and present satellite-borne microwave sensors are affected by a fundamental remote sensing problem. Sensor fields-of-view are typically large enough to encompass substantial rainrate variability, whereas the retrieval algorithms, based on radiative transfer calculations, show a non-linear relationship between rainrate and microwave brightness temperature. Retrieved rainrates are systematically too low. A statistical model of the bias problem shows that bias correction factors depend on the probability distribution of instantaneous rainrate and on the average thickness of the rain layer.
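
    The beam-filling bias described above is a Jensen's-inequality effect: averaging brightness temperature over a variable field of view and then inverting a concave rainrate-temperature relation underestimates the mean rainrate. The saturating relation below is schematic (an assumption for illustration, not a radiative transfer result):

```python
import math
import random

def brightness_temp(rainrate):
    """Schematic saturating rainrate -> brightness temperature relation."""
    return 280.0 - 120.0 * math.exp(-0.3 * rainrate)

def retrieve_rainrate(tb):
    """Invert the schematic relation, as a retrieval algorithm would."""
    return -math.log((280.0 - tb) / 120.0) / 0.3

random.seed(1)
# Sub-pixel rainrates varying across one large field of view.
rain = [random.lognormvariate(0.5, 1.0) for _ in range(10000)]
true_mean = sum(rain) / len(rain)

# The sensor sees the FOV-averaged brightness temperature, from which the
# nonlinear algorithm retrieves a single rainrate.
tb_mean = sum(brightness_temp(r) for r in rain) / len(rain)
retrieved = retrieve_rainrate(tb_mean)

bias_correction = true_mean / retrieved  # > 1: the retrieval is too low
```

    As the abstract notes, the size of this correction factor depends on the sub-pixel rainrate distribution, which the lognormal draw above stands in for.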

  5. Theoretical interpretation of the Venus 1.05-micron CO2 band and the Venus 0.8189-micron H2O line.

    NASA Technical Reports Server (NTRS)

    Regas, J. L.; Giver, L. P.; Boese, R. W.; Miller, J. H.

    1972-01-01

    The synthetic-spectrum technique was used in the analysis. The synthetic spectra were constructed with a model which takes into account both isotropic scattering and the inhomogeneity in the Venus atmosphere. The Potter-Hansen correction factor was used to correct for anisotropic scattering. The synthetic spectra obtained are, therefore, the first which contain all the essential physics of line formation. The results confirm Potter's conclusion that the Venus cloud tops resemble terrestrial cirrus or stratus clouds in their scattering properties.

  6. Individual risk of cutaneous melanoma in New Zealand: developing a clinical prediction aid.

    PubMed

    Sneyd, Mary Jane; Cameron, Claire; Cox, Brian

    2014-05-22

    New Zealand and Australia have the highest melanoma incidence rates worldwide. In New Zealand, both the incidence and thickness have been increasing. Clinical decisions require accurate risk prediction but a simple list of genetic, phenotypic and behavioural risk factors is inadequate to estimate individual risk as the risk factors for melanoma have complex interactions. In order to offer tailored clinical management strategies, we developed a New Zealand prediction model to estimate individual 5-year absolute risk of melanoma. A population-based case-control study (368 cases and 270 controls) of melanoma risk factors provided estimates of relative risks for fair-skinned New Zealanders aged 20-79 years. Model selection techniques and multivariate logistic regression were used to determine the important predictors. The relative risks for predictors were combined with baseline melanoma incidence rates and non-melanoma mortality rates to calculate individual probabilities of developing melanoma within 5 years. For women, the best model included skin colour, number of moles ≥5 mm on the right arm, having a 1st degree relative with large moles, and a personal history of non-melanoma skin cancer (NMSC). The model correctly classified 68% of participants; the C-statistic was 0.74. For men, the best model included age, place of occupation up to age 18 years, number of moles ≥5 mm on the right arm, birthplace, and a history of NMSC. The model correctly classified 67% of cases; the C-statistic was 0.71. We have developed the first New Zealand risk prediction model that calculates individual absolute 5-year risk of melanoma. This model will aid physicians to identify individuals at high risk, allowing them to individually target surveillance and other management strategies, and thereby reduce the high melanoma burden in New Zealand.

  7. An energy-based equilibrium contact angle boundary condition on jagged surfaces for phase-field methods.

    PubMed

    Frank, Florian; Liu, Chen; Scanziani, Alessio; Alpak, Faruk O; Riviere, Beatrice

    2018-08-01

    We consider an energy-based boundary condition to impose an equilibrium wetting angle for the Cahn-Hilliard-Navier-Stokes phase-field model on voxel-set-type computational domains. These domains typically stem from μCT (micro computed tomography) imaging of porous rock and approximate a (on μm scale) smooth domain with a certain resolution. Planar surfaces that are perpendicular to the main axes are naturally approximated by a layer of voxels. However, planar surfaces in any other directions and curved surfaces yield a jagged/topologically rough surface approximation by voxels. For the standard Cahn-Hilliard formulation, where the contact angle between the diffuse interface and the domain boundary (fluid-solid interface/wall) is 90°, jagged surfaces have no impact on the contact angle. However, a prescribed contact angle smaller or larger than 90° on jagged voxel surfaces is amplified. As a remedy, we propose the introduction of surface energy correction factors for each fluid-solid voxel face that counterbalance the difference of the voxel-set surface area with the underlying smooth one. The discretization of the model equations is performed with the discontinuous Galerkin method. However, the presented semi-analytical approach of correcting the surface energy is equally applicable to other direct numerical methods such as finite elements, finite volumes, or finite differences, since the correction factors appear in the strong formulation of the model. Copyright © 2018 Elsevier Inc. All rights reserved.
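
    The geometric mismatch that the proposed surface energy correction factors counterbalance can be seen in a simplified 2-D sketch (an illustrative assumption, not the paper's per-voxel-face computation from μCT data): a planar interface tilted by θ, approximated by an axis-aligned voxel staircase, exposes |cos θ| + |sin θ| units of voxel-face area per unit of smooth area.

```python
import math

def voxel_to_smooth_area_ratio(theta_rad):
    """2-D staircase approximation of a plane tilted by theta: the exposed
    axis-aligned face area is (|cos t| + |sin t|) per unit of smooth area."""
    return abs(math.cos(theta_rad)) + abs(math.sin(theta_rad))

def surface_energy_correction(theta_rad):
    """Factor rescaling per-face wetting energy so the voxelized boundary
    carries the same total surface energy as the underlying smooth one."""
    return 1.0 / voxel_to_smooth_area_ratio(theta_rad)
```

    At θ = 0 the factor is 1 (axis-perpendicular planes need no correction, as the abstract notes), while at 45° the staircase area is √2 times too large and the factor drops to about 0.71.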

  8. THE CALCULATION OF BURNABLE POISON CORRECTION FACTORS FOR PWR FRESH FUEL ACTIVE COLLAR MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea; Swinhoe, Martyn T.

    2012-06-19

    Verification of commercial low enriched uranium light water reactor fuel takes place at the fuel fabrication facility as part of the overall international nuclear safeguards solution to the civilian use of nuclear technology. The fissile mass per unit length is determined nondestructively by active neutron coincidence counting using a neutron collar. A collar comprises four slabs of high density polyethylene that surround the assembly. Three of the slabs contain 3He-filled proportional counters to detect time-correlated fission neutrons induced by an AmLi source placed in the fourth slab. Historically, the response of a particular collar design to a particular fuel assembly type has been established by careful cross-calibration to experimental absolute calibrations. Traceability exists to sources and materials held at Los Alamos National Laboratory for over 35 years. This simple yet powerful approach has ensured consistency of application. Since the 1980s there has been a steady improvement in fuel performance. The trend has been to higher burnup, which requires the use of both higher initial enrichment and greater concentrations of burnable poisons. The original analytical relationships for correcting for varying fuel composition are consequently being challenged, because the experimental basis for them made use of fuels of lower enrichment and lower poison content than is in use today and is envisioned for use in the near term. Thus a reassessment of the correction factors is needed. Experimental reassessment is expensive and time consuming given the great variation between fuel assemblies in circulation. Fortunately, current modeling methods enable relative response functions to be calculated with high accuracy; hence modeling provides a more convenient and cost-effective means to derive correction factors that are fit for purpose with confidence. In this work we use the Monte Carlo code MCNPX with neutron coincidence tallies to calculate the influence of Gd2O3 burnable poison on the measurement of fresh pressurized water reactor fuel. To empirically determine the response function over the range of historical and future use, we have considered enrichments up to 5 wt% 235U/total U and Gd weight fractions of up to 10% Gd/UO2. Parameterized correction factors are presented.

  9. Priors Engaged in Long-Latency Responses to Mechanical Perturbations Suggest a Rapid Update in State Estimation

    PubMed Central

    Crevecoeur, Frédéric; Scott, Stephen H.

    2013-01-01

In every motor task, our brain must handle external forces acting on the body. For example, riding a bike on cobblestones or skating on an irregular surface requires us to respond appropriately to external perturbations. In these situations, motor predictions cannot help anticipate the motion of the body induced by external factors, and direct use of delayed sensory feedback will tend to generate instability. Here, we show that to solve this problem the motor system uses a rapid sensory prediction to correct the estimated state of the limb. We used a postural task with mechanical perturbations to address whether sensory predictions were engaged in upper-limb corrective movements. Subjects altered their initial motor response in ∼60 ms, depending on the expected perturbation profile, suggesting the use of an internal model, or prior, in this corrective process. Further, we found trial-to-trial changes in corrective responses indicating a rapid update of these perturbation priors. We used a computational model based on Kalman filtering to show that the response modulation was compatible with a rapid correction of the estimated state engaged in the feedback response. Such a process may allow us to handle external disturbances encountered in virtually every physical activity, which is likely an important feature of skilled motor behaviour. PMID:23966846
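The Kalman-filtering account described in the abstract amounts to blending a forward prediction of the limb state with (delayed) sensory evidence in proportion to their uncertainties. A minimal scalar sketch (all numbers are illustrative, not the paper's fitted parameters):

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman update: correct the predicted limb state x_pred
    (with variance p_pred) using a sensory observation z (noise variance r)."""
    k = p_pred / (p_pred + r)       # Kalman gain: how much to trust the observation
    x = x_pred + k * (z - x_pred)   # corrected state estimate
    p = (1.0 - k) * p_pred          # reduced posterior uncertainty
    return x, p, k

# Equally uncertain prediction and observation -> the estimate lands halfway.
x, p, k = kalman_update(0.0, 1.0, 1.0, 1.0)
```

Lowering r (i.e., a more reliable rapid sensory prediction) pushes the estimate toward the observed displacement, which is the kind of update the authors infer from the ∼60 ms response modulation.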

  10. When shape matters: Correcting the ICFs to derive the chemical abundances of bipolar and elliptical PNe

    NASA Astrophysics Data System (ADS)

    Gonçalves, Denise R.; Wesson, Roger; Morisset, Cristophe; Barlow, Michael; Ercolano, Barbara

    2012-08-01

The extraction of chemical abundances of ionised nebulae from a limited spectral range is usually hampered by the lack of emission lines corresponding to certain ionic stages. So far, the missing emission lines have been accounted for by ionisation correction factors (ICFs), constructed under simplistic assumptions, such as spherical geometry, using 1-D photoionisation modelling. In this contribution we discuss the results (Gonçalves et al. 2011, in prep.) of our ongoing project to find a new set of ICFs to determine total abundances of N, O, Ne, Ar, and S from optical spectra in the case of non-spherical PNe. These results are based on a grid of 3-D photoionisation models of round, elliptical, and bipolar-shaped PNe, spanning the typical PN luminosities, effective temperatures, and densities. We show that the additional corrections to the widely used Kingsburgh & Barlow (1994) ICFs are always higher for bipolars than for ellipticals. Moreover, for bipolars these additional corrections reach up to 17% for oxygen, 33% for nitrogen, 40% for neon, 28% for argon, and 50% for sulphur. Finally, beyond changing greatly with shape, the corrections also vary greatly with the central star temperature, while the luminosity is a less important parameter.
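As a concrete illustration of how such a correction is applied: the classical Kingsburgh & Barlow (1994) ICF for nitrogen is ICF(N) = O/O+, and this work argues for an extra shape-dependent multiplier on top. The ionic abundances below and the 1.33 bipolar factor (taken loosely from the 33% nitrogen figure above) are illustrative only; the actual corrections depend on shape and central star temperature.

```python
def total_nitrogen(n_plus, o_total, o_plus, shape_factor=1.0):
    """Total N/H from the single observed ionic stage N+/H+, using the
    classical ICF(N) = O/O+ times an additional shape-dependent factor."""
    icf = o_total / o_plus
    return n_plus * icf * shape_factor

n_spherical = total_nitrogen(2.0e-5, 4.0e-4, 2.0e-4)       # classical ICF only
n_bipolar = total_nitrogen(2.0e-5, 4.0e-4, 2.0e-4, 1.33)   # +33% for bipolars
```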

  11. Looking Good versus Doing Good: Which Factors Take Precedence when Children Learn about New Tools?

    ERIC Educational Resources Information Center

    DiYanni, Cara; Nini, Deniela; Rheel, Whitney

    2011-01-01

    We present two experiments exploring whether individuals would be persuaded to imitate the intentional action of an adult model whose actions suggest that the correct way to complete a task is with an inefficient tool. In Experiment 1, children ages 5-10 years and a group of adults watched an adult model reject an efficient tool in favor of one…

  12. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy

    PubMed Central

    GURKA, MATTHEW J; KUPERMINC, MICHELLE N; BUSBY, MARJORIE G; BENNIS, JACEY A; GROSSBERG, RICHARD I; HOULIHAN, CHRISTINE M; STEVENSON, RICHARD D; HENDERSON, RICHARD C

    2010-01-01

    AIM To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). METHOD Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I–V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. RESULTS Slaughter’s equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat −9.6/100 [SD 6.2]; 95% confidence interval [CI] −11.0 to −8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI −1.0 to 1.3) than existing equations. INTERPRETATION A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP. PMID:19811518
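The correction scheme amounts to adding an offset to an existing skinfold equation. The sketch below uses one commonly quoted Slaughter form (for triceps+subscapular sums below roughly 35 mm, coefficients from the general-population literature) and, as a stand-in, a flat offset equal to the 9.6 percentage-point mean underestimate reported above; the published corrections actually vary with sex, race, GMFCS level, size, and pubertal status.

```python
def slaughter_pct_fat(triceps_mm, subscap_mm, male=True):
    """Percentage body fat from triceps + subscapular skinfolds (one commonly
    quoted Slaughter form for skinfold sums below ~35 mm)."""
    s = triceps_mm + subscap_mm
    if male:
        return 1.21 * s - 0.008 * s ** 2 - 1.7
    return 1.33 * s - 0.013 * s ** 2 - 2.5

def cp_corrected_pct_fat(triceps_mm, subscap_mm, male=True, correction=9.6):
    """Hypothetical CP correction: the flat 9.6 default is only the mean
    underestimate reported in the abstract, not the fitted model."""
    return slaughter_pct_fat(triceps_mm, subscap_mm, male) + correction

uncorrected = slaughter_pct_fat(10.0, 10.0)
corrected = cp_corrected_pct_fat(10.0, 10.0)
```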

  13. Limitations of silicon diodes for clinical electron dosimetry.

    PubMed

    Song, Haijun; Ahmad, Munir; Deng, Jun; Chen, Zhe; Yue, Ning J; Nath, Ravinder

    2006-01-01

This work investigates the relevance of several factors affecting the response of silicon diode dosemeters in depth-dose scans of electron beams. These factors are electron energy, instantaneous dose rate, dose per pulse, photon/electron dose ratio and electron scattering angle (directional response). Data from the literature and our own experiments indicate that the impact of these factors may be up to +/-15%. Thus, the different factors would have to cancel out perfectly at all depths in order to produce true depth-dose curves. There are reports of good agreement between depth-doses measured with diodes and ionisation chambers. However, our measurements with a Scanditronix electron field detector (EFD) diode and with a plane-parallel ionisation chamber show discrepancies both in the build-up and in the low-dose regions, with a ratio up to 1.4. Moreover, the absolute sensitivity of two diodes of the same EFD model was found to differ by a factor of 3, and this ratio was not constant but changed with depth by between 5 and 15% in the low-dose regions of some clinical electron beams. Owing to these inhomogeneities among diodes even of the same model, corrections for each factor would have to be diode-specific and beam-specific. All these corrections would have to be determined using plane-parallel chambers, as recommended by AAPM TG-25, which would be unrealistic in clinical practice. Our conclusion is that in general diodes are not reliable in the measurement of depth-dose curves of clinical electron beams.

  14. Structure of amplitude correlations in open chaotic systems

    NASA Astrophysics Data System (ADS)

    Ericson, Torleif E. O.

    2013-02-01

The Verbaarschot-Weidenmüller-Zirnbauer (VWZ) model is believed to correctly represent the correlations of two S-matrix elements for an open quantum chaotic system, but the solution has considerable complexity and is presently only accessed numerically. Here a procedure is developed to deduce its features over the full range of the parameter space in a transparent and simple analytical form, preserving accuracy to a considerable degree. The bulk of the VWZ correlations are described by the Gorin-Seligman expression for the two-amplitude correlations of the Ericson-Gorin-Seligman model. The structure of the remaining correction factors for correlation functions is discussed, with special emphasis on the rôle of the level correlation hole for both inelastic and elastic correlations.

  15. SU-C-304-06: Determination of Intermediate Correction Factors for Three Dosimeters in Small Composite Photon Fields Used in Robotic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christiansen, E; Belec, J; Vandervoort, E

    2015-06-15

Purpose: To calculate using Monte-Carlo the intermediate and total correction factors (CFs) for two microchambers and a plastic scintillator for composite fields delivered by the CyberKnife system. Methods: A linac model was created in BEAMnrc by matching percentage depth dose (PDD) curves and output factors (OFs) measured using an A16 microchamber with Monte Carlo calculations performed in egs-chamber to explicitly model detector response. Intermediate CFs were determined for the A16 and A26 microchambers and the W1 plastic scintillator in fourteen different composite fields inside a solid water phantom. Seven of these fields used a 5 mm diameter collimator; the remaining fields employed a 7.5 mm collimator but were otherwise identical to the first seven. Intermediate CFs are reported relative to the respective CF for a 60 mm collimator (800 mm source to detector distance and 100 mm depth in water). Results: For microchambers in composite fields, the intermediate CFs that account for detector density and volume were the largest contributors to total CFs. The total CFs for the A26 were larger than those for the A16, especially for the 5 mm cone (1.227±0.003 to 1.144±0.004 versus 1.142±0.003 to 1.099±0.004), due to the A26’s larger active volume (0.015 cc) relative to the A16 (0.007 cc), despite the A26 using similar wall and electrode material. The W1 total and intermediate CFs are closer to unity, due to its smaller active volume and near water-equivalent composition, however, 3–4% detector volume corrections are required for 5 mm collimator fields. In fields using the 7.5 mm collimator, the correction is nearly eliminated for the W1 except for a non-isocentric field. Conclusion: Large and variable CFs are required for microchambers in small composite fields primarily due to density and volume effects. Corrections are reduced but not eliminated for a plastic scintillator in the same fields.

  16. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  17. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    None, None

    2016-11-21

Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  18. Generative models for discovering sparse distributed representations.

    PubMed Central

    Hinton, G E; Ghahramani, Z

    1997-01-01

    We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations. PMID:9304685

  19. Tracking of 25-hydroxyvitamin D status during pregnancy: the importance of vitamin D supplementation.

    PubMed

    Moon, Rebecca J; Crozier, Sarah R; Dennison, Elaine M; Davies, Justin H; Robinson, Sian M; Inskip, Hazel M; Godfrey, Keith M; Cooper, Cyrus; Harvey, Nicholas C

    2015-11-01

    The role of maternal 25-hydroxyvitamin D [25(OH)D] in fetal development is uncertain, and findings of observational studies have been inconsistent. Most studies have assessed 25(OH)D only one time during pregnancy, but to our knowledge, the tracking of an individual's 25(OH)D during pregnancy has not been assessed previously. We determined the tracking of serum 25(OH)D from early to late pregnancy and factors that influence this. The Southampton Women's Survey is a prospective mother-offspring birth-cohort study. Lifestyle, diet, and 25(OH)D status were assessed at 11 and 34 wk of gestation. A Fourier transformation was used to model the seasonal variation in 25(OH)D for early and late pregnancy separately, and the difference between the measured and seasonally modeled 25(OH)D was calculated to generate a season-corrected 25(OH)D. Tracking was assessed with the use of the Pearson correlation coefficient, and multivariate linear regression was used to determine factors associated with the change in season-corrected 25(OH)D. A total of 1753 women had 25(OH)D measured in both early and late pregnancy. There was a moderate correlation between season-corrected 25(OH)D measurements at 11 and 34 wk of gestation (r = 0.53, P < 0.0001; n = 1753). Vitamin D supplementation was the strongest predictor of tracking; in comparison with women who never used supplements, the discontinuation of supplementation after 11 wk was associated with a reduction in season-corrected 25(OH)D (β = -7.3 nmol/L; P < 0.001), whereas the commencement (β = 12.6 nmol/L; P < 0.001) or continuation (β = 6.6 nmol/L; P < 0.001) of supplementation was associated with increases in season-corrected 25(OH)D. Higher pregnancy weight gain was associated with a reduction in season-corrected 25(OH)D (β = -0.4 nmol · L(-1) · kg(-1); P = 0.015), whereas greater physical activity (β = 0.4 nmol/L per h/wk; P = 0.011) was associated with increases. 
There is a moderate tracking of 25(OH)D status through pregnancy; factors such as vitamin D supplementation, weight gain, and physical activity are associated with changes in season-corrected 25(OH)D from early to late gestation. These findings have implications for study designs and analyses and approaches to intervention studies and clinical care. © 2015 American Society for Nutrition.
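The season-correction step described above amounts to subtracting a fitted annual harmonic from each measurement. A sketch with a single Fourier term follows; the mean, amplitude, and peak day are placeholders, not the fitted Southampton Women's Survey coefficients.

```python
import math

def seasonal_25ohd(doy, mean=48.0, amp=15.0, peak_doy=200.0):
    """First-harmonic (Fourier) model of 25(OH)D (nmol/L) vs day of year.
    mean, amp, and peak_doy are illustrative placeholders."""
    return mean + amp * math.cos(2.0 * math.pi * (doy - peak_doy) / 365.25)

def season_corrected(measured, doy):
    """Measured minus seasonally modelled 25(OH)D, as in the abstract."""
    return measured - seasonal_25ohd(doy)

# A 63 nmol/L sample drawn at the modelled seasonal peak carries no excess
# over the seasonal expectation of 48 + 15 = 63 nmol/L.
delta_peak = season_corrected(63.0, 200.0)
```

The same raw value drawn in midwinter would carry a large positive season-corrected excess, which is why raw and season-corrected values track differently.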

  20. Is the PTW 60019 microDiamond a suitable candidate for small field reference dosimetry?

    NASA Astrophysics Data System (ADS)

    De Coste, Vanessa; Francescon, Paolo; Marinelli, Marco; Masi, Laura; Paganini, Lucia; Pimpinella, Maria; Prestopino, Giuseppe; Russo, Serenella; Stravato, Antonella; Verona, Claudio; Verona-Rinati, Gianluca

    2017-09-01

A systematic study of the PTW microDiamond (MD) output factors (OF) is reported, aimed at clarifying its response in small fields and investigating its suitability for small field reference dosimetry. Ten MDs were calibrated under 60Co irradiation. OF measurements were performed in 6 MV photon beams from a CyberKnife M6, a Varian DHX, and an Elekta Synergy linac. Two PTW silicon E diodes (Si-D) were used for comparison. The results obtained by the MDs were evaluated in terms of absorbed dose to water determination in reference conditions and OF measurements, and compared to the results reported in the recent literature. To this purpose, the Monte Carlo (MC) beam-quality correction factor, k_Q^MD, was calculated for the MD, and the small field output correction factors, k_{Qclin,Qmsr}^{fclin,fmsr}, were calculated for both the MD and the Si-D by two different research groups. An empirical function was also derived, providing output correction factors within 0.5% of the MC values calculated for all three linacs. A high reproducibility of the dosimetric properties was observed among the ten MDs. The experimental k_Q^MD values agree within 1% with the MC-calculated ones. Output correction factors within +0.7% and −1.4% were obtained down to field sizes as narrow as 5 mm. The resulting MD and Si-D field factors are in agreement within 0.2% in the case of CyberKnife measurements and 1.6% in the other cases. This latter, higher spread of the data was demonstrated to be due to a lower reproducibility of small beam sizes defined by jaws or multi-leaf collimators. The results of the present study demonstrate the reproducibility of the MD response and provide a validation of the MC modelling of this device. In principle, accurate reference dosimetry is thus feasible using the microDiamond dosimeter for field sizes down to 5 mm.

  1. An Uncertainty Structure Matrix for Models and Simulations

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Blattnig, Steve R.; Hemsch, Michael J.; Luckring, James M.; Tripathi, Ram K.

    2008-01-01

    Software that is used for aerospace flight control and to display information to pilots and crew is expected to be correct and credible at all times. This type of software is typically developed under strict management processes, which are intended to reduce defects in the software product. However, modeling and simulation (M&S) software may exhibit varying degrees of correctness and credibility, depending on a large and complex set of factors. These factors include its intended use, the known physics and numerical approximations within the M&S, and the referent data set against which the M&S correctness is compared. The correctness and credibility of an M&S effort is closely correlated to the uncertainty management (UM) practices that are applied to the M&S effort. This paper describes an uncertainty structure matrix for M&S, which provides a set of objective descriptions for the possible states of UM practices within a given M&S effort. The columns in the uncertainty structure matrix contain UM elements or practices that are common across most M&S efforts, and the rows describe the potential levels of achievement in each of the elements. A practitioner can quickly look at the matrix to determine where an M&S effort falls based on a common set of UM practices that are described in absolute terms that can be applied to virtually any M&S effort. The matrix can also be used to plan those steps and resources that would be needed to improve the UM practices for a given M&S effort.

  2. Using Mason number to predict MR damper performance from limited test data

    NASA Astrophysics Data System (ADS)

    Becnel, Andrew C.; Wereley, Norman M.

    2017-05-01

The Mason number can be used to produce a single master curve which relates MR fluid stress versus strain rate behavior across a wide range of shear rates, temperatures, and applied magnetic fields. As applications of MR fluid energy absorbers expand to a variety of industries and operating environments, Mason number analysis offers a path to designing devices with desired performance from a minimal set of preliminary test data. Temperature strongly affects the off-state viscosity of the fluid, as the passive viscous force drops considerably at higher temperatures. Yield stress is not similarly affected, and stays relatively constant with changing temperature. In this study, a small model-scale MR fluid rotary energy absorber is used to measure the temperature correction factor of a commercially available MR fluid from LORD Corporation. This temperature correction factor is identified from shear stress vs. shear rate data collected at four different temperatures. Measurements of the MR fluid yield stress are also obtained and related to a standard empirical formula. From these two MR fluid properties - temperature-dependent viscosity and yield stress - the temperature-corrected Mason number is shown to predict the force vs. velocity performance of a full-scale rotary MR fluid energy absorber. This analysis technique expands the design space of MR devices to high shear rates and allows for comprehensive predictions of overall performance across a wide range of operating conditions from knowledge only of the yield stress vs. applied magnetic field and a temperature-dependent viscosity correction factor.

  3. Molecular Volumes and the Stokes-Einstein Equation

    ERIC Educational Resources Information Center

    Edward, John T.

    1970-01-01

    Examines the limitations of the Stokes-Einstein equation as it applies to small solute molecules. Discusses molecular volume determinations by atomic increments, molecular models, molar volumes of solids and liquids, and molal volumes. Presents an empirical correction factor for the equation which applies to molecular radii as small as 2 angstrom…

  4. A simple enrichment correction factor for improving erosion estimation by rare earth oxide tracers

    USDA-ARS?s Scientific Manuscript database

Spatially distributed soil erosion data are needed to better understand soil erosion processes and to validate distributed erosion models. Rare earth element (REE) oxides were used to generate spatial erosion data. However, a general concern about the accuracy of the technique arose due to selective ...

  5. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models.

    PubMed

    Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S

    2016-01-01

Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined, and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms, namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%.
The RF algorithm could allow early assessment of the risk of SNB, facilitating sound disease management decisions prior to planting of wheat.
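Of the evaluation metrics named above, the Kappa statistic corrects raw classification accuracy for agreement expected by chance. A minimal implementation for categorical severity classes (the example labels are illustrative, not the study's data):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_chance = sum(
        (y_true.count(lab) / n) * (y_pred.count(lab) / n) for lab in labels
    )
    return (p_obs - p_chance) / (1.0 - p_chance)

# 3/4 raw agreement over balanced classes gives kappa = 0.5: half of the
# headline accuracy here is no better than chance.
kappa = cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0])
```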

  6. How to Collect National Institute of Standards and Technology (NIST) Traceable Fluorescence Excitation and Emission Spectra.

    PubMed

    Gilmore, Adam Matthew

    2014-01-01

    Contemporary spectrofluorimeters comprise exciting light sources, excitation and emission monochromators, and detectors that without correction yield data not conforming to an ideal spectral response. The correction of the spectral properties of the exciting and emission light paths first requires calibration of the wavelength and spectral accuracy. The exciting beam path can be corrected up to the sample position using a spectrally corrected reference detection system. The corrected reference response accounts for both the spectral intensity and drift of the exciting light source relative to emission and/or transmission detector responses. The emission detection path must also be corrected for the combined spectral bias of the sample compartment optics, emission monochromator, and detector. There are several crucial issues associated with both excitation and emission correction including the requirement to account for spectral band-pass and resolution, optical band-pass or neutral density filters, and the position and direction of polarizing elements in the light paths. In addition, secondary correction factors are described including (1) subtraction of the solvent's fluorescence background, (2) removal of Rayleigh and Raman scattering lines, as well as (3) correcting for sample concentration-dependent inner-filter effects. The importance of the National Institute of Standards and Technology (NIST) traceable calibration and correction protocols is explained in light of valid intra- and interlaboratory studies and effective spectral qualitative and quantitative analyses including multivariate spectral modeling.
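Of the secondary corrections listed, the concentration-dependent inner-filter effect has a widely used closed form for the standard 1 cm cuvette with centered excitation and emission geometry, where A_ex and A_em are the absorbances at the excitation and emission wavelengths:

```python
def inner_filter_corrected(f_obs, a_ex, a_em):
    """Combined primary + secondary inner-filter correction for a standard
    1 cm cuvette: F_corr = F_obs * 10**((A_ex + A_em) / 2)."""
    return f_obs * 10.0 ** ((a_ex + a_em) / 2.0)

# A sample absorbing 0.1 AU at both wavelengths attenuates the observed
# signal by roughly 21% relative to the corrected value.
f_corr = inner_filter_corrected(100.0, 0.1, 0.1)
```

Note this correction applies only after the instrument-side excitation and emission spectral corrections described above have been performed.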

  7. Flood Identification from Satellite Images Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Chang, L.; Kao, I.; Shih, K.

    2011-12-01

Typhoons and storms hit Taiwan several times every year, causing serious flood disasters. Because the rivers are short and steep, flows are relatively fast, and floods last only a few hours, usually less than one day. Flood identification can provide flood extent information to disaster assistance and recovery centers. Owing to the weather conditions during typhoons, aircraft and traditional multispectral satellites are unsuitable; the most appropriate way to investigate flood extent is Synthetic Aperture Radar (SAR) satellite imagery. In this study, a back-propagation neural network (BPNN) model and a multivariate linear regression (MLR) model are built to identify flood extent from SAR satellite images. The input variables of the BPNN model are the pixel's Radar Cross Section (RCS) value and the mean, standard deviation, minimum, and maximum of RCS values among its adjacent 3×3 pixels. The MLR model uses two images, from the non-flooding and flooding periods, and its inputs are the difference between the RCS values of the two images and the variances among its adjacent 3×3 pixels. The results show that the BPNN model performs much better than the MLR model: the correct percentages are more than 80% and 73% for training and testing data, respectively. Many misidentified areas are very fragmented and unrelated. To raise the correct percentage, morphological image analysis is used to modify the outputs of these identification models. Through morphological operations, most of the small, fragmented, misidentified areas can be correctly reassigned to flooding or non-flooding areas. The final results show that flood identification from satellite images improves considerably, with correct percentages rising above 90%.
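The per-pixel BPNN inputs described above can be sketched as follows (pure Python, ignoring image borders; `neighborhood_features` is a hypothetical helper, not the authors' code):

```python
def neighborhood_features(img, i, j):
    """RCS value of pixel (i, j) plus mean, standard deviation, minimum, and
    maximum over its 3x3 neighbourhood -- the BPNN input vector above."""
    vals = [img[r][c] for r in range(i - 1, i + 2) for c in range(j - 1, j + 2)]
    mean = sum(vals) / 9.0
    std = (sum((v - mean) ** 2 for v in vals) / 9.0) ** 0.5
    return [img[i][j], mean, std, min(vals), max(vals)]

# A bright (non-flooded) pixel surrounded by dark (flooded) water stands out
# through a large standard deviation and min/max spread.
sar = [[-12.0, -12.0, -12.0],
       [-12.0,  -3.0, -12.0],
       [-12.0, -12.0, -12.0]]
features = neighborhood_features(sar, 1, 1)
```

Isolated single-pixel detections like this one are exactly what the subsequent morphological cleanup step removes or reassigns.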

  8. 75 FR 5536 - Pipeline Safety: Control Room Management/Human Factors, Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts...: Control Room Management/Human Factors, Correction AGENCY: Pipeline and Hazardous Materials Safety... following correcting amendments: PART 192--TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM...

  9. Research on the Multiple Factors Influencing Human Identification Based on Pyroelectric Infrared Sensors

    PubMed Central

    Lou, Ping; Hu, Jianmin

    2018-01-01

Analysis of the multiple factors affecting human identification based on pyroelectric infrared technology is a complex problem. First, we examine various sensed pyroelectric waveforms of the human body's thermal infrared signal and reveal the mechanism affecting human identification. We then find that this mechanism is determined by the distance, the human target, the pyroelectric infrared (PIR) sensor, body type, human moving velocity, the signal modulation mask, and the Fresnel lens. The mapping relationship between the sensed waveform and the multiple influencing factors is established, and a group of mathematical models is deduced that fuses the macro factors and micro factors. Finally, the experimental results show that the macro factors indirectly affect human identification ability based on pyroelectric technology. At the same time, the correctness and effectiveness of the mathematical models are verified, which makes it easier to obtain more pyroelectric infrared information about the human body for discriminating human targets. PMID:29462908

  10. Ω-slow Solutions and Be Star Disks

    NASA Astrophysics Data System (ADS)

    Araya, I.; Jones, C. E.; Curé, M.; Silaj, J.; Cidale, L.; Granada, A.; Jiménez, A.

    2017-09-01

As the disk formation mechanism(s) in Be stars is(are) as yet unknown, we investigate the role of rapidly rotating radiation-driven winds in this process. We implemented the effects of high stellar rotation on m-CAK models, accounting for the shape of the star, the oblate finite disk correction factor, and gravity darkening. For a fast rotating star, we obtain a two-component wind model, i.e., a fast, thin wind at polar latitudes and an Ω-slow, dense wind in the equatorial regions. We use the equatorial mass densities to explore Hα emission profiles for the following scenarios: (1) a spherically symmetric star, (2) an oblate star with constant temperature, and (3) an oblate star with gravity darkening. One result of this work is that we have developed a novel method for solving the gravity-darkened, oblate m-CAK equation of motion. Furthermore, from our modeling we find that (a) the oblate finite disk correction factor, in the scenario considering gravity darkening, can vary by at least a factor of two between the equatorial and polar directions, influencing the velocity profile and mass-loss rate accordingly; (b) the Hα profiles predicted by our model agree with those predicted by a standard power-law model for the following values of the line-force parameters: 1.5 ≲ k ≲ 3, α ≈ 0.6, and δ ≳ 0.1; and (c) the contribution of the fast wind component to the Hα emission line profile is negligible; therefore, the line profiles arise mainly from the equatorial disks of Be stars.

  11. Baryon octet electromagnetic form factors in a confining NJL model

    DOE PAGES

    Carrillo-Serrano, Manuel E.; Bentz, Wolfgang; Cloet, Ian C.; ...

    2016-05-25

    Electromagnetic form factors of the baryon octet are studied using a Nambu–Jona-Lasinio model which utilizes the proper-time regularization scheme to simulate aspects of colour confinement. In addition, the model also incorporates corrections to the dressed quarks from vector meson correlations in the t-channel and the pion cloud. Here, comparison with recent chiral extrapolations of lattice QCD results shows a remarkable level of consistency. For the charge radii we find the surprising result that r_E^p < r_E^(Σ+) and |r_E^n| < |r_E^(Ξ0)|, whereas the magnetic radii have a pattern largely consistent with a naive expectation based on the dressed quark masses.

  12. Doppler broadening of neutron-induced resonances using ab initio phonon spectrum

    NASA Astrophysics Data System (ADS)

    Noguere, G.; Maldonado, P.; De Saint Jean, C.

    2018-05-01

    Neutron resonances observed in neutron cross section data can only be compared with their theoretical analogues after a correct broadening of the resonance widths. This broadening is usually carried out with two different theoretical models, namely the Free Gas Model and the Crystal Lattice Model, which, however, are only applicable under certain assumptions. Here, we use neutron transmission experiments on UO2 samples at T = 23.7 K and T = 293.7 K to investigate the limitations of these models when an ab initio phonon spectrum is introduced in the calculations. Comparisons of the experimental and theoretical transmissions highlight the underestimation of the energy transferred at low temperature and its impact on the accurate determination of the radiation widths Γ_γ of the 238U resonances λ. The observed deficiency of the model provides experimental evidence that the Debye-Waller factor is not correctly calculated at low temperature near the Néel temperature (T_N = 30.8 K).
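The Free Gas Model broadening discussed above can be illustrated with a toy calculation: convolving a 0 K Breit-Wigner resonance with a Gaussian of Doppler width Δ = √(4·E0·kT/A). This is a minimal sketch with made-up resonance parameters, not the paper's 238U evaluation:

```python
import numpy as np

# Toy Free Gas Model broadening: convolve a 0 K Breit-Wigner resonance
# with a Gaussian of Doppler width Delta = sqrt(4*E0*kT/A). Resonance
# parameters are illustrative, not the paper's 238U values.
def lorentzian(e, e0, gamma):
    """0 K Breit-Wigner shape, peak normalised to 1."""
    return (gamma / 2.0) ** 2 / ((e - e0) ** 2 + (gamma / 2.0) ** 2)

def doppler_broaden(e, sigma, e0, kT, A):
    """Smear a cross-section curve with the FGM Gaussian kernel."""
    delta = np.sqrt(4.0 * e0 * kT / A)      # Doppler width, same units as e
    de = e[1] - e[0]
    ke = np.arange(-4.0 * delta, 4.0 * delta + de, de)
    kernel = np.exp(-((ke / delta) ** 2))
    kernel /= kernel.sum()                  # discrete normalisation
    return np.convolve(sigma, kernel, mode="same")

e = np.arange(4.0, 9.0, 0.001)              # energy grid (eV)
cold = lorentzian(e, e0=6.5, gamma=0.05)    # narrow 0 K resonance
warm = doppler_broaden(e, cold, e0=6.5, kT=0.0253, A=238.0)
# Broadening lowers the peak while approximately preserving the area.
```

At room temperature the Doppler width here (~0.05 eV) is comparable to the chosen resonance width, so the peak drops noticeably while the resonance area is conserved.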

  13. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without magnetic field effects taken into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analysis were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found that the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded that the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system.
© 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  14. Attenuation correction for the large non-human primate brain imaging using microPET.

    PubMed

    Naidoo-Variawa, S; Lehnert, W; Kassiou, M; Banati, R; Meikle, S R

    2010-04-21

    Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a (57)Co transmission point source with a 4% energy window. The optimal energy window for a (68)Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for (57)Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [(18)F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass (57)Co (4% energy window) or (68)Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.

  15. Attenuation correction for the large non-human primate brain imaging using microPET

    NASA Astrophysics Data System (ADS)

    Naidoo-Variawa, S.; Lehnert, W.; Kassiou, M.; Banati, R.; Meikle, S. R.

    2010-04-01

    Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a 57Co transmission point source with a 4% energy window. The optimal energy window for a 68Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for 57Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [18F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass 57Co (4% energy window) or 68Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.

  16. Respiratory Health - Exposure Measurements and Modeling in the Fragrance and Flavour Industry.

    PubMed

    Angelini, Eric; Camerini, Gerard; Diop, Malick; Roche, Patrice; Rodi, Thomas; Schippa, Christine; Thomas, Thierry

    2016-01-01

    Although the flavor and fragrance industry is about 150 years old, the use of synthetic materials started more than 100 years ago, and awareness of the respiratory hazard presented by some flavoring substances emerged only recently. In 2001, the US National Institute of Occupational Safety and Health (NIOSH) identified for the first time inhalation exposure to flavoring substances in the workplace as a possible occupational hazard. As a consequence, manufacturers must comply with a variety of workplace safety requirements, and management has to ensure the improvement of the health and safety of employees exposed to hazardous volatile organic compounds. In this sensitive context, MANE opened its facilities to an intensive measurement campaign with the objective of better estimating the real level of hazardous respiratory exposure of workers. In this study, exposure to 27 hazardous volatile substances was measured during several types of handling operations (weighing-mixing, packaging, reconditioning-transferring); the 430 resulting measurements were used to propose an improved model derived from the well-known ECETOC-TRA model. The quantification of volatile substances in the working atmosphere involved three main steps: adsorption of the chemicals on a solid support, thermal desorption, and analysis by gas chromatography-mass spectrometry. Our approach was to examine experimental measurements made in various manufacturing workplaces and to define correction factors that reflect working conditions and habits more accurately. Four correction factors were adjusted in the ECETOC-TRA to integrate important exposure variation factors: exposure duration, percentage of the substance in the composition, presence of collective protective equipment and wearing of personal protective equipment.
Verification of the validity of the model is based on comparing the values obtained after adaptation of the ECETOC-TRA model, according to various exposure scenarios, with the experimental values measured under real conditions. After examination of the predicted results, 98% of the values obtained with the proposed new model were above the experimental values measured in real conditions. This compares with the classical ECETOC-TRA system, which overestimates only 37% of values. As the values generated by the new model are intended to help industry decision-makers implement appropriate protective actions and information, and considering the high variability of the working environments, it was of the utmost importance to us not to underestimate the exposure level. The proposed correction factors have been designed to achieve this goal. We propose the present method as an improved monitoring tool to improve respiratory health and safety in flavor and fragrance manufacturing facilities.
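The four multiplicative correction factors described above can be sketched as a one-line adjustment of a base exposure estimate. All factor names and numerical values below are hypothetical illustrations, not the study's calibrated factors:

```python
# Hypothetical sketch of the four multiplicative correction factors added
# to the ECETOC-TRA estimate described above. Factor names and values are
# illustrative, not the study's calibrated factors.
def corrected_exposure(base_estimate_ppm,
                       f_duration,    # exposure duration
                       f_fraction,    # percentage of substance in the composition
                       f_collective,  # collective protective equipment
                       f_personal):   # personal protective equipment
    """Apply the four multiplicative correction factors to a base estimate."""
    return base_estimate_ppm * f_duration * f_fraction * f_collective * f_personal

# Example: a 10 ppm screening estimate, half-shift exposure, substance is
# 20% of the mix, local ventilation (0.3) and a respirator (0.1):
estimate = corrected_exposure(10.0, 0.5, 0.2, 0.3, 0.1)
```

Because the factors multiply, any single conservative factor keeps the final estimate on the safe (over-estimating) side, which matches the authors' stated goal of never under-predicting exposure.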

  17. Assessment of Cracks in Stress Concentration Regions with Localized Plastic Zones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, E.

    1998-11-25

    Many brittle fracture evaluation procedures include plasticity corrections to elastically computed stress intensity factors. These corrections, which are based on the existence of a plastic zone in the vicinity of the crack tip, can overestimate the plasticity effect for a crack embedded in a stress concentration region in which the elastically computed stress exceeds the yield strength of the material in a localized zone. The interactions between the crack, which acts to relieve the high stresses driving the crack, plasticity effects in the stress concentration region, and the nature and source of the loading are examined by formulating explicit flaw finite element models for a crack emanating from the root of a notch located in a panel subject to an applied tensile stress. The results of these calculations provide conditions under which a crack-tip plasticity correction based on the Irwin plastic zone size overestimates the plasticity effect. A failure assessment diagram (FAD) curve is used to characterize the effect of plasticity on the crack driving force and to define a less restrictive plasticity correction for cracks at notch roots when load-controlled boundary conditions are imposed. The explicit flaw finite element results also demonstrate that stress intensity factors associated with load-controlled boundary conditions, such as those inherent in the ASME Boiler and Pressure Vessel Code as well as in most handbooks of stress intensity factors, can be much higher than those associated with displacement-controlled conditions, such as those that produce residual or thermal stresses. Under certain conditions, the inclusion of plasticity effects for cracks loaded by displacement-controlled boundary conditions reduces the crack driving force, thus justifying the elimination of a plasticity correction for such loadings. The results of this study form the basis for removing unnecessary conservatism from flaw evaluation procedures that utilize plasticity corrections.

  18. Encouraging junior community netball players to learn correct safe landing technique.

    PubMed

    White, Peta E; Ullah, Shahid; Donaldson, Alex; Otago, Leonie; Saunders, Natalie; Romiti, Maria; Finch, Caroline F

    2012-01-01

    Behavioural factors and beliefs are important determinants of the adoption of sports injury interventions. This study aimed to understand behavioural factors associated with junior community netball players' intentions to learn correct landing technique during coach-led training sessions, proposed as a means of reducing their risk of lower limb injury. Cross-sectional survey. 287 female players from 58 junior netball teams in the 2007/2008-summer competition completed a 13-item questionnaire developed from the Theory of Planned Behaviour (TPB). This assessed players' attitudes (four items), subjective norms (four), perceived behavioural control (four) and intentions (one) around the safety behaviour of learning correct landing technique at netball training. All items were rated on a seven-point bipolar scale. Cluster-adjusted logistic regression was used to assess which TPB constructs were most associated with strong intentions. Players had positive intentions and attitudes towards learning safe landing technique and perceived positive social pressure from significant others. They also perceived themselves to have considerable control over engaging (or not) in this behaviour. Players' attitudes (p<0.001) and subjective norms (p<0.001), but not perceived behavioural control (p=0.49), were associated with strong intentions to learn correct landing technique at training. Injury prevention implementation strategies aimed at maximising junior players' participation in correct landing training programs should emphasise the benefits of learning correct landing technique (i.e. change attitudes) and involve significant others and role models whom junior players admire (i.e. capitalise on social norms) in the promotion of such programs. Copyright © 2011 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  19. Musculoskeletal disorder symptoms in correction officers: why do they increase rapidly with job tenure?

    PubMed

    Warren, Nicholas; Dussetschleger, Jeffrey; Punnett, Laura; Cherniack, Martin G

    2015-03-01

    In this study, we sought to explain the rapid musculoskeletal symptomatology increase in correction officers (COs). COs are exposed to levels of biomechanical and psychosocial stressors that have strong associations with musculoskeletal disorders (MSDs) in other occupations, possibly contributing to their rapid health deterioration. Baseline survey data from a longitudinal study of COs and manufacturing line workers were used to model musculoskeletal symptom prevalence and intensity in the upper (UE) and lower (LE) extremity. Outcomes were regressed on demographics and biomechanical and psychosocial exposures. COs reported significantly higher prevalence and intensity of LE symptoms compared to the industrial workers. In regression models, job tenure was a primary driver of CO musculoskeletal outcomes. In CO models, a single biomechanical exposure, head and arms in awkward positions, explained variance in both UE and LE prevalence (β of 0.338 and 0.357, respectively), and low decision latitude was associated with increased LE prevalence and intensity (β of 0.229 and 0.233, respectively). Manufacturing models were less explanatory. Examining demographic associations with exposure intensity, we found none to be significant in manufacturing, but in CO models, important psychosocial exposure levels increased with job tenure. Symptom prevalence and intensity increased more rapidly with job tenure in corrections, compared to manufacturing, and were related to both biomechanical and psychosocial exposures. Tenure-related increases in psychosocial exposure levels may help explain the CO symptom increase. Although exposure assessment improvements are proposed, findings suggest focusing on improving the psychosocial work environment to reduce MSD prevalence and intensity in corrections. © 2014, Human Factors and Ergonomics Society.

  20. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
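The percentile-bootstrap logic underlying these intervals (resample cases with replacement, re-estimate, take empirical quantiles) can be sketched for a plain correlation coefficient standing in for a rotated loading or factor correlation; the data and seed below are illustrative:

```python
import numpy as np

# Percentile-bootstrap sketch: resample cases with replacement,
# re-estimate the statistic, and take empirical quantiles. A plain
# correlation stands in for the rotated loadings/correlations; the data
# and seed are illustrative.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)   # population correlation = 0.6
data = np.column_stack([x, y])

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)          # resample rows with replacement
    boot.append(np.corrcoef(data[idx, 0], data[idx, 1])[0, 1])

lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile interval
```

Bias-corrected and accelerated variants adjust these two quantiles for median bias and skew of the bootstrap distribution, but the resampling step is the same.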

  1. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin / M_Qmsr^fmsr with Monte Carlo (MC) based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators of LGK Perfexion, respectively. Finally, the PTW microDiamond M_Qclin^fclin / M_Qmsr^fmsr for the linear accelerator varied from MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with 1.3% deviation). Given the small resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
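In the Alfonso small-field formalism referenced above, the field output factor equals the detector reading ratio times a correction factor, so the correction factor is their quotient. A minimal sketch, with illustrative numbers rather than the study's measured values:

```python
# Sketch of the small-field (Alfonso) formalism: the field output factor
# Omega equals the detector reading ratio times a correction factor k,
# so k = Omega / (M_clin / M_msr). Numbers are illustrative, not the
# study's measured values.
def correction_factor(omega, m_clin, m_msr):
    """k for a detector whose reading ratio deviates from the true output factor."""
    return omega / (m_clin / m_msr)

# A detector over-responding by ~2% in the small field needs k ~ 0.98:
k = correction_factor(omega=0.680, m_clin=0.694, m_msr=1.000)
```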

  2. A convolution model for obtaining the response of an ionization chamber in static non standard fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Castano, D. M.; Gonzalez, L. Brualla; Gago-Arias, M. A.

    2012-01-15

    Purpose: This work contains an alternative methodology for obtaining correction factors for ionization chamber (IC) dosimetry of small fields and composite fields such as IMRT. The method is based on the convolution/superposition (C/S) of an IC response function (RF) with the dose distribution in a certain plane which includes the chamber position. This method is an alternative to the full Monte Carlo (MC) approach that has been used previously by many authors for the same objective. Methods: The readout of an IC at a point inside a phantom irradiated by a certain beam can be obtained as the convolution of the dose spatial distribution caused by the beam and the IC two-dimensional RF. The proposed methodology has been applied successfully to predict the response of a PTW 30013 IC when measuring different nonreference fields, namely: output factors of 6 MV small fields, beam profiles of cobalt-60 narrow fields and 6 MV radiosurgery segments. The two-dimensional RF of a PTW 30013 IC was obtained by MC simulation of the absorbed dose to cavity air when the IC was scanned by a 0.6 × 0.6 mm² cross section parallel pencil beam at low depth in a water phantom. For each of the cases studied, the results of the IC direct measurement were compared with the corresponding results obtained by the C/S method. Results: For all of the cases studied, the agreement between the IC direct measurement and the IC calculated response was excellent (better than 1.5%). Conclusions: This method could be implemented in a TPS in order to calculate dosimetry correction factors when an experimental IMRT treatment verification with an in-phantom ionization chamber is performed. The mis-response of the IC due to the nonreference conditions could be quickly corrected by this method rather than employing MC derived correction factors. This method can be considered as an alternative to the plan-class associated correction factors proposed recently as part of an IAEA work group on nonstandard field dosimetry.
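The C/S idea in this abstract (reading = dose distribution convolved with a 2-D response function, evaluated at the chamber position) can be sketched with a toy Gaussian RF standing in for the MC-derived PTW 30013 RF; all grids and sizes below are illustrative:

```python
import numpy as np

# Toy version of the C/S method: the chamber reading is the dose
# distribution convolved with a 2-D response function (RF), evaluated at
# the chamber position. The Gaussian RF is a stand-in for the MC-derived
# PTW 30013 RF; all sizes are in arbitrary grid units.
def gaussian_rf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return rf / rf.sum()                      # unit-area RF

def chamber_reading(dose, rf, pos):
    """Convolution of dose with RF, sampled at the chamber position."""
    k = rf.shape[0] // 2
    i, j = pos
    patch = dose[i - k:i + k + 1, j - k:j + k + 1]
    return float((patch * rf[::-1, ::-1]).sum())   # flip kernel: true convolution

rf = gaussian_rf(7, sigma=1.5)

flat = np.full((31, 31), 1.0)                 # broad, flat field
flat_reading = chamber_reading(flat, rf, (15, 15))      # equals local dose

narrow = np.zeros((31, 31))                   # narrow, peaked field
narrow[15, 15] = 1.0
narrow_reading = chamber_reading(narrow, rf, (15, 15))  # volume averaging lowers it
```

In a broad flat field the convolution reproduces the local dose exactly, while in a field narrower than the RF the reading drops, which is exactly the small-field effect the correction factors account for.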

  3. Effects of vegetation heterogeneity and surface topography on spatial scaling of net primary productivity

    NASA Astrophysics Data System (ADS)

    Chen, J. M.; Chen, X.; Ju, W.

    2013-03-01

    Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountain, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably from R2 = 0.782 to R2 = 0.884 after the correction. The mean bias error (MBE) in NPP modeled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modeled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor for NPP spatial scaling especially in mountainous areas, because of its influence on the lateral water redistribution, affecting water table, soil moisture and plant growth. 
Other factors including leaf area index (LAI), elevation and aspect have small and additive effects on improving the spatial scaling between these two resolutions.
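The agreement metrics quoted above (MBE and R²) and the effect of a multiplicative correction can be sketched on synthetic data; the global ratio used below is only a crude stand-in for the paper's per-pixel SSI derived from sub-pixel heterogeneity:

```python
import numpy as np

# Sketch of the agreement metrics above (mean bias error and R^2) on
# synthetic stand-ins for the 30 m "truth" and the biased 1 km estimate.
# The global ratio used as a correction is a simplification of the
# per-pixel spatial scaling index (SSI).
def mbe(estimate, truth):
    """Mean bias error, in the same units as the inputs (g C m-2 yr-1)."""
    return float(np.mean(estimate - truth))

def r_squared(estimate, truth):
    r = np.corrcoef(estimate, truth)[0, 1]
    return float(r * r)

rng = np.random.default_rng(7)
npp_fine = 668.0 + 80.0 * rng.standard_normal(500)            # fine "truth"
npp_coarse = npp_fine + 14.8 + 5.0 * rng.standard_normal(500) # biased coarse

bias_before = mbe(npp_coarse, npp_fine)                # ~ +14.8
correction = np.mean(npp_fine) / np.mean(npp_coarse)   # crude global "SSI"
bias_after = mbe(npp_coarse * correction, npp_fine)    # ~ 0 by construction
```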

  4. Effects of vegetation heterogeneity and surface topography on spatial scaling of net primary productivity

    NASA Astrophysics Data System (ADS)

    Chen, J. M.; Chen, X.; Ju, W.

    2013-07-01

    Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountain, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably from R2 = 0.782 to R2 = 0.884 after the correction. The mean bias error (MBE) in NPP modelled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modelled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor for NPP spatial scaling especially in mountainous areas, because of its influence on the lateral water redistribution, affecting water table, soil moisture and plant growth.
Other factors including leaf area index (LAI) and elevation have small and additive effects on improving the spatial scaling between these two resolutions.

  5. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.

    PubMed

    Itoh, Yuta; Klinker, Gudrun

    2015-04-01

    A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye- and HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen. Each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.

  6. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogeneous Phantom Using Acuros XB and EGSnrc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soh, R; Lee, J; Harianto, F

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermoscientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factors obtained by EGSnrc will be more accurate as it is able to simulate the actual phantom material compositions. AXB has a limited material library; it therefore only approximates the composition of TLD, composite cork and Plastic Water, contributing to uncertainties in TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies where perturbation may be more pronounced.

  7. Advanced Corrections for InSAR Using GPS and Numerical Weather Models

    NASA Astrophysics Data System (ADS)

    Cossu, F.; Foster, J. H.; Amelung, F.; Varugu, B. K.; Businger, S.; Cherubini, T.

    2017-12-01

    We present results from an investigation into the application of numerical weather models for generating tropospheric correction fields for Interferometric Synthetic Aperture Radar (InSAR). We apply the technique to data acquired from a UAVSAR campaign as well as from the CosmoSkyMed satellites. The complex spatial and temporal changes in the atmospheric propagation delay of the radar signal remain the single biggest factor limiting InSAR's potential for hazard monitoring and mitigation. A new generation of InSAR systems is being built and launched, and optimizing the science and hazard applications of these systems requires advanced methodologies to mitigate tropospheric noise. We use the Weather Research and Forecasting (WRF) model to generate a 900 m spatial resolution atmospheric model covering the Big Island of Hawaii and an even higher, 300 m resolution grid over the Mauna Loa and Kilauea volcanoes. By comparing a range of approaches, from the simplest, using reanalyses based on typically available meteorological observations, through to the "kitchen-sink" approach of assimilating all relevant data sets into our custom analyses, we examine the impact of the additional data sets on the atmospheric models and their effectiveness in correcting InSAR data. We focus particularly on the assimilation of information from the more than 60 GPS sites on the island. We ingest zenith tropospheric delay estimates from these sites directly into the WRF analyses, and also perform double-difference tomography using the phase residuals from the GPS processing to robustly incorporate heterogeneous information from the GPS data into the atmospheric models. We assess our performance through comparisons of our atmospheric models with external observations not ingested into the model, and through the effectiveness of the derived phase screens in reducing InSAR variance.
Comparison of the InSAR data, our atmospheric analyses, and assessments of the active local and mesoscale meteorological processes allows us to assess under what conditions the technique works most effectively. This work will produce best-practice recommendations for the use of weather models for InSAR correction, and inform efforts to design a global strategy for the NISAR mission, for both low-latency and definitive atmospheric correction products.
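
    As a concrete sketch of the correction step: once a zenith-total-delay (ZTD) field has been produced by the weather model and interpolated onto the SAR grid, the differential phase screen between two acquisitions can be computed as below. The simple 1/cos(incidence) mapping function and the X-band wavelength default are simplifying assumptions for illustration, not details taken from the study.

```python
import numpy as np

def tropo_phase_screen(ztd_t1, ztd_t2, incidence_rad, wavelength_m=0.031):
    """Differential tropospheric phase screen (radians) from two
    zenith-total-delay fields (metres), e.g. interpolated from a WRF
    analysis onto the SAR grid.

    Assumptions: a simple 1/cos(incidence) mapping from zenith to slant
    delay, and a two-way path (hence the factor 4*pi/lambda). The 3.1 cm
    default wavelength is an X-band value used here for illustration.
    """
    slant_delay_diff = (ztd_t2 - ztd_t1) / np.cos(incidence_rad)  # metres
    return 4.0 * np.pi * slant_delay_diff / wavelength_m          # radians

# Toy example: a 5 cm ZTD change observed at 35 degrees incidence.
phase = tropo_phase_screen(np.array([2.40]), np.array([2.45]), np.radians(35.0))
```

Subtracting such a screen from the unwrapped interferometric phase is the "correction" whose effectiveness the study evaluates via the reduction in InSAR variance.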

  8. Factors Associated With Early Loss of Hallux Valgus Correction.

    PubMed

    Shibuya, Naohiro; Kyprios, Evangelos M; Panchani, Prakash N; Martin, Lanster R; Thorud, Jakob C; Jupiter, Daniel C

    Recurrence is common after hallux valgus corrective surgery. Although many investigators have studied the risk factors associated with a suboptimal hallux position at the end of long-term follow-up, few have evaluated the factors associated with actual early loss of correction. We conducted a retrospective cohort study to identify the predictors of lateral deviation of the hallux during the postoperative period. We evaluated the demographic data, preoperative severity of the hallux valgus, other angular measurements characterizing underlying deformities, amount of hallux valgus correction, and postoperative alignment of the corrected hallux valgus for associations with recurrence. After adjusting for the covariates, the only factor associated with recurrence was the postoperative tibial sesamoid position. The recurrence rate was ~50% and ~60% when the postoperative tibial sesamoid position was >4 and >5 on the 7-point scale, respectively. Published by Elsevier Inc.

  9. Analysis and optimization of surface profile correcting mechanism of the pitch lap in large-aperture annular polishing

    NASA Astrophysics Data System (ADS)

    Zhang, Huifang; Yang, Minghong; Xu, Xueke; Wu, Lunzhe; Yang, Weiguang; Shao, Jianda

    2017-10-01

The surface figure control of a conventional annular polishing system is ordinarily realized through the interaction between the conditioner and the lap. The surface profile of the pitch lap corrected by the marble conditioner has been measured and analyzed as a function of kinematics, loading conditions, and polishing time. Surface profile measuring equipment for the large lap, based on laser alignment, was developed with an accuracy of about 1 μm. The conditioning mechanism of the conditioner is determined simply by the kinematics and the full-fitting principle, but unexpected surface profile deviations of the lap emerge frequently due to numerous influencing factors, including the geometrical relationship and the pressure distribution at the conditioner/lap interface. Both factors are quantitatively evaluated and described, and have been combined into a spatial and temporal model to simulate the surface profile evolution of the pitch lap. The simulations are consistent with the experiments. This study is an important step toward deterministic full-aperture annular polishing, providing beneficial guidance for the surface profile correction of the pitch lap.

  10. Influence of electrolytes in the QCM response: discrimination and quantification of the interference to correct microgravimetric data.

    PubMed

    Encarnação, João M; Stallinga, Peter; Ferreira, Guilherme N M

    2007-02-15

In this work we demonstrate that the presence of electrolytes in solution generates desorption-like transients when the resonance frequency is measured. Using impedance spectroscopy analysis and Butterworth-Van Dyke (BVD) equivalent electrical circuit modeling, we demonstrate that non-Kanazawa responses are obtained in the presence of electrolytes, mainly due to the formation of a diffuse electric double layer (DDL) at the sensor surface, which also causes a capacitor-like signal. We extend the BVD equivalent circuit by including additional parallel capacitances in order to account for this capacitor-like signal. Interfering signals from electrolytes and DDL perturbations were thus discriminated. We further quantified the influence of electrolytes on the sensor resonance frequency as 8.0 +/- 0.5 Hz pF-1 and used this factor to correct the data obtained by frequency-counting measurements. The applicability of this approach is demonstrated by the detection of oligonucleotide sequences. After applying the corrective factor to the frequency-counting data, the mass contribution to the sensor signal yields identical values whether estimated by impedance analysis or by frequency counting.
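
    Applying the quoted corrective factor to frequency-counting data amounts to subtracting the capacitive contribution from the measured shift. A minimal sketch follows; the sign convention and the input numbers are assumptions for illustration, not values from the paper beyond the 8.0 Hz/pF factor itself.

```python
def mass_frequency_shift(delta_f_measured_hz, delta_c_pf, k_hz_per_pf=8.0):
    """Remove the electrolyte/double-layer capacitive contribution from a
    QCM frequency-counting measurement, leaving the mass-related shift.

    k_hz_per_pf is the record's empirical factor (8.0 +/- 0.5 Hz/pF).
    The sign convention (parasitic capacitance adding to the counted
    shift) is an assumption made here for illustration.
    """
    return delta_f_measured_hz - k_hz_per_pf * delta_c_pf

# Hypothetical numbers: -120 Hz counted, 2.5 pF of parasitic capacitance.
corrected = mass_frequency_shift(-120.0, 2.5)  # -> -140.0 Hz mass-related shift
```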

  11. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This elevation difference produces a pressure gradient within the interface tube: the pressure at the bottom of the tube is higher than the pressure at the top due to the weight of the tube's column of air. Tubes at higher pressures exhibit larger absolute errors due to the higher air density. This effect is well documented but has generally been accounted for only when elevation differences are large. With error analysis techniques, the loss in accuracy due to elevation can be easily quantified, and correction factors can be applied to maintain the high accuracy of new pressure measurement systems.
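
    A minimal sketch of such an elevation correction, assuming dry air at a uniform tube temperature with ideal-gas density (the function name and the convention that the sensor sits below the tap are illustrative, not from the paper):

```python
def elevation_correction(p_measured_pa, delta_h_m, temp_k=293.15):
    """Correct an absolute pressure reading for the elevation difference
    between the sensing element and the model pressure tap.

    delta_h_m > 0 means the sensor sits below the tap, so the weight of
    the air column in the tube raises the reading; the hydrostatic term
    rho * g * h is therefore subtracted. Sketch assuming dry air with
    ideal-gas density at a uniform tube temperature.
    """
    R_AIR = 287.05   # J/(kg K), specific gas constant of dry air
    G = 9.80665      # m/s^2, standard gravity
    rho = p_measured_pa / (R_AIR * temp_k)   # density of the tube air
    return p_measured_pa - rho * G * delta_h_m

# Sensor 3 m below the tap at ~1 atm: correction is roughly 35 Pa.
corrected = elevation_correction(101325.0, 3.0)
```

Note that the correction grows with absolute pressure (through the density), which is the "larger absolute errors at higher pressures" effect described above.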

  12. RAYLEIGH–TAYLOR UNSTABLE FLAMES—FAST OR FASTER?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicks, E. P., E-mail: eph2001@columbia.edu

    2015-04-20

Rayleigh–Taylor (RT) unstable flames play a key role in the explosions of supernovae Ia. However, the dynamics of these flames are still not well understood. RT unstable flames are affected by both the RT instability of the flame front and by RT-generated turbulence. The coexistence of these factors complicates the choice of flame speed subgrid models for full-star Type Ia simulations. Both processes can stretch and wrinkle the flame surface, increasing its area and, therefore, the burning rate. In past research, subgrid models have been based on either the RT instability or turbulence setting the flame speed. We evaluate both models, checking their assumptions and their ability to correctly predict the turbulent flame speed. Specifically, we analyze a large parameter study of 3D direct numerical simulations of RT unstable model flames. This study varies both the simulation domain width and the gravity in order to probe a wide range of flame behaviors. We show that RT unstable flames are different from traditional turbulent flames: they are thinner rather than thicker when turbulence is stronger. We also show that none of the several different types of turbulent flame speed models accurately predicts measured flame speeds. In addition, we find that the RT flame speed model only correctly predicts the measured flame speed in a certain parameter regime. Finally, we propose that the formation of cusps may be the factor causing the flame to propagate more quickly than predicted by the RT model.

  13. Rayleigh-Taylor Unstable Flames -- Fast or Faster?

    NASA Astrophysics Data System (ADS)

    Hicks, E. P.

    2015-04-01

    Rayleigh-Taylor (RT) unstable flames play a key role in the explosions of supernovae Ia. However, the dynamics of these flames are still not well understood. RT unstable flames are affected by both the RT instability of the flame front and by RT-generated turbulence. The coexistence of these factors complicates the choice of flame speed subgrid models for full-star Type Ia simulations. Both processes can stretch and wrinkle the flame surface, increasing its area and, therefore, the burning rate. In past research, subgrid models have been based on either the RT instability or turbulence setting the flame speed. We evaluate both models, checking their assumptions and their ability to correctly predict the turbulent flame speed. Specifically, we analyze a large parameter study of 3D direct numerical simulations of RT unstable model flames. This study varies both the simulation domain width and the gravity in order to probe a wide range of flame behaviors. We show that RT unstable flames are different from traditional turbulent flames: they are thinner rather than thicker when turbulence is stronger. We also show that none of the several different types of turbulent flame speed models accurately predicts measured flame speeds. In addition, we find that the RT flame speed model only correctly predicts the measured flame speed in a certain parameter regime. Finally, we propose that the formation of cusps may be the factor causing the flame to propagate more quickly than predicted by the RT model.

  14. 75 FR 81952 - Greenhouse Gas Emissions Standards and Fuel Efficiency Standards for Medium- and Heavy-Duty...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-29

    .... The correct values that should have been used in the document are a factor of 1,018 grams of CO 2 per gallon of diesel for conversion of diesel fuel, and a factor of 8,887 grams of CO 2 per gallon of... Sleeper cab Class 7 Class 8 Class 8 2014 Model Year CO2 Grams per Ton-Mile Low Roof 104 79 65 Mid Roof 104...

  15. Tracheo-bronchial soft tissue and cartilage resonances in the subglottal acoustic input impedance.

    PubMed

    Lulich, Steven M; Arsikere, Harish

    2015-06-01

    This paper offers a re-evaluation of the mechanical properties of the tracheo-bronchial soft tissues and cartilage and uses a model to examine their effects on the subglottal acoustic input impedance. It is shown that the values for soft tissue elastance and cartilage viscosity typically used in models of subglottal acoustics during phonation are not accurate, and corrected values are proposed. The calculated subglottal acoustic input impedance using these corrected values reveals clusters of weak resonances due to soft tissues (SgT) and cartilage (SgC) lining the walls of the trachea and large bronchi, which can be observed empirically in subglottal acoustic spectra. The model predicts that individuals may exhibit SgT and SgC resonances to variable degrees, depending on a number of factors including tissue mechanical properties and the dimensions of the trachea and large bronchi. Potential implications for voice production and large pulmonary airway tissue diseases are also discussed.

  16. Ion radial diffusion in an electrostatic impulse model for stormtime ring current formation

    NASA Technical Reports Server (NTRS)

    Chen, Margaret W.; Schulz, Michael; Lyons, Larry R.; Gorney, David J.

    1992-01-01

    Two refinements to the quasi-linear theory of ion radial diffusion are proposed and examined analytically with simulations of particle trajectories. The resonance-broadening correction by Dungey (1965) is applied to the quasi-linear diffusion theory by Faelthammar (1965) for an individual model storm. Quasi-linear theory is then applied to the mean diffusion coefficients resulting from simulations of particle trajectories in 20 model storms. The correction for drift-resonance broadening results in quasi-linear diffusion coefficients with discrepancies from the corresponding simulated values that are reduced by a factor of about 3. Further reductions in the discrepancies are noted following the averaging of the quasi-linear diffusion coefficients, the simulated coefficients, and the resonance-broadened coefficients for the 20 storms. Quasi-linear theory provides good descriptions of particle transport for a single storm but performs even better in conjunction with the present ensemble-averaging.

  17. Effects of spatial disturbance on common loon nest site selection and territory success

    USGS Publications Warehouse

    McCarthy, K.P.; DeStefano, S.

    2011-01-01

The common loon (Gavia immer) breeds during the summer on northern lakes and water bodies that are also often desirable areas for aquatic recreation and human habitation. In northern New England, we assessed how the spatial nature of disturbance affects common loon nest site selection and territory success. We found through classification and regression analysis that distance to and density of disturbance factors can be used to classify observed nest site locations versus random points, suggesting that these factors affect loon nest site selection (model 1: Correct classification = 75%, null = 50%, K = 0.507, P < 0.001; model 2: Correct classification = 78%, null = 50%, K = 0.551, P < 0.001). However, in an exploratory analysis, we were unable to show a relation between spatial disturbance variables and breeding success (P = 0.595, R² = 0.436), possibly because breeding success was so low during the breeding seasons of 2007-2008. We suggest that by selecting nest site locations that avoid disturbance factors, loons thereby limit the effect that disturbance will have on their breeding success. Still, disturbance may force loons to use sub-optimal nesting habitat, limiting the available number of territories, and overall productivity. We advise that management efforts focus on limiting disturbance factors to allow breeding pairs access to the best nesting territories, relieving disturbance pressures that may force sub-optimal nest placement. © 2011 The Wildlife Society.
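
    The agreement statistic K reported for these classification models is Cohen's kappa, which discounts the chance agreement implied by the 50% null rate. A minimal sketch with toy data (with balanced classes, 75% correct classification gives kappa = 0.5, close to the reported K = 0.507):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for a classification such as nest site vs. random
    point: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from the marginal
    class frequencies.
    """
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    p_e = sum(
        (sum(t == c for t in y_true) / n) * (sum(p == c for p in y_pred) / n)
        for c in labels
    )
    return (p_o - p_e) / (1 - p_e)

# Toy data: balanced classes (50% null rate), 75 of 100 classified correctly.
y_true = [1] * 50 + [0] * 50
y_pred = [1] * 40 + [0] * 10 + [0] * 35 + [1] * 15
kappa = cohens_kappa(y_true, y_pred)  # -> 0.5 (up to floating-point rounding)
```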

  18. Nomograms Predicting Progression-Free Survival, Overall Survival, and Pelvic Recurrence in Locally Advanced Cervical Cancer Developed From an Analysis of Identifiable Prognostic Factors in Patients From NRG Oncology/Gynecologic Oncology Group Randomized Trials of Chemoradiotherapy

    PubMed Central

    Rose, Peter G.; Java, James; Whitney, Charles W.; Stehman, Frederick B.; Lanciano, Rachelle; Thomas, Gillian M.; DiSilvestro, Paul A.

    2015-01-01

Purpose To evaluate the prognostic factors in locally advanced cervical cancer limited to the pelvis and develop nomograms for 2-year progression-free survival (PFS), 5-year overall survival (OS), and pelvic recurrence. Patients and Methods We retrospectively reviewed 2,042 patients with locally advanced cervical carcinoma enrolled onto Gynecologic Oncology Group clinical trials of concurrent cisplatin-based chemotherapy and radiotherapy. Nomograms for 2-year PFS, 5-year OS, and pelvic recurrence were created as visualizations of Cox proportional hazards regression models. The models were validated by bootstrap-corrected, relatively unbiased estimates of discrimination and calibration. Results Multivariable analysis identified prognostic factors including histology, race/ethnicity, performance status, tumor size, International Federation of Gynecology and Obstetrics stage, tumor grade, pelvic node status, and treatment with concurrent cisplatin-based chemotherapy. PFS, OS, and pelvic recurrence nomograms had bootstrap-corrected concordance indices of 0.62, 0.64, and 0.73, respectively, and were well calibrated. Conclusion Prognostic factors were used to develop nomograms for 2-year PFS, 5-year OS, and pelvic recurrence for locally advanced cervical cancer clinically limited to the pelvis treated with concurrent cisplatin-based chemotherapy and radiotherapy. These nomograms can be used to better estimate individual and collective outcomes. PMID:25732170
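
    The bootstrap-corrected concordance indices quoted above measure discrimination: the fraction of comparable patient pairs in which the model assigns the higher risk to the patient with the worse outcome. A minimal sketch for uncensored data follows; real nomogram validation must also handle censoring, which is omitted here.

```python
from itertools import combinations

def concordance_index(times, risk_scores):
    """Concordance index (c-index) for uncensored survival data: the
    fraction of comparable pairs where the higher risk score goes to the
    shorter survival time. Ties in risk count as half-concordant;
    tied times are skipped. Censoring handling is omitted in this sketch.
    """
    num, den = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue  # not comparable in this simplified setting
        den += 1
        shorter, longer = (i, j) if times[i] < times[j] else (j, i)
        if risk_scores[shorter] > risk_scores[longer]:
            num += 1.0
        elif risk_scores[shorter] == risk_scores[longer]:
            num += 0.5
    return num / den

# Hypothetical survival times (months) and model risk scores:
c = concordance_index([12, 30, 7, 60], [0.9, 0.3, 0.8, 0.1])
# 5 of 6 comparable pairs are concordant -> c = 5/6
```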

  19. Supernovae Discovery Efficiency

    NASA Astrophysics Data System (ADS)

    John, Colin

    2018-01-01

Abstract: We present supernova (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search and an important correction factor for SN rates. To achieve an accurate value for efficiency, many supernovae need to be discoverable in surveys; this cannot be achieved with real SNe alone, due to their scarcity, so fake SNe are planted. These fake supernovae, built with realism in mind, yield an understanding of efficiency as a function of brightness and of position relative to other celestial objects. To improve realism, we built a more accurate model of supernovae using a point-spread function. A further improvement is to plant these objects close to galaxies, spanning a range of brightness, magnitude, local galactic brightness, and redshift. Once planted, a very realistic SN is visible and discoverable by the searcher. Exploring the factors that affect detection yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques, and survey strategies, and result in an overall higher likelihood of finding these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. Once efficiency is measured and refined across many surveys, it factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate, we can test models of how long star systems take from inception to explosion (the delay time distribution). This delay time distribution is compared to SN progenitor models to get an accurate idea of what these stars were like before their deaths.
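
    The core efficiency measurement described above reduces to counting recovered fakes per planted fake, typically as a function of magnitude. A minimal sketch (the bin edges and toy data are illustrative):

```python
import numpy as np

def recovery_efficiency(planted_mags, recovered_mask, bins):
    """Detection efficiency as a function of magnitude from planted fakes.

    planted_mags:   magnitudes of all fake SNe injected into the images
    recovered_mask: boolean array, True where the search recovered the fake
    Returns per-bin efficiency (recovered / planted), NaN for empty bins.
    """
    planted, _ = np.histogram(planted_mags, bins=bins)
    recovered, _ = np.histogram(planted_mags[recovered_mask], bins=bins)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(planted > 0, recovered / planted, np.nan)

# Toy data: efficiency drops toward fainter magnitudes.
mags = np.array([22.0, 22.5, 24.1, 24.3, 25.8, 25.9])
found = np.array([True, True, True, False, False, False])
eff = recovery_efficiency(mags, found, bins=[21, 23, 25, 27])
# eff -> [1.0, 0.5, 0.0]
```

The same per-bin efficiencies then enter the rate calculation as a divisor on the observed SN counts.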

  20. Exploration of the factor structure of the Kirton Adaption-Innovation Inventory using bootstrapping estimation.

    PubMed

    Im, Subin; Min, Soonhong

    2013-04-01

    Exploratory factor analyses of the Kirton Adaption-Innovation Inventory (KAI), which serves to measure individual cognitive styles, generally indicate three factors: sufficiency of originality, efficiency, and rule/group conformity. In contrast, a 2005 study by Im and Hu using confirmatory factor analysis supported a four-factor structure, dividing the sufficiency of originality dimension into two subdimensions, idea generation and preference for change. This study extends Im and Hu's (2005) study of a derived version of the KAI by providing additional evidence of the four-factor structure. Specifically, the authors test the robustness of the parameter estimates to the violation of normality assumptions in the sample using bootstrap methods. A bias-corrected confidence interval bootstrapping procedure conducted among a sample of 356 participants--members of the Arkansas Household Research Panel, with middle SES and average age of 55.6 yr. (SD = 13.9)--showed that the four-factor model with two subdimensions of sufficiency of originality fits the data significantly better than the three-factor model in non-normality conditions.
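
    A bias-corrected (BC) percentile bootstrap of the kind used in the study can be sketched as follows. This is a generic BC procedure (without the acceleration term of the fuller BCa method), with illustrative data, not the authors' exact implementation.

```python
import random
import statistics
from statistics import NormalDist

def bc_bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=1):
    """Bias-corrected (BC) percentile bootstrap confidence interval.

    Resample with replacement, compute the statistic each time, then shift
    the percentile cut points by the bias-correction term z0, which
    measures how far the bootstrap distribution sits from the observed
    statistic. Robust to non-normal samples, as in the KAI study.
    """
    rng = random.Random(seed)
    theta_hat = stat(data)
    boots = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    nd = NormalDist()
    prop_below = sum(b < theta_hat for b in boots) / n_boot
    z0 = nd.inv_cdf(min(max(prop_below, 1e-9), 1 - 1e-9))
    z_lo, z_hi = nd.inv_cdf(alpha / 2), nd.inv_cdf(1 - alpha / 2)
    lo = boots[int(nd.cdf(2 * z0 + z_lo) * (n_boot - 1))]
    hi = boots[int(nd.cdf(2 * z0 + z_hi) * (n_boot - 1))]
    return lo, hi

# Illustrative skewed sample; in the study the statistic would be a model
# parameter estimate rather than a mean.
lo, hi = bc_bootstrap_ci([2.1, 2.5, 2.8, 3.0, 3.4, 3.9, 4.2, 5.0])
```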

  1. Factors Influencing the Design, Establishment, Administration, and Governance of Correctional Education for Females

    ERIC Educational Resources Information Center

    Ellis, Johnica; McFadden, Cheryl; Colaric, Susan

    2008-01-01

    This article summarizes the results of a study conducted to investigate factors influencing the organizational design, establishment, administration, and governance of correctional education for females. The research involved interviews with correctional and community college administrators and practitioners representing North Carolina female…

  2. Modelling long-term fire occurrence factors in Spain by accounting for local variations with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; Chuvieco, E.; Koutsias, N.

    2013-02-01

    Humans are responsible for most forest fires in Europe, but anthropogenic factors behind these events are still poorly understood. We tried to identify the driving factors of human-caused fire occurrence in Spain by applying two different statistical approaches. Firstly, assuming stationary processes for the whole country, we created models based on multiple linear regression and binary logistic regression to find factors associated with fire density and fire presence, respectively. Secondly, we used geographically weighted regression (GWR) to better understand and explore the local and regional variations of those factors behind human-caused fire occurrence. The number of human-caused fires occurring within a 25-yr period (1983-2007) was computed for each of the 7638 Spanish mainland municipalities, creating a binary variable (fire/no fire) to develop logistic models, and a continuous variable (fire density) to build standard linear regression models. A total of 383 657 fires were registered in the study dataset. The binary logistic model, which estimates the probability of having/not having a fire, successfully classified 76.4% of the total observations, while the ordinary least squares (OLS) regression model explained 53% of the variation of the fire density patterns (adjusted R2 = 0.53). Both approaches confirmed, in addition to forest and climatic variables, the importance of variables related with agrarian activities, land abandonment, rural population exodus and developmental processes as underlying factors of fire occurrence. For the GWR approach, the explanatory power of the GW linear model for fire density using an adaptive bandwidth increased from 53% to 67%, while for the GW logistic model the correctly classified observations improved only slightly, from 76.4% to 78.4%, but significantly according to the corrected Akaike Information Criterion (AICc), from 3451.19 to 3321.19. 
The results from GWR indicated a significant spatial variation in the local parameter estimates for all the variables and an important reduction of the autocorrelation in the residuals of the GW linear model. Despite the fitting improvement of local models, GW regression, more than an alternative to "global" or traditional regression modelling, seems to be a valuable complement to explore the non-stationary relationships between the response variable and the explanatory variables. The synergy of global and local modelling provides insights into fire management and policy and helps further our understanding of the fire problem over large areas while at the same time recognizing its local character.
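
    The AICc values used above to compare the global and geographically weighted logistic models follow the standard small-sample correction to AIC. A minimal sketch (the log-likelihoods and parameter counts below are hypothetical; for GWR, k would be the effective number of parameters):

```python
def aicc(log_likelihood, n_params, n_obs):
    """Corrected Akaike Information Criterion:
    AICc = -2 ln L + 2k + 2k(k + 1) / (n - k - 1); lower is better.
    """
    k, n = n_params, n_obs
    aic = -2.0 * log_likelihood + 2.0 * k
    return aic + 2.0 * k * (k + 1) / (n - k - 1)

# Hypothetical comparison over the 7638 municipalities: a local model with
# more (effective) parameters wins if its fit improves enough.
global_model = aicc(log_likelihood=-1700.0, n_params=8, n_obs=7638)
local_model = aicc(log_likelihood=-1600.0, n_params=40, n_obs=7638)
```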

  3. Utilizing an Energy Management System with Distributed Resources to Manage Critical Loads and Reduce Energy Costs

    DTIC Science & Technology

    2014-09-01

peak shaving, conducting power factor correction, matching critical load to most efficient distributed resource, and islanding a system during...photovoltaic arrays during islanding, and power factor correction, the implementation of the ESS by itself is likely to prove cost prohibitive. The DOD...

  4. Electroweak Higgs production with HiggsPO at NLO QCD

    NASA Astrophysics Data System (ADS)

    Greljo, Admir; Isidori, Gino; Lindert, Jonas M.; Marzocca, David; Zhang, Hantian

    2017-12-01

We present the HiggsPO UFO model for Monte Carlo event generation of electroweak VH and VBF Higgs production processes at NLO in QCD in the formalism of Higgs pseudo-observables (PO). We illustrate the use of this tool by studying the QCD corrections, matched to a parton shower, for several benchmark points in the Higgs PO parameter space. We find that, while sizable and thus important to include in realistic experimental analyses, the QCD higher-order corrections largely factorize. As an additional finding based on the NLO results, we advocate considering 2D distributions of the two-jet azimuthal-angle difference and the leading-jet p_T for new-physics searches in VBF Higgs production. The HiggsPO UFO model is publicly available.

  5. Percolation in three-dimensional fracture networks for arbitrary size and shape distributions

    NASA Astrophysics Data System (ADS)

    Thovert, J.-F.; Mourzenko, V. V.; Adler, P. M.

    2017-04-01

    The percolation threshold of fracture networks is investigated by extensive direct numerical simulations. The fractures are randomly located and oriented in three-dimensional space. A very wide range of regular, irregular, and random fracture shapes is considered, in monodisperse or polydisperse networks containing fractures with different shapes and/or sizes. The results are rationalized in terms of a dimensionless density. A simple model involving a new shape factor is proposed, which accounts very efficiently for the influence of the fracture shape. It applies with very good accuracy in monodisperse or moderately polydisperse networks, and provides a good first estimation in other situations. A polydispersity index is shown to control the need for a correction, and the corrective term is modelled for the investigated size distributions.

  6. 40 CFR 98.463 - Calculating GHG emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in... = Methane correction factor (fraction). Use the default value of 1 unless there is active aeration of waste... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...

  7. 40 CFR 98.463 - Calculating GHG emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in... = Methane correction factor (fraction). Use the default value of 1 unless there is active aeration of waste... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...

  8. Dual assimilation of satellite soil moisture to improve flood prediction in ungauged catchments

    USDA-ARS?s Scientific Manuscript database

    This paper explores the use of active and passive satellite soil moisture products for improving stream flow prediction within 4 large (>5,000km2) semi-arid catchments. We use the probability distributed model (PDM) under a data-scarce scenario and aim at correcting two key controlling factors in th...

  9. Predicting wildfire behavior in black spruce forests in Alaska.

    Treesearch

    Rodney A. Norum

    1982-01-01

    The current fire behavior system, when properly adjusted, accurately predicts forward rate of spread and flame length of wildfires in black spruce (Picea mariana (Mill.) B.S.P.) forests in Alaska. After fire behavior was observed and quantified, adjustment factors were calculated and assigned to the selected fuel models to correct the outputs to...

  10. Evaluating natural resource amenities in a human life expectancy production function

    Treesearch

    Neelam C. Poudyal; Donald G. Hodges; J.M. Bowker; H.K. Cordell

    2009-01-01

    This study examined the effect of natural resource amenities on human life expectancy. Extending the existing model of the life expectancy production function, and correcting for spatial dependence, we evaluated the determinants of life expectancy using county level data. Results indicate that after controlling for socio-demographic and economic factors, medical...

  11. 40 CFR 98.343 - Calculating GHG emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... potential (metric tons CH4/metric ton waste) = MCF × DOC × DOCF × F × 16/12. MCF = Methane correction factor... = Methane emissions from the landfill in the reporting year (metric tons CH4). GCH 4 = Modeled methane...). Emissions = Methane emissions from the landfill in the reporting year (metric tons CH4). R = Quantity of...
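
    The quoted methane generation potential formula can be evaluated directly. In the sketch below, only MCF = 1 reflects the rule's stated default (no active aeration of waste); the other input values are illustrative, not the regulation's defaults.

```python
def methane_generation_potential(mcf, doc, doc_f, f):
    """Methane generation potential (metric tons CH4 per metric ton waste)
    per the formula quoted above: Lo = MCF x DOC x DOC_F x F x 16/12,
    where 16/12 converts carbon mass to CH4 mass.
    """
    return mcf * doc * doc_f * f * 16.0 / 12.0

# MCF = 1 is the rule's default without active aeration; DOC, DOC_F and F
# below are purely illustrative fractions.
lo = methane_generation_potential(mcf=1.0, doc=0.20, doc_f=0.5, f=0.5)
# -> ~0.067 t CH4 / t waste
```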

  12. Digital particle image velocimetry measurements of the downwash distribution of a desert locust Schistocerca gregaria

    PubMed Central

    Bomphrey, Richard J; Taylor, Graham K; Lawson, Nicholas J; Thomas, Adrian L.R

    2005-01-01

    Actuator disc models of insect flight are concerned solely with the rate of momentum transfer to the air that passes through the disc. These simple models assume that an even pressure is applied across the disc, resulting in a uniform downwash distribution. However, a correction factor, k, is often included to correct for the difference in efficiency between the assumed even downwash distribution, and the real downwash distribution. In the absence of any empirical measurements of the downwash distribution behind a real insect, the values of k used in the literature have been necessarily speculative. Direct measurement of this efficiency factor is now possible, and could be used to compare the relative efficiencies of insect flight across the Class. Here, we use Digital Particle Image Velocimetry to measure the instantaneous downwash distribution, mid-downstroke, of a tethered desert locust (Schistocerca gregaria). By integrating the downwash distribution, we are thereby able to provide the first direct empirical measurement of k for an insect. The measured value of k=1.12 corresponds reasonably well with that predicted by previous theoretical studies. PMID:16849240
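
    One way to obtain k from a measured downwash distribution is via the actuator-disc momentum argument: induced power scales with the area-average of w³, while an ideal uniform distribution producing the same thrust (which scales with the average of w²) sets the baseline, giving k ≥ 1 with equality for uniform downwash. The formulation below is a textbook-style assumption for illustration, not necessarily the exact integral used by the authors.

```python
import numpy as np

def induced_power_factor(w, area_weights=None):
    """Induced-power (downwash non-uniformity) factor k from sampled
    downwash values w over the actuator disc.

    Sketch under a simple momentum-theory assumption: induced power goes
    as <w**3>, the ideal uniform distribution with equal thrust gives
    <w**2>**1.5, so k = <w**3> / <w**2>**1.5 (area-weighted averages).
    k = 1 for uniform downwash and k > 1 otherwise.
    """
    w = np.asarray(w, dtype=float)
    mean = lambda x: np.average(x, weights=area_weights)
    return mean(w**3) / mean(w**2) ** 1.5

k_uniform = induced_power_factor([2.0, 2.0, 2.0, 2.0])     # exactly 1.0
k_nonuniform = induced_power_factor([1.0, 2.0, 3.0, 2.0])  # > 1
```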

  13. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons), and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  14. Alpharetroviral Vector-mediated Gene Therapy for X-CGD: Functional Correction and Lack of Aberrant Splicing

    PubMed Central

    Kaufmann, Kerstin B.; Brendel, Christian; Suerth, Julia D.; Mueller-Kuller, Uta; Chen-Wichmann, Linping; Schwäble, Joachim; Pahujani, Shweta; Kunkel, Hana; Schambach, Axel; Baum, Christopher; Grez, Manuel

    2013-01-01

Comparative integrome analysis has revealed that the most neutral integration pattern among retroviruses is attributed to alpharetroviruses. We chose X-linked chronic granulomatous disease (X-CGD) as a model to evaluate the potential of self-inactivating (SIN) alpharetroviral vectors for gene therapy of monogenic diseases. Therefore, we combined the alpharetroviral vector backbone with the elongation factor-1α short promoter, both considered to possess a low genotoxic profile, to drive transgene (gp91phox) expression. Following efficient transduction, transgene expression was sustained and provided functional correction of the CGD phenotype in a cell line model at low vector copy number. Further analysis in a murine X-CGD transplantation model revealed gene-marking of bone marrow cells and oxidase-positive granulocytes in peripheral blood. Transduction of human X-CGD CD34+ cells provided functional correction up to wild-type levels and long-term expression upon transplantation into a humanized mouse model. In contrast to lentiviral vectors, no aberrantly spliced transcripts containing cellular exons fused to alpharetroviral sequences were found in transduced cells, implying that the safety profile of alpharetroviral vectors may extend beyond their neutral integration profile. Taken together, this highlights the potential of this SIN alpharetroviral system as a platform for new candidate vectors for future gene therapy of hematopoietic disorders. PMID:23207695

  15. Multimaterial 3D printing preoperative planning for frontoethmoidal meningoencephalocele surgery.

    PubMed

    Coelho, Giselle; Chaves, Thailane Marie Feitosa; Goes, Ademil Franco; Del Massa, Emilio C; Moraes, Osmar; Yoshida, Maurício

    2018-04-01

    Surgical correction of frontoethmoidal meningoencephalocele, although rare, is still challenging to neurosurgeons and plastic reconstructive surgeons. It is fundamental to establish reliable and safe surgical techniques. The twenty-first century has brought great advances in medical technology, and 3D models can mimic the correct tridimensional anatomical relations of a tissue, organ or body part. They allow both tactile and spatial understanding of the lesion and organ involved. 3D printing technology allows preparation for a specific surgery ahead of time, planning the surgical approach and developing plans to deal with uncommon and high-risk intraoperative scenarios. The present report describes a case of frontoethmoidal encephalocele (nasofrontal subtype) in a 19-month-old girl, whose surgical correction was planned using 3D printing modeling. The 3D model allowed a detailed discussion of the aspects of the surgical approach by having tissues of different consistencies and resistances, and also by predicting with millimetric precision the bilateral orbitotomy measurements. Moreover, it was a fundamental and valuable factor in the multidisciplinary preoperative discussion. This approach allowed reducing the time of surgery, accurately planning the location of the osteotomies and precontouring the osteosynthesis material. 3D models can be very helpful tools in planning complex craniofacial operative procedures.

  16. A Maximum-Likelihood Method to Correct for Allelic Dropout in Microsatellite Data with No Replicate Genotypes

    PubMed Central

    Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.

    2012-01-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. 
Because the data sets imputed under our model can be investigated in additional subsequent analyses, our method will be useful for preparing data for applications in diverse contexts in population genetics and molecular ecology. PMID:22851645
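The downward bias in observed heterozygosity can be illustrated with a toy moment-based correction (this is not the authors' maximum-likelihood/EM method; it assumes each allelic copy drops out independently with a known rate and ignores missing-data effects):

```python
def corrected_heterozygosity(h_obs, dropout_rate):
    """Toy dropout correction: a true heterozygote is scored as
    heterozygous only when neither allelic copy drops out, so
    H_obs = H_true * (1 - d)**2, and we invert that relation."""
    return h_obs / (1.0 - dropout_rate) ** 2

# With a 10% per-copy dropout rate, an observed heterozygosity of
# 0.405 implies a true heterozygosity of 0.405 / 0.9**2 = 0.5.
h_true = corrected_heterozygosity(0.405, 0.1)
```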

  17. Low Speed and High Speed Correlation of SMART Active Flap Rotor Loads

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi B. R.

    2010-01-01

    Measured, open loop and closed loop data from the SMART rotor test in the NASA Ames 40- by 80- Foot Wind Tunnel are compared with CAMRAD II calculations. One open loop high-speed case and four closed loop cases are considered. The closed loop cases include three high-speed cases and one low-speed case. Two of these high-speed cases include a 2 deg flap deflection at 5P case and a test maximum-airspeed case. This study follows a recent, open loop correlation effort that used a simple correction factor for the airfoil pitching moment Mach number. Compared to the earlier effort, the current open loop study considers more fundamental corrections based on advancing blade aerodynamic conditions. The airfoil tables themselves have been studied. Selected modifications to the HH-06 section flap airfoil pitching moment table are implemented. For the closed loop condition, the effect of the flap actuator is modeled by increased flap hinge stiffness. Overall, the open loop correlation is reasonable, thus confirming the basic correctness of the current semi-empirical modifications; the closed loop correlation is also reasonable considering that the current flap model is a first generation model. Detailed correlation results are given in the paper.

  18. Estimating long-run equilibrium real exchange rates: short-lived shocks with long-lived impacts on Pakistan.

    PubMed

    Zardad, Asma; Mohsin, Asma; Zaman, Khalid

    2013-12-01

    The purpose of this study is to investigate the factors that affect real exchange rate volatility in Pakistan through cointegration and error correction modeling over a 30-year period, 1980 to 2010. The study employed autoregressive conditional heteroskedasticity (ARCH), generalized autoregressive conditional heteroskedasticity (GARCH) and vector error correction (VECM) models to estimate changes in the volatility of the real exchange rate series, while an error correction model was used to determine the short-run dynamics of the system. The study is limited to a few variables, i.e., the productivity differential (real GDP per capita relative to the main trading partner), terms of trade, trade openness and government expenditures, in order to keep the data robust. The results indicate that the real effective exchange rate (REER) has been volatile around its equilibrium level, while the speed of adjustment is relatively slow. VECM results confirm long-run convergence of the real exchange rate towards its equilibrium level. Results from the ARCH and GARCH estimation show that the volatility of real shocks persists, so that shocks die out rather slowly, and lasting misalignment seems to have occurred.
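The error-correction mechanism the abstract relies on can be sketched numerically (a generic single-equation ECM on simulated data with a known cointegrating slope, not the paper's full VECM/GARCH specification): the change in the series responds to last period's deviation from long-run equilibrium.

```python
import numpy as np

rng = np.random.default_rng(0)

beta, alpha_true = 1.5, -0.4           # cointegrating slope, adjustment speed
x = np.cumsum(rng.normal(size=500))    # random-walk driver (e.g. fundamentals)
y = np.empty_like(x)
y[0] = beta * x[0]
for t in range(1, len(x)):
    ect = y[t - 1] - beta * x[t - 1]   # deviation from long-run equilibrium
    y[t] = y[t - 1] + alpha_true * ect + rng.normal(scale=0.1)

# Estimate the adjustment speed by OLS on dy_t = alpha * ect_{t-1} + e_t
dy = np.diff(y)
ect_lag = y[:-1] - beta * x[:-1]
alpha_hat = np.linalg.lstsq(ect_lag[:, None], dy, rcond=None)[0][0]
```

A negative, significant `alpha_hat` is the usual evidence that the series reverts towards its equilibrium level, and its magnitude measures the speed of adjustment.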

  19. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during the follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behavior of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during the follow-up and the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during the follow-up. Thoracic curves in group B deteriorated after spontaneous correction, with a positive correction loss. Univariate and multivariate analyses were performed to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately, a spontaneous correction of 48.5%. At final follow-up it was 14°, a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in the 23 cases in group A, while 22 cases were identified with thoracic curve progression in group B. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and a better immediate spontaneous correction rate of the thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during the follow-up, and close attention must be paid to these patients to detect decompensation. 
Level of Evidence: 4 PMID:27831989

  20. COMPARISON OF EXPERIMENTS TO CFD MODELS FOR MIXING USING DUAL OPPOSING JETS IN TANKS WITH AND WITHOUT INTERNAL OBSTRUCTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leishear, R.; Poirier, M.; Lee, S.

    2012-06-26

    This paper documents testing methods, statistical data analysis, and a comparison of experimental results to CFD models for the blending of fluids, which were blended using a single pump designed with dual opposing nozzles in an eight-foot-diameter tank. Overall, this research presents new findings in the field of mixing research. Specifically, blending processes were clearly shown to have random, chaotic effects, where possible causal factors such as turbulence, pump fluctuations, and eddies require future evaluation. CFD models were shown to provide reasonable estimates for the average blending times, but large variations -- or scatter -- occurred for blending times during similar tests. Using this experimental blending time data, the chaotic nature of blending was demonstrated, and the variability of blending times with respect to average blending times was shown to increase with system complexity. Prior to this research, the variation in blending times caused discrepancies between CFD models and experiments. This research addressed this discrepancy and determined statistical correction factors that can be applied to CFD models, thereby quantifying techniques that permit the application of CFD models to complex systems, such as blending. These blending time correction factors for CFD models are comparable to safety factors used in structural design, and compensate for variability that cannot be theoretically calculated. To determine these correction factors, research was performed to investigate blending, using a pump with dual opposing jets which re-circulate fluids in the tank to promote blending when fluids are added to the tank. In all, eighty-five tests were performed, both in a tank without internal obstructions and in a tank with vertical obstructions similar to a tube bank in a heat exchanger. These obstructions provided scale models of vertical cooling coils below the liquid surface for a full-scale, liquid radioactive waste storage tank. 
Also, different jet diameters and different horizontal orientations of the jets were investigated with respect to blending. Two types of blending tests were performed. The first set of eighty-one tests blended small quantities of tracer fluids into solution. Data from these tests were statistically evaluated to determine blending times for the addition of tracer solution to tanks, and blending times were successfully compared to Computational Fluid Dynamics (CFD) models. The second set of four tests blended bulk quantities of solutions of different density and viscosity. For example, in one test a quarter tank of water was added to three quarters of a tank of a more viscous salt solution. In this case, the blending process was noted to change significantly due to stratification of the fluids, and blending times increased substantially. However, CFD models for stratification and for the variability of blending times for different-density fluids were not pursued, and further research is recommended in the area of blending bulk quantities of fluids. All in all, testing showed that CFD models can be effectively applied if statistically validated through experimental testing, but in the absence of experimental validation CFD models can be extremely misleading as a basis for design and operation decisions.

  1. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    NASA Astrophysics Data System (ADS)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers should strictly adhere to the rules of extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for the 2-, 20- and 100-year streamflow events in the wet temperate region studied. The variance of the maxima projections was dominated by climate model factors and extreme value analyses.
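The relative-change (delta-change) correction discussed above can be sketched in a few lines; the numbers are illustrative:

```python
def delta_change_forecast(observed_baseline, control_sim, future_sim):
    """Scale the observed baseline by the model's relative change
    (future / control); a multiplicative model bias that is the same
    in both periods cancels in the ratio."""
    return observed_baseline * (future_sim / control_sim)

# A model biased high by 20% in both periods still yields the correct
# +30% change when applied to an observed baseline of 100:
forecast = delta_change_forecast(100.0, 120.0, 156.0)
```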

  2. Modeling drivers of phosphorus loads in Chesapeake Bay tributaries and inferences about long-term change

    USGS Publications Warehouse

    Ryberg, Karen R.; Blomquist, Joel; Sprague, Lori A.; Sekellick, Andrew J.; Keisman, Jennifer

    2018-01-01

    Causal attribution of changes in water quality often consists of correlation, qualitative reasoning, listing references to the work of others, or speculation. To better support statements of attribution for water-quality trends, structural equation modeling was used to model the causal factors of total phosphorus loads in the Chesapeake Bay watershed. By transforming, scaling, and standardizing variables, grouping similar sites, grouping some causal factors into latent variable models, and using methods that correct for assumption violations, we developed a structural equation model to show how causal factors interact to produce total phosphorus loads. Climate (in the form of annual total precipitation and the Palmer Hydrologic Drought Index) and anthropogenic inputs are the major drivers of total phosphorus load in the Chesapeake Bay watershed. Increasing runoff due to natural climate variability is offsetting purposeful management actions that are otherwise decreasing phosphorus loading; consequently, management actions may need to be reexamined to achieve target reductions in the face of climate variability.

  3. Learning versus correct models: influence of model type on the learning of a free-weight squat lift.

    PubMed

    McCullagh, P; Meyer, K N

    1997-03-01

    It has been assumed that demonstrating the correct movement is the best way to impart task-relevant information. However, empirical verification with simple laboratory skills has shown that using a learning model (showing an individual in the process of acquiring the skill to be learned) may accelerate skill acquisition and increase retention more than using a correct model. The purpose of the present study was to compare the effectiveness of viewing correct versus learning models on the acquisition of a sport skill (free-weight squat lift). Forty female participants were assigned to four learning conditions: physical practice receiving feedback, learning model with model feedback, correct model with model feedback, and learning model without model feedback. Results indicated that viewing either a correct or learning model was equally effective in learning correct form in the squat lift.

  4. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper looks at the development and implementation of the additional secondary phase factor (ASF) real-time correction system for the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system and the propagation characteristics of the AIS signal that transmits at sea and the ASF real-time correction algorithm of the AIS signal were analyzed and verified. Accounting for the different hardware of the receivers in the land-based positioning system and the variation of the actual environmental factors, the ASF correction system corrects original measurements of positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  5. Comparison of silver and molybdenum microfocus X-ray sources for single-crystal structure determination.

    PubMed

    Krause, Lennard; Herbst-Irmer, Regine; Sheldrick, George M; Stalke, Dietmar

    2015-02-01

    The quality of diffraction data obtained using silver and molybdenum microsources has been compared for six model compounds with a wide range of absorption factors. The experiments were performed on two 30 W air-cooled Incoatec IµS microfocus sources with multilayer optics mounted on a Bruker D8 goniometer with a SMART APEX II CCD detector. All data were analysed, processed and refined using standard Bruker software. The results show that Ag Kα radiation can be beneficial when heavy elements are involved. A numerical absorption correction based on the positions and indices of the crystal faces is shown to be of limited use for the highly focused microsource beams, presumably because the assumption that the crystal is completely bathed in a (top-hat profile) beam of uniform intensity is no longer valid. Fortunately the empirical corrections implemented in SADABS, although originally intended as a correction for absorption, also correct rather well for the variations in the effective volume of the crystal irradiated. In three of the cases studied (two Ag and one Mo) the final SHELXL R1 against all data after application of empirical corrections implemented in SADABS was below 1%. Since such corrections are designed to optimize the agreement of the intensities of equivalent reflections with different paths through the crystal but the same Bragg 2θ angles, a further correction is required for the 2θ dependence of the absorption. For this, SADABS uses the transmission factor of a spherical crystal with a user-defined value of μr (where μ is the linear absorption coefficient and r is the effective radius of the crystal); the best results are obtained when r is biased towards the smallest crystal dimension. The results presented here suggest that the IUCr publication requirement that a numerical absorption correction must be applied for strongly absorbing crystals is in need of revision.

  6. Joint pricing and production management: a geometric programming approach with consideration of cubic production cost function

    NASA Astrophysics Data System (ADS)

    Sadjadi, Seyed Jafar; Hamidi Hesarsorkh, Aghil; Mohammadi, Mehdi; Bonyadi Naeini, Ali

    2015-06-01

    Coordination and harmony between the different departments of a company can be an important factor in achieving competitive advantage if the company achieves alignment between the strategies of those departments. This paper presents an integrated decision model based on recent advances in the geometric programming technique. The demand for a product is modeled as a power function of factors such as the product's price, marketing expenditures, and consumer service expenditures. Furthermore, production cost is modeled as a cubic function of output. The model is solved using recent advances in convex optimization tools. Finally, the solution procedure is illustrated by a numerical example.
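A numerical sketch of the model's two ingredients, with hypothetical coefficients (the paper solves the resulting geometric program; here we only evaluate the functions):

```python
def demand(p, m, s, k=10000.0, a=1.8, b=0.15, c=0.1):
    """Power-function demand in price p, marketing m and service s.
    (All coefficients are hypothetical.)"""
    return k * p ** -a * m ** b * s ** c

def production_cost(q, c3=1e-6, c2=0.01, c1=2.0, c0=50.0):
    """Cubic production-cost function of output q (coefficients hypothetical)."""
    return c3 * q ** 3 + c2 * q ** 2 + c1 * q + c0

def profit(p, m, s):
    """Revenue minus production cost, marketing and service spend."""
    q = demand(p, m, s)
    return p * q - production_cost(q) - m - s
```

The demand exponents keep the objective a ratio of posynomials, which is what makes geometric-programming machinery applicable.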

  7. Quantifying cancer cell receptors with paired-agent fluorescent imaging: a novel method to account for tissue optical property effects

    NASA Astrophysics Data System (ADS)

    Sadeghipour, Negar; Davis, Scott C.; Tichauer, Kenneth M.

    2018-02-01

    Dynamic fluorescence imaging approaches can be used to estimate the concentration of cell surface receptors in vivo. Kinetic models generate the final estimate by taking the targeted imaging agent concentration as a function of time. However, tissue absorption and scattering properties cause the final readout signal to be on a different scale than the real fluorescent agent concentration. In paired-agent imaging approaches, simultaneous injection of a suitable control imaging agent with a targeted one can account for non-specific uptake and retention of the targeted agent. Additionally, the signal from the control agent can serve as a normalizing factor to correct for tissue optical property differences. In this study, the kinetic model used for paired-agent imaging analysis (i.e., the simplified reference tissue model) is modified and tested on simulated and experimental data in a way that accounts for the scaling correction within the kinetic model fit to the data, to ultimately extract an estimate of the targeted biomarker concentration.
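The normalization argument can be reduced to a one-line sketch (this static ratio is a simplification; the study fits a time-resolved kinetic model):

```python
def binding_excess(targeted_signal, control_signal):
    """Ratio-based readout: an optical attenuation factor that multiplies
    both channels equally cancels in the ratio, leaving the targeted
    agent's specific-binding excess over the control agent."""
    return targeted_signal / control_signal - 1.0

# Same underlying binding imaged through two different tissue attenuations
# (values hypothetical); the readout is unchanged:
low_atten, high_atten = 0.7, 0.3
readout_a = binding_excess(low_atten * 1.5, low_atten * 1.0)
readout_b = binding_excess(high_atten * 1.5, high_atten * 1.0)
```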

  8. Hydrological impacts of climate change on the Tejo and Guadiana Rivers

    NASA Astrophysics Data System (ADS)

    Kilsby, C. G.; Tellier, S. S.; Fowler, H. J.; Howels, T. R.

    2007-05-01

    A distributed daily rainfall-runoff model is applied to the Tejo and Guadiana river basins in Spain and Portugal to simulate the effects of climate change on runoff production, river flows and water resource availability, with results aggregated to the monthly level. The model is calibrated, validated and then used for a series of climate change impact assessments for the period 2070–2100. Future scenarios are derived from the HadRM3H regional climate model (RCM) using two techniques: first, bias-corrected RCM output, with monthly mean correction factors calculated from observed rainfall records; and second, a circulation-pattern-based stochastic rainfall model. Major reductions in rainfall and streamflow are projected throughout the year; these results differ from those of previous studies, where winter increases were projected. Despite uncertainties in the representation of heavily managed river systems, the projected impacts are serious and pose major threats to the maintenance of bipartite water treaties between Spain and Portugal and the supply of water to urban and rural regions of Portugal.
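The first technique, monthly mean correction factors, can be sketched in a few lines of NumPy (all values are illustrative):

```python
import numpy as np

def monthly_correction_factors(obs_monthly_means, model_monthly_means):
    """One multiplicative factor per calendar month: observed mean
    divided by the model's mean over the same control period."""
    return np.asarray(obs_monthly_means) / np.asarray(model_monthly_means)

def apply_correction(model_series, month_index, factors):
    """Scale each value of the model series by its month's factor."""
    return np.asarray(model_series, dtype=float) * factors[np.asarray(month_index)]

# Two months: the model is wet-biased in month 0 and dry-biased in month 1.
factors = monthly_correction_factors([50.0, 80.0], [100.0, 40.0])
corrected = apply_correction([10.0, 10.0], [0, 1], factors)
```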

  9. On the distortion of elevation dependent warming signals by quantile mapping

    NASA Astrophysics Data System (ADS)

    Jury, Martin W.; Mendlik, Thomas; Maraun, Douglas

    2017-04-01

    Elevation dependent warming (EDW), the amplification of warming with elevation under climate change, is likely to accelerate changes in, e.g., cryospheric and hydrological systems. Responsible for EDW is a mixture of processes including the snow albedo feedback, cloud formation and the location of aerosols. The degree to which these processes are incorporated varies across state-of-the-art climate models. In a recent study we were preparing bias-corrected model output of CMIP5 GCMs and CORDEX RCMs over the Himalayan region for the glacier modelling community. In a first attempt we used quantile mapping (QM) to generate this data. A prior model evaluation showed that more than two thirds of the 49 included climate models were able to reproduce the positive trend differences between areas of higher and lower elevation in winter that are clearly visible in all five of the observational datasets used. Regrettably, we noticed that the models' elevation-dependent trend signals were distorted by the bias correction, most of the time in the direction of less EDW, sometimes even reversing EDW signals present in the models before the correction. As a consequence, we refrained from using quantile mapping for our task, as EDW is one important factor influencing the climate at high altitudes in the nearer and more distant future, and used a climate-change-signal-preserving bias correction approach instead. Here we present our findings on the distortion of the EDW temperature change signal by QM and discuss the influence of QM on different statistical properties as well as their modifications.
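For reference, empirical quantile mapping in its simplest form looks as follows (a sketch only; operational implementations differ in detail). Because the transfer function is estimated from a calibration period and applied pointwise, it can reshape trends in the corrected series, which is related to the distortion discussed above:

```python
import numpy as np

def quantile_map(values, model_ref, obs_ref):
    """Empirical quantile mapping: locate each value's quantile in the
    model reference distribution, then return the observed value at
    the same quantile."""
    model_ref, obs_ref = np.sort(model_ref), np.sort(obs_ref)
    p = np.interp(values, model_ref, np.linspace(0.0, 1.0, len(model_ref)))
    return np.interp(p, np.linspace(0.0, 1.0, len(obs_ref)), obs_ref)

# A uniform +2 model bias is removed:
model_ref = np.arange(100.0)
obs_ref = model_ref - 2.0
mapped = quantile_map(np.array([10.0, 50.0]), model_ref, obs_ref)
```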

  10. Analysis of interacting entropy-corrected holographic and new agegraphic dark energies

    NASA Astrophysics Data System (ADS)

    Ranjit, Chayan; Debnath, Ujjal

    In the present work, we assume a flat FRW universe filled with dark matter and dark energy, which interact with each other. For the dark energy, we consider the entropy-corrected holographic dark energy (ECHDE) model and the entropy-corrected new agegraphic dark energy (ECNADE) model. For the entropy-corrected models, we assume logarithmic and power-law corrections. For the ECHDE model, the length scale L is taken to be the Hubble horizon and the future event horizon. The ωde–ωde′ analysis for the different horizons is discussed.

  11. Reversal of Phenotypic Abnormalities by CRISPR/Cas9-Mediated Gene Correction in Huntington Disease Patient-Derived Induced Pluripotent Stem Cells.

    PubMed

    Xu, Xiaohong; Tay, Yilin; Sim, Bernice; Yoon, Su-In; Huang, Yihui; Ooi, Jolene; Utami, Kagistia Hana; Ziaei, Amin; Ng, Bryan; Radulescu, Carola; Low, Donovan; Ng, Alvin Yu Jin; Loh, Marie; Venkatesh, Byrappa; Ginhoux, Florent; Augustine, George J; Pouladi, Mahmoud A

    2017-03-14

    Huntington disease (HD) is a dominant neurodegenerative disorder caused by a CAG repeat expansion in HTT. Here we report correction of HD human induced pluripotent stem cells (hiPSCs) using a CRISPR-Cas9 and piggyBac transposon-based approach. We show that both HD and corrected isogenic hiPSCs can be differentiated into excitable, synaptically active forebrain neurons. We further demonstrate that phenotypic abnormalities in HD hiPSC-derived neural cells, including impaired neural rosette formation, increased susceptibility to growth factor withdrawal, and deficits in mitochondrial respiration, are rescued in isogenic controls. Importantly, using genome-wide expression analysis, we show that a number of apparent gene expression differences detected between HD and non-related healthy control lines are absent between HD and corrected lines, suggesting that these differences are likely related to genetic background rather than HD-specific effects. Our study demonstrates correction of HD hiPSCs and associated phenotypic abnormalities, and the importance of isogenic controls for disease modeling using hiPSCs. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. Simple statistical bias correction techniques greatly improve moderate resolution air quality forecast at station level

    NASA Astrophysics Data System (ADS)

    Curci, Gabriele; Falasca, Serena

    2017-04-01

    Deterministic air quality forecast is routinely carried out at many local Environmental Agencies in Europe and throughout the world by means of eulerian chemistry-transport models. The skill of these models in predicting the ground-level concentrations of relevant pollutants (ozone, nitrogen dioxide, particulate matter) a few days ahead has greatly improved in recent years, but it is not yet always compliant with the required quality level for decision making (e.g. the European Commission has set a maximum uncertainty of 50% on daily values of relevant pollutants). Post-processing of deterministic model output is thus still regarded as a useful tool to make the forecast more reliable. In this work, we test several bias correction techniques applied to a long-term dataset of air quality forecasts over Europe and Italy. We used the WRF-CHIMERE modelling system, which provides operational experimental chemical weather forecast at CETEMPS (http://pumpkin.aquila.infn.it/forechem/), to simulate the years 2008-2012 at low resolution over Europe (0.5° x 0.5°) and moderate resolution over Italy (0.15° x 0.15°). We compared the simulated dataset with available observation from the European Environmental Agency database (AirBase) and characterized model skill and compliance with EU legislation using the Delta tool from FAIRMODE project (http://fairmode.jrc.ec.europa.eu/). The bias correction techniques adopted are, in order of complexity: (1) application of multiplicative factors calculated as the ratio of model-to-observed concentrations averaged over the previous days; (2) correction of the statistical distribution of model forecasts, in order to make it similar to that of the observations; (3) development and application of Model Output Statistics (MOS) regression equations. We illustrate differences and advantages/disadvantages of the three approaches. All the methods are relatively easy to implement for other modelling systems.
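The simplest of the three approaches, (1), amounts to a running multiplicative factor (values below are illustrative):

```python
import numpy as np

def running_bias_factor(obs_previous_days, model_previous_days):
    """Technique (1): ratio of mean observed to mean modelled
    concentration over the preceding days."""
    return np.mean(obs_previous_days) / np.mean(model_previous_days)

obs = [40.0, 50.0, 60.0]    # observed concentrations on previous days
mod = [80.0, 100.0, 120.0]  # model forecasts for the same days
factor = running_bias_factor(obs, mod)
corrected_forecast = 90.0 * factor  # today's raw forecast, corrected
```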

  13. ECOSAR model performance with a large test set of industrial chemicals.

    PubMed

    Reuschenbach, Peter; Silvani, Maurizio; Dammann, Martina; Warnecke, Dietmar; Knacker, Thomas

    2008-05-01

    The widely used ECOSAR computer programme for QSAR prediction of chemical toxicity towards aquatic organisms was evaluated using large data sets of industrial chemicals with varying molecular structures. Experimentally derived toxicity data, covering acute effects on fish and Daphnia and growth inhibition of green algae for more than 1,000 randomly selected substances in total, were compared with the predictions of the ECOSAR programme in order (1) to assess the capability of ECOSAR to correctly classify the chemicals into defined classes of aquatic toxicity according to the rules of EU regulation and (2) to determine the number of correct predictions within tolerance factors from 2 to 1,000. Regarding ecotoxicity classification, 65% (fish), 52% (Daphnia) and 49% (algae) of the substances were correctly predicted into the classes "not harmful", "harmful", "toxic" and "very toxic". At all trophic levels, about 20% of the chemicals were underestimated in their toxicity. The class of "not harmful" substances (experimental LC/EC(50)>100 mg l(-1)) represents nearly half of the whole data set. The percentages of correct predictions of toxic effects on fish, Daphnia and algae growth inhibition were 69%, 64% and 60%, respectively, when a tolerance factor of 10 was allowed. Focussing on those experimental results that were verified by analytically measured concentrations, the predictability of Daphnia and algae toxicity improved by approximately three percentage points, whereas for fish no improvement was determined. The calculated correlation coefficients demonstrated poor correlation when the complete data set was taken, but showed good results for some of the ECOSAR chemical classes. The results are discussed in the context of literature data on the performance of ECOSAR and other QSAR models.
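The tolerance-factor criterion used in the evaluation can be written down directly (a plain reading of the abstract, not ECOSAR code):

```python
def within_tolerance(predicted, observed, tolerance=10.0):
    """A prediction counts as correct if it is within a factor
    `tolerance` of the experimental value, in either direction."""
    ratio = predicted / observed
    return 1.0 / tolerance <= ratio <= tolerance

# An 8-fold underprediction passes at tolerance 10; a 400-fold miss fails.
```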

  14. Roundness variation in JPEG images affects the automated process of nuclear immunohistochemical quantification: correction with a linear regression model.

    PubMed

    López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón

    2009-10-01

    The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
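    The correction described in this abstract amounts to fitting a per-compression-level linear model between roundness measured on the uncompressed TIFF and on the JPEG copy, then inverting it. A minimal sketch, with invented sample values and no claim to match the paper's coefficients:

    ```python
    # Minimal sketch: fit roundness_jpeg = a * roundness_tiff + b per
    # compression level, then invert the fit to map JPEG measurements back
    # onto the TIFF scale. Data values below are invented for illustration.

    def fit_linear(x, y):
        """Ordinary least-squares fit y = a*x + b (pure Python)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
        return a, my - a * mx

    def correct_roundness(jpeg_value, a, b):
        """Map a JPEG-measured roundness back to the TIFF scale."""
        return (jpeg_value - b) / a
    ```

    In the paper's workflow such a correction would be baked into the automated macros, one `(a, b)` pair per compression level.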

  15. Relativistic corrections to the form factors of Bc into P-wave orbitally excited charmonium

    NASA Astrophysics Data System (ADS)

    Zhu, Ruilin

    2018-06-01

    We investigated the form factors of the Bc meson into P-wave orbitally excited charmonium using the nonrelativistic QCD effective theory. Through analytic computation, the next-to-leading-order relativistic corrections to the form factors were obtained, and the asymptotic expressions were studied in the infinite bottom-quark-mass limit. Employing the general form factors, we discussed the exclusive decays of the Bc meson into P-wave orbitally excited charmonium and a light meson. We found that the relativistic corrections to the form factors are large, which makes the branching ratios B(Bc± → χcJ(hc) + π±(K±)) larger. These results are useful for the phenomenological analysis of Bc meson decays into P-wave charmonium, which shall be tested in the LHCb experiments.

  16. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  17. Power corrections to TMD factorization for Z-boson production

    DOE PAGES

    Balitsky, I.; Tarasov, A.

    2018-05-24

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with the cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to the TMD factorization formula for Z-boson production and the Drell-Yan process in high-energy hadron-hadron collisions. At leading order in Nc, the power corrections are expressed in terms of leading-power TMDs by QCD equations of motion.

  19. Exhaustive testing of recent oceanic and Earth tidal models using combination of tide gravity data from GGP and ICET data banks

    NASA Astrophysics Data System (ADS)

    Kopaev, A.; Ducarme, B.

    2003-04-01

    We have used the most recent oceanic tidal models, e.g. FES'99/02, GOT'00, CSR'4, NAO'99 and TPXO'5/6, for tidal gravity loading computations with the LOAD'97 software. The resulting loading vectors were compared against each other in different regions located at different distances from the sea coast. The results indicate good agreement among the majority of models at distances larger than 100-200 km, excluding some regions where mostly CSR'4 and TPXO have problems. Outlying models were rejected for these regions, and mean loading vectors were calculated for more than 200 tidal gravity stations from the GGP and ICET data banks, representing the state of the art in tidal loading correction. The corresponding errors in δ-factors and phase lags are generally smaller than 0.1% and 0.05°, respectively, which means that loading corrections pose no real difficulty and that more attention should be paid to calibration values and phase-lag determination accuracies. Corrected values agree very well (within 0.2%) with DDW model values for the majority of GGP stations, whereas some very good ICET tidal gravity stations (mainly the Chinese network) clearly demonstrate statistically significant anomalies (up to 0.5%) that seem connected neither with calibration troubles nor with loading problems. Various possible reasons, both instrumental and geophysical, will be presented and discussed.

  20. A comparative study of the effects of cone-plate and parallel-plate geometries on rheological properties under oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Song, Hyeong Yong; Salehiyan, Reza; Li, Xiaolei; Lee, Seung Hak; Hyun, Kyu

    2017-11-01

    In this study, the effects of cone-plate (C/P) and parallel-plate (P/P) geometries on the rheological properties of various complex fluids were investigated, covering single-phase systems (polymer melts and solutions) and multiphase systems (a polymer blend, a nanocomposite, and a suspension). Small amplitude oscillatory shear (SAOS) tests were carried out to compare linear rheological responses, while nonlinear responses were compared using large amplitude oscillatory shear (LAOS) tests at different frequencies. Moreover, the Fourier-transform (FT) rheology method was used to analyze the nonlinear responses under LAOS flow. Experimental results were compared with predictions obtained by single-point correction and shear rate correction. For all systems, SAOS data measured by C/P and P/P coincide with each other, but the results showed discordance between C/P and P/P measurements in the nonlinear regime. For all systems except xanthan gum solutions, first-harmonic moduli were corrected using a single horizontal shift factor, whereas FT rheology-based nonlinear parameters (I3/1, I5/1, Q3, and Q5) were corrected using vertical shift factors that are well predicted by single-point correction. Xanthan gum solutions exhibited anomalous corrections. Their first-harmonic Fourier moduli were superposed using a horizontal shift factor predicted by the shear rate correction applicable to highly shear-thinning fluids. Distinct corrections were observed for the FT rheology-based nonlinear parameters: I3/1 and I5/1 were superposed by horizontal shifts, while the other systems displayed vertical shifts of I3/1 and I5/1. Q3 and Q5 of xanthan gum solutions were corrected using both horizontal and vertical shift factors. In particular, the obtained vertical shift factors for Q3 and Q5 were twice as large as the predictions made by single-point correction. Such larger values are rationalized by the definitions of Q3 and Q5. These results highlight the significance of horizontal shift corrections in nonlinear oscillatory shear data.
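    The shift corrections discussed above reduce to simple rescalings of the data axes. A hedged sketch (function and parameter names are invented; the actual shift-factor values come from the single-point or shear rate correction, not from this code):

    ```python
    # Illustrative shift-factor correction for superposing C/P and P/P
    # oscillatory-shear data: a horizontal shift rescales the strain-amplitude
    # axis, a vertical shift rescales the measured quantity (e.g. a modulus or
    # a Q parameter). All values here are invented.

    def apply_shifts(points, horizontal=1.0, vertical=1.0):
        """points: list of (strain_amplitude, value) pairs; returns shifted pairs."""
        return [(gamma * horizontal, value * vertical)
                for gamma, value in points]
    ```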

  1. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS-IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403.

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
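    The workplace application described at the end of the abstract is a table lookup: measure a high-energy neutron index for the field, then scale the 252Cf-calibrated reading by the matching spectral correction factor. A sketch under stated assumptions; the index thresholds and factor values below are placeholders, not values from the paper.

    ```python
    # Hedged sketch of applying a spectral correction factor to a conventional
    # (moderated-type) neutron dose meter reading. Thresholds and factors are
    # invented placeholders; the paper derives them from IAEA-TRS-403 spectra.

    CORRECTION_TABLE = [  # (minimum high-energy index, correction factor)
        (0.30, 1.8),
        (0.10, 1.4),
        (0.00, 1.0),
    ]

    def corrected_dose(reading_usv, high_energy_index):
        """Scale a 252Cf-calibrated reading by the spectral correction factor."""
        for threshold, factor in CORRECTION_TABLE:
            if high_energy_index >= threshold:
                return reading_usv * factor
        return reading_usv
    ```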

  2. A Wind Tunnel Study on the Mars Pathfinder (MPF) Lander Descent Pressure Sensor

    NASA Technical Reports Server (NTRS)

    Soriano, J. Francisco; Coquilla, Rachael V.; Wilson, Gregory R.; Seiff, Alvin; Rivell, Tomas

    2001-01-01

    The primary focus of this study was to determine the accuracy of the Mars Pathfinder lander local pressure readings relative to the actual ambient atmospheric pressures of Mars during parachute descent. In order to obtain good measurements, the plane of the lander pressure sensor opening should ideally be situated parallel to the freestream. However, due to two unfavorable conditions, the sensor was positioned in locations where correction factors are required. One of these disadvantages is that the parachute attachment point rotated the lander's center of gravity, forcing the location of the pressure sensor opening to be off tangent to the freestream. The second and most troublesome factor was that the lander descends with slight oscillations that could vary the amplitude of the sensor readings. In order to accurately map the correction factors required at each sensor position, an experiment simulating the lander descent was conducted in the Martian Surface Wind Tunnel at NASA Ames Research Center. Using a 1/15-scale model at Earth ambient pressures, the test settings provided the Reynolds number conditions to which the actual lander was possibly subjected during descent. In the analysis of this experiment, the readings from the lander sensor were converted to pressure coefficients. With a contour map of pressure coefficients at each lander oscillatory position, this report provides a guideline for determining the correction factors required for the Mars Pathfinder lander descent pressure sensor readings.
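    The pressure-coefficient form used for the correction maps is the standard nondimensionalization Cp = (p − p∞) / q, with q = ½ρV² the freestream dynamic pressure. A minimal sketch with invented numbers (the report's actual tunnel conditions are not reproduced here):

    ```python
    # Standard pressure-coefficient conversion used to express sensor readings
    # independently of tunnel conditions: Cp = (p - p_inf) / (0.5 * rho * V^2).
    # Input values in the test are invented for illustration.

    def pressure_coefficient(p, p_inf, rho, v):
        """Nondimensionalize a pressure reading against freestream conditions."""
        q = 0.5 * rho * v ** 2  # freestream dynamic pressure
        return (p - p_inf) / q
    ```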

  3. Real-time correction of tsunami site effect by frequency-dependent tsunami-amplification factor

    NASA Astrophysics Data System (ADS)

    Tsushima, H.

    2017-12-01

    For tsunami early warning, I developed a frequency-dependent tsunami-amplification factor and used it to design a recursive digital filter applicable for real-time correction of the tsunami site response. In this study, I assumed that a tsunami waveform at an observing point can be modeled by the convolution of source, path and site effects in the time domain. Under this assumption, the spectral ratio between offshore and the nearby coast can be regarded as the site response (i.e. a frequency-dependent amplification factor). If the amplification factor can be prepared before tsunamigenic earthquakes, its temporal convolution with the offshore tsunami waveform provides a tsunami prediction at the coast in real time. In this study, tsunami waveforms calculated by numerical simulations were used to develop the frequency-dependent tsunami-amplification factor. First, I performed numerical tsunami simulations based on nonlinear shallow-water theory for many tsunamigenic earthquake scenarios, varying the seismic magnitudes and locations. The resulting tsunami waveforms at offshore and nearby coastal observing points were then used in a spectral-ratio analysis. The average of the resulting spectral ratios over the tsunamigenic-earthquake scenarios is regarded as the frequency-dependent amplification factor. Finally, the estimated amplification factor is used to design a recursive digital filter applicable in the time domain. The above procedure is applied to Miyako Bay on the Pacific coast of northeastern Japan. The averaged tsunami-height spectral ratio (i.e. amplification factor) between the location at the center of the bay and the outside shows a peak at a wave period of 20 min. A recursive digital filter based on the estimated amplification factor shows good performance in real-time correction of the tsunami-height amplification due to the site effect. This study is supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant 15K16309.
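    The real-time step in this abstract is a causal time-domain convolution of the offshore waveform with a site impulse response derived offline from the averaged spectral ratios. A minimal sketch, assuming the impulse response `h` is already available (deriving it from the spectral ratio, and the recursive-filter design itself, are outside this fragment):

    ```python
    # Sketch of the real-time prediction step: convolve a precomputed site
    # impulse response h (assumed derived from averaged offshore/coast
    # spectral ratios) with the incoming offshore tsunami waveform.

    def predict_coastal(offshore, h):
        """Causal convolution: y[n] = sum_k h[k] * offshore[n - k]."""
        out = []
        for n in range(len(offshore)):
            acc = 0.0
            for k, hk in enumerate(h):
                if n - k >= 0:
                    acc += hk * offshore[n - k]
            out.append(acc)
        return out
    ```

    In an operational setting each new offshore sample updates the coastal prediction immediately, which is what makes the filter usable for early warning.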

  4. On charged particle equilibrium violation in external photon fields.

    PubMed

    Bouchard, Hugo; Seuntjens, Jan; Palmans, Hugo

    2012-03-01

    In a recent paper by Bouchard et al. [Med. Phys. 36(10), 4654-4663 (2009)], a theoretical model of quality correction factors for idealistic so-called plan-class specific reference (PCSR) fields was proposed. The reasoning was founded on the definition of PCSR fields made earlier by Alfonso et al. [Med. Phys. 35(11), 5179-5186 (2008)], requiring the beam to achieve charged particle equilibrium (CPE), in a time-averaged sense, in the reference medium. The relation obtained by Bouchard et al. was derived using Fano's theorem (1954), which states that if CPE is established in a given medium, the dose is independent of point-to-point density variations. A potential misconception about the achievability of the condition required by Fano (1954) might be responsible for false practical conclusions, both in the definition of PCSR fields and in the theoretical model of the quality correction factor. In this paper, the practical achievability of CPE in external beams is treated in detail. The fact that this condition is not achievable in single or composite deliveries is illustrated by an intuitive method and is also formally demonstrated. Fano's theorem is not applicable in external beam radiation dosimetry without (virtually) removing attenuation effects, and therefore the relation conditionally defined by Bouchard et al. (2009) cannot be valid in practice. The definition of PCSR fields in the recent formalism for nonstandard beams proposed by Alfonso et al. (2008) should be modified, revising the criterion of the CPE condition. The authors propose reconsidering the terminology used to describe standard and nonstandard beams. The authors argue that quality correction factors of intensity modulated radiation therapy PCSR fields (i.e., k(Q_pcsr,Q)^(f_pcsr,f_ref)) could be unity under ideal conditions, but it is concluded that further investigation is necessary to confirm that hypothesis.

  5. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  6. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  7. Factors influencing workplace violence risk among correctional health workers: insights from an Australian survey.

    PubMed

    Cashmore, Aaron W; Indig, Devon; Hampton, Stephen E; Hegney, Desley G; Jalaludin, Bin B

    2016-11-01

    Little is known about the environmental and organisational determinants of workplace violence in correctional health settings. This paper describes the views of health professionals working in these settings on the factors influencing workplace violence risk. All employees of a large correctional health service in New South Wales, Australia, were invited to complete an online survey. The survey included an open-ended question seeking the views of participants about the factors influencing workplace violence in correctional health settings. Responses to this question were analysed using qualitative thematic analysis. Participants identified several factors that they felt reduced the risk of violence in their workplace, including: appropriate workplace health and safety policies and procedures; professionalism among health staff; the presence of prison guards and the quality of security provided; and physical barriers within clinics. Conversely, participants perceived workplace violence risk to be increased by: low health staff-to-patient and correctional officer-to-patient ratios; high workloads; insufficient or underperforming security staff; and poor management of violence, especially horizontal violence. The views of these participants should inform efforts to prevent workplace violence among correctional health professionals.

  8. Measurement and Modeling of the Optical Scattering Properties of Crop Canopies

    NASA Technical Reports Server (NTRS)

    Vanderbilt, V. C. (Principal Investigator)

    1985-01-01

    The specular reflection process is shown to be a key aspect of radiation transfer by plant canopies. Polarization measurements are demonstrated as the tool for determining the specular and diffuse portions of the canopy radiance. The magnitude of the specular fraction of the reflectance is significant compared to the magnitude of the diffuse fraction. Therefore, it is necessary to consider specularly reflected light in developing and evaluating light-canopy interaction models for wheat canopies. Models which assume leaves are diffuse reflectors correctly predict only the diffuse fraction of the canopy reflectance factor. The specular reflectance model, when coupled with a diffuse leaf model, would predict both the specular and diffuse portions of the reflectance factor. The specular model predicts and the data analysis confirms that the single variable, angle of incidence of specularly reflected sunlight on the leaf, explains much of variation in the polarization data as a function of view-illumination directions.

  9. An empirical model for polarized and cross-polarized scattering from a vegetation layer

    NASA Technical Reports Server (NTRS)

    Liu, H. L.; Fung, A. K.

    1988-01-01

    An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated by a correction factor based on the matrix doubling method. The major feature of this model is that all parameters in the model are physical parameters of the vegetation medium; there are no regression parameters. Comparisons of this empirical model with the theoretical matrix-doubling method and with radar measurements indicate good agreement in polarization and angular trends for ka up to 4, where k is the wave number and a is the disk radius. The computational time is shortened by a factor of 8 relative to the theoretical model calculation.

  10. What determines nutritional recovery in malnourished children after correction of congenital heart defects?

    PubMed

    Vaidyanathan, Balu; Radhakrishnan, Reshma; Sarala, Deepa Aravindakshan; Sundaram, Karimassery Ramaiyar; Kumar, Raman Krishna

    2009-08-01

    Malnutrition is common in children with congenital heart disease (CHD), especially in developing countries. To examine the impact of corrective intervention on the nutritional status of children with CHD and identify factors associated with suboptimal recovery. Consecutive patients with CHD in a tertiary center in South India were evaluated for nutritional status before and 2 years after corrective intervention. Anthropometry was performed at presentation and every 6 months for 2 years, and z scores were compared. Malnutrition was defined as a weight-for-age, height-for-age, and weight/height z score <-2. Determinants of malnutrition were entered into a multivariate logistic regression analysis model. Of 476 patients undergoing corrective intervention (surgical: 344; catheter-based: 132) z scores of less than -2 for weight for age, height for age, and weight/height were recorded in 59%, 26.3%, and 55.9% of patients, respectively, at presentation. On follow-up (425 patients [92.5% of survivors; 20.63 +/- 13.1 months of age]), z scores for weight for age and weight/height improved significantly from the baseline (weight: -1.42 +/- 1.03 vs -2.19 +/- 1.16; P < .001; weight/height: -1.15 +/- 1.25 vs -2.09 +/- 1.3; P < .001). Height-for-age z scores were not significantly different. Malnutrition persisted in 116 (27.3%) patients on follow-up and was associated with a birth weight of

  11. Generalized model screening potentials for Fermi-Dirac plasmas

    NASA Astrophysics Data System (ADS)

    Akbari-Moghanjoughi, M.

    2016-04-01

    In this paper, some properties of relativistically degenerate quantum plasmas, such as static ion screening, the structure factor, and the Thomson scattering cross-section, are studied in the framework of linearized quantum hydrodynamic theory with the newly proposed kinetic γ-correction to the Bohm term in the low-frequency limit. It is found that the correction has a significant effect on the properties of quantum plasmas in all density regimes, ranging from solid density up to that of white dwarf stars. It is also found that the Shukla-Eliasson attractive force exists up to a few times the density of metals, and the ionic correlations are apparent in the radial distribution function signature. Simplified statically screened attractive and repulsive potentials are presented for zero-temperature Fermi-Dirac plasmas, valid for a wide range of quantum plasma number-density and atomic number values. Moreover, it is observed that crystallization of white dwarfs beyond a critical core number-density persists with this new kinetic correction, but it is shifted to a much higher number-density value of n0 ≃ 1.94 × 10^37 cm^-3 (1.77 × 10^10 g cm^-3), which is nearly four orders of magnitude less than the nuclear density. It is found that maximal Thomson scattering with the γ-corrected structure factor is a remarkable property of white dwarf stars. However, with the new γ-correction, the maximal scattering shifts to the spectral region between hard X-rays and low-energy gamma-rays. White dwarfs composed of higher atomic-number ions are observed to Thomson-scatter maximally at slightly higher wavelengths, i.e., they maximally scatter slightly lower-energy photons in the presence of the correction.

  12. SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haywood, J

    Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I have calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Methods: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. Dmax dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second-check calculations. Patient curvature was estimated by the largest sphere that aligns with the patient contour, and the appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and by TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ~(0.8-1.0) and ~(0.9-1.01), respectively, for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ~(3-14)% to ~(0-2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels (< 5%). While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.
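    Applying the two factors to a hand calculation is a table lookup and one division: since the factors scale the calculated dose (and both are ≤ 1), the corrected MU scale inversely and come out higher than the flat-phantom hand calculation, consistent with the abstract. A sketch with placeholder table values, not the paper's data:

    ```python
    # Hedged sketch of a second-check MU correction using the abstract's two
    # factors: MU_corrected = MU_TG71 / (CF_geometry * CF_heterogeneity).
    # Table keys and values are invented placeholders for illustration.

    GEOMETRY_CF = {(9, 6): 0.93, (12, 6): 0.95}    # (energy MeV, cone cm) -> factor
    HETEROGENEITY_CF = {"soft tissue": 1.0, "lung": 0.92}

    def corrected_mu(mu_tg71, energy_mev, cone_cm, tissue):
        """Correct a TG-71 hand-calculated MU for curvature and heterogeneity."""
        cf = GEOMETRY_CF[(energy_mev, cone_cm)] * HETEROGENEITY_CF[tissue]
        return mu_tg71 / cf
    ```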

  13. Improving Lidar Turbulence Estimates for Wind Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.

    2016-10-06

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model (L-TERRA) was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.

  14. Validating the Use of Deep Learning Neural Networks for Correction of Large Hydrometric Datasets

    NASA Astrophysics Data System (ADS)

    Frazier, N.; Ogden, F. L.; Regina, J. A.; Cheng, Y.

    2017-12-01

    Collection and validation of Earth systems data can be time consuming and labor intensive. In particular, high-resolution hydrometric data, including rainfall and streamflow measurements, are difficult to obtain due to a multitude of complicating factors. Measurement equipment is subject to clogs, environmental disturbances, and sensor drift. Manual intervention is typically required to identify, correct, and validate these data. Weirs can become clogged and the pressure transducer may float or drift over time. We typically employ a graphical tool called Time Series Editor to manually remove clogs and sensor drift from the data. However, this process is highly subjective and requires hydrological expertise; two different people may produce two different data sets. To use these data for scientific discovery and model validation, a more consistent processing method is needed. Deep learning neural networks have proved to be excellent mechanisms for recognizing patterns in data. We explore the use of recurrent neural networks (RNNs) to capture the patterns in the data over time, using various gating mechanisms (LSTM and GRU), network architectures, and hyper-parameters to build an automated data correction model. We also explore the amount of manually corrected training data required to train the network to reasonable accuracy. The benefits of this approach are that the time to process a data set is significantly reduced, and the results are 100% reproducible after training is complete. Additionally, we train the RNN and calibrate a physically-based hydrological model against the same portion of the data. Both the RNN and the model are applied to the remaining data using a split-sample methodology. Performance of the machine learning is evaluated for plausibility by comparing with the output of the hydrological model, and this analysis identifies potential periods where additional investigation is warranted.

  15. The effects of the exhaust plume on the lightning triggering conditions for launch vehicles

    NASA Technical Reports Server (NTRS)

    Eriksen, Frederick J.; Rudolph, Terence H.; Perala, Rodney A.

    1991-01-01

    Apollo 12 and Atlas Centaur 67 are two launch vehicles that have experienced triggered lightning strikes. Serious consequences resulted from these events; in the case of Atlas Centaur 67, the vehicle and the payload were lost. These events indicate that it is necessary to develop launch rules that would prevent such occurrences. In order to develop valid lightning-related rules, it is necessary to understand the effects of the plume. Some have assumed that the plume can be treated as a perfect conductor, and have computed electric field enhancement factors on that basis. The authors have examined the plume, and believe that these models are not correct, because they ignore the fluid motion of the conducting gases. The authors developed a model which includes this flow character. In this model, the external field is excluded from the plume as it would be for any good conductor, but, in addition, the charge must distribute so that the charge density is zero at some location in the exhaust. When this condition is included in the calculation of triggering enhancement factors, they can be two to three times larger than calculated by other methods which include a conductive plume but don't include the correct boundary conditions. Here, the authors review the relevant features of rocket exhausts for the triggered lightning problem, present an approach for including flowing conductive gases, and present preliminary calculations to demonstrate the effect that the plume has on enhancement factors.

  16. Impact of creatine kinase correction on the predictive value of S-100B after mild traumatic brain injury.

    PubMed

    Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin

    2006-01-01

    To validate a correction factor for the extracranial release of the astroglial protein S-100B, based on concomitant creatine kinase (CK) levels. The CK–S-100B relationship in non-head-injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT, three-month headache and three-month post-concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of 3-month headache (area under curve [AUC] 0.46 vs 0.52, p=0.02), but not PCS or initial head CT. Using a cutoff that maximizes sensitivity (≥90%), corrected S-100B improved the prediction of initial head CT scan (negative predictive value from 75% [95% CI, 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for abnormal head CT, CK-corrected S-100B can further reduce the number of unnecessary brain CT scans performed after this injury.
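    The abstract does not give the exact form of the CK-based correction. One plausible sketch, assuming a linear CK–S-100B relationship fitted in a non-head-injured reference cohort, is to subtract the CK-predicted extracranial component; all numbers below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical reference cohort (non-head-injured): CK (U/L) and S-100B (ug/L).
ck_ref   = np.array([200.0, 500.0, 1000.0, 2000.0, 4000.0])
s100_ref = np.array([0.08,  0.12,  0.20,   0.35,   0.66])

# Fit S-100B as a linear function of CK to estimate extracranial release.
slope, intercept = np.polyfit(ck_ref, s100_ref, 1)

def corrected_s100b(s100_measured, ck_measured):
    """Subtract the CK-predicted extracranial component (floored at 0)."""
    extracranial = slope * ck_measured + intercept
    return max(0.0, s100_measured - extracranial)
```

    The corrected value is intended to reflect only the intracranial contribution, which is why high CK (heavy extracranial release) can drive a modest S-100B reading down to zero.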

  17. An improved method for predicting brittleness of rocks via well logs in tight oil reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Zhenlin; Sun, Ting; Feng, Cheng; Wang, Wei; Han, Chuang

    2018-06-01

    There can be no industrial oil production in tight oil reservoirs until fracturing is undertaken, and under such conditions the brittleness of the rocks is a very important factor. However, it has so far been difficult to predict. In this paper, the selected study area is the tight oil reservoirs of the Permian Lucaogou formation, Jimusaer sag, Junggar basin. Based on the transformation of dynamic and static rock mechanics parameters and a correction for confining pressure, an improved method is proposed for quantitatively predicting the brittleness of rocks via well logs in tight oil reservoirs. First, 19 typical tight oil core samples are selected in the study area, and their static Young’s modulus, static Poisson’s ratio and petrophysical parameters are measured. In addition, the static brittleness indices of four other tight oil cores are measured under different confining pressure conditions. Second, the dynamic Young’s modulus, Poisson’s ratio and brittleness index are calculated from the compressional and shear wave velocities. By combining the measured and calculated results, a transformation model between the dynamic and static brittleness index is built that accounts for the influence of porosity and clay content. Comparison of the predicted brittleness indices with the measured results shows that the model has high accuracy. Third, on the basis of the experimental data under different confining pressure conditions, an amplifying factor of the brittleness index is proposed to correct for the influence of confining pressure on the brittleness index. Finally, the above improved models are applied to formation evaluation via well logs. Compared with the results before correction, the results of the improved models agree better with the experimental data, indicating better application effects. The brittleness index prediction method for tight oil reservoirs is thus improved, which is of great importance in the optimization of fracturing layer selection and fracturing construction schemes and in the improvement of oil recovery.
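    The paper's transformation model (with its porosity, clay-content and confining-pressure terms) cannot be reproduced from the abstract, but the common baseline it builds on, a Rickman-style brittleness index from normalised Young's modulus and Poisson's ratio, can be sketched as follows; the normalisation ranges are illustrative placeholders.

```python
def brittleness_index(E, nu, E_range=(10.0, 80.0), nu_range=(0.15, 0.40)):
    """Rickman-style brittleness index from Young's modulus E (GPa) and
    Poisson's ratio nu: brittle rocks have high E and low nu.
    The normalisation ranges are illustrative, not calibrated values."""
    E_min, E_max = E_range
    nu_min, nu_max = nu_range
    E_n = (E - E_min) / (E_max - E_min)        # higher E  -> more brittle
    nu_n = (nu_max - nu) / (nu_max - nu_min)   # lower nu  -> more brittle
    return 0.5 * (E_n + nu_n)
```

    The index runs from 0 (most ductile end of both ranges) to 1 (most brittle), which is why the choice of normalisation ranges, and any dynamic-to-static transformation feeding E and nu, matters so much in practice.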

  18. Gravity gradient preprocessing at the GOCE HPF

    NASA Astrophysics Data System (ADS)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    One of the products derived from the GOCE observations is the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for application in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of gravity gradient scale factors down to the 10-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10-2 level with this method.
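    The outlier-screening step described above, a local median as robust high-pass filter and the median absolute deviation (MAD) as robust error estimate, can be sketched as follows; the window size and threshold are illustrative choices, not the mission's settings.

```python
import numpy as np

def screen_outliers(y, window=11, k=3.5):
    """Flag outliers: subtract a local median (robust high-pass that
    suppresses the 1/f error), then flag points whose residual exceeds
    k times a MAD-based robust sigma."""
    n = len(y)
    half = window // 2
    resid = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        resid[i] = y[i] - np.median(y[lo:hi])
    mad = np.median(np.abs(resid - np.median(resid)))
    sigma = 1.4826 * mad          # MAD-to-sigma factor for Gaussian noise
    return np.abs(resid) > k * sigma

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0, 0.1, 200))   # slow drift mimicking 1/f error
y[50] += 5.0                             # one gross outlier
flags = screen_outliers(y)
```

    Because both the local median and the MAD are insensitive to the outlier itself, the spike is flagged even though the series drifts by far more than the spike amplitude over its full length.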

  19. Preprocessing of gravity gradients at the GOCE high-level processing facility

    NASA Astrophysics Data System (ADS)

    Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin

    2009-07-01

    One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations is the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for application in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of gravity gradient scale factors down to the 10-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10-2 level with this method.

  20. Development of a hierarchical model for predicting microbiological contamination of private groundwater supplies in a geologically heterogeneous region.

    PubMed

    O'Dwyer, Jean; Hynds, Paul D; Byrne, Kenneth A; Ryan, Michael P; Adley, Catherine C

    2018-06-01

    Private groundwater sources in the Republic of Ireland provide drinking water to an estimated 750,000 people, or 16% of the national population. Consumers of untreated groundwater are at increased risk of infection from pathogenic microorganisms. However, given the number of private wells in operation, remediation or even quantification of the public risk is both costly and time consuming. In this study, a hierarchical logistic regression model was developed to 'predict' contamination with E. coli based on the results of groundwater quality analyses of private wells (n = 132) during the period of September 2011 to November 2012. Potential microbial contamination risk factors were categorised into three groups: Intrinsic (environmental factors), Specific (local features) and Infrastructural (groundwater source characteristics), comprising a total of 15 variables. Overall, 51.4% of wells tested positive for E. coli during the study period, with univariate analysis indicating that 11 of the 15 assessed risk factors, including local bedrock type, local subsoil type, septic tank reliance, 5-day antecedent precipitation and temperature, along with well type and depth, were all significantly associated with E. coli presence (p < 0.05). Hierarchical logistic regression was used to develop a private well susceptibility model, with the final model containing 8 of the 11 associated variables. The model was shown to be highly efficient, correctly classifying the presence of E. coli in 94.2% of cases and the absence of E. coli in 84.7% of cases. Model validation was performed using an external data set (n = 32), and it was shown that the model has promising accuracy, with 90% of positive E. coli cases correctly predicted. The developed model represents a risk assessment and management tool that may be used to develop effective water-quality management strategies to minimize public health risks both in Ireland and abroad. Copyright © 2018 Elsevier Ltd. All rights reserved.
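    The abstract does not publish the fitted coefficients, but the structure of such a logistic risk model can be sketched with hypothetical coefficients and risk-factor names (all of the values and feature names below are invented for illustration).

```python
import math

# Illustrative (not published) coefficients: log-odds of E. coli presence
# as a weighted sum of risk factors, p = 1 / (1 + exp(-z)).
COEF = {
    "intercept": -2.0,
    "septic_tank_within_100m": 1.3,
    "karst_bedrock": 0.9,
    "antecedent_rain_5d_mm": 0.03,   # per mm of 5-day antecedent rain
    "well_depth_m": -0.02,           # per metre (deeper = lower risk)
}

def p_contamination(features):
    """Logistic model: probability of E. coli presence for one well."""
    z = COEF["intercept"] + sum(COEF[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

shallow_wet = {"septic_tank_within_100m": 1, "karst_bedrock": 1,
               "antecedent_rain_5d_mm": 40, "well_depth_m": 15}
deep_dry    = {"septic_tank_within_100m": 0, "karst_bedrock": 0,
               "antecedent_rain_5d_mm": 2, "well_depth_m": 90}
```

    A hierarchical fit additionally groups wells (e.g. by region) with group-level intercepts; the sketch above shows only the fixed-effects prediction step.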

  1. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive characteristics of computed tomography (CT) are attracting increasing research interest in its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper mainly focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally feasible.
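    The entropy-minimisation criterion can be illustrated without running a full reconstruction: BH cupping spreads the gray levels of a homogeneous region and raises the histogram entropy, which is the quantity the parameter search drives down. The phantom and bin count below are illustrative, not the paper's setup.

```python
import numpy as np

def gray_entropy(img, bins=64):
    """Shannon entropy of the gray-level histogram. A BH-free homogeneous
    object concentrates gray values in few bins (low entropy); cupping
    artifacts spread them out (high entropy)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Synthetic phantom: a homogeneous disk vs. the same disk with BH cupping
# (gray value depressed toward the centre).
x = np.linspace(-1.0, 1.0, 128)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
disk = r2 < 0.6
uniform = np.where(disk, 1.0, 0.0)
cupped = np.where(disk, 1.0 - 0.4 * (1.0 - r2 / 0.6), 0.0)
```

    A one-parameter correction would be tuned by scanning the parameter and keeping the value whose reconstruction minimises this entropy (plus the paper's penalty term for consistency and contrast).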

  2. Spectroscopic Factors From the Single Neutron Pickup Reaction ^64Zn(d,t)

    NASA Astrophysics Data System (ADS)

    Leach, Kyle; Garrett, P. E.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Faestermann, T.; Krücken, R.; Wirth, H.-F.; Herten-Berger, R.

    2008-10-01

    A great deal of attention has recently been paid towards high precision superallowed β-decay Ft values. With the availability of extremely high precision (<0.1%) experimental data, the precision on Ft is now limited by the ˜1% theoretical corrections [I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008)]. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga) where the isospin-symmetry-breaking correction calculations become more difficult due to the truncated model space. Experimental data are needed to help constrain input parameters for these calculations, and thus experimental spectroscopic factors for these nuclei are important. Preliminary results from the single-nucleon-transfer reaction ^64Zn(d,t)^63Zn will be presented, and the implications for calculations of isospin-symmetry breaking in the superallowed 0^+ decay of ^62Ga will be discussed.

  3. Superallowed Fermi β-Decay Studies with SCEPTAR and the 8π Gamma-Ray Spectrometer

    NASA Astrophysics Data System (ADS)

    Koopmans, K. A.

    2005-04-01

    The 8π Gamma-Ray Spectrometer, operating at TRIUMF in Vancouver, Canada, is a high-precision instrument for detecting the decay radiations from exotic nuclei. In 2003, a new beta-scintillating array called SCEPTAR was installed within the 8π Spectrometer. With these two systems, precise measurements of half-lives and branching ratios can be made, specifically on certain nuclei which exhibit superallowed Fermi 0+ → 0+ β-decay. These data can be used to determine, to good precision, the value of δC, an isospin symmetry-breaking (Coulomb) correction factor. As this correction factor is currently one of the leading sources of error in the unitarity test of the CKM matrix, a precise determination of its value could help to eliminate any possible "trivial" explanation of the seeming departure of current experimental data from Standard Model predictions.

  4. Accurate Computation of Electric Field Enhancement Factors for Metallic Nanoparticles Using the Discrete Dipole Approximation

    PubMed Central

    2010-01-01

    We model the response of nanoscale Ag prolate spheroids to an external uniform static electric field using simulations based on the discrete dipole approximation, in which the spheroid is represented as a collection of polarizable subunits. We compare the results of simulations that employ subunit polarizabilities derived from the Clausius–Mossotti relation with those of simulations that employ polarizabilities that include a local environmental correction for subunits near the spheroid’s surface [Rahmani et al. Opt Lett 27: 2118 (2002)]. The simulations that employ corrected polarizabilities give predictions in very good agreement with exact results obtained by solving Laplace’s equation. In contrast, simulations that employ uncorrected Clausius–Mossotti polarizabilities substantially underestimate the extent of the electric field “hot spot” near the spheroid’s sharp tip, and give predictions for the field enhancement factor near the tip that are 30 to 50% too small. PMID:20672062

  5. Application of Statistical Thermodynamics To Predict the Adsorption Properties of Polypeptides in Reversed-Phase HPLC.

    PubMed

    Tarasova, Irina A; Goloborodko, Anton A; Perlova, Tatyana Y; Pridatchenko, Marina L; Gorshkov, Alexander V; Evreinov, Victor V; Ivanov, Alexander R; Gorshkov, Mikhail V

    2015-07-07

    The theory of critical chromatography for biomacromolecules (BioLCCC) describes polypeptide retention in reversed-phase HPLC using the basic principles of statistical thermodynamics. However, whether this theory correctly depicts a variety of empirical observations and laws introduced for peptide chromatography over the last decades remains to be determined. In this study, by comparing theoretical results with experimental data, we demonstrate that the BioLCCC: (1) fits the empirical dependence of the polypeptide retention on the amino acid sequence length with R^2 > 0.99 and allows in silico determination of the linear regression coefficients of the log-length correction in the additive model for arbitrary sequences and lengths and (2) predicts the distribution coefficients of polypeptides with an accuracy of R^2 = 0.98 to 0.99. The latter enables direct calculation of the retention factors for given solvent compositions and modeling of the migration dynamics of polypeptides separated under isocratic or gradient conditions. The obtained results demonstrate that the suggested theory correctly relates the main aspects of polypeptide separation in reversed-phase HPLC.

  6. Target/error overlap in jargonaphasia: The case for a one-source model, lexical and non-lexical summation, and the special status of correct responses.

    PubMed

    Olson, Andrew; Halloran, Elizabeth; Romani, Cristina

    2015-12-01

    We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. 3D magnetotelluric inversion system with static shift correction and theoretical assessment in oil and gas exploration

    NASA Astrophysics Data System (ADS)

    Dong, H.; Kun, Z.; Zhang, L.

    2015-12-01

    This magnetotelluric (MT) system comprises static shift correction and 3D inversion. The correction method is based on a study of 3D forward modeling and field test data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high frequency range, and the correction is completed with inversion. The method is an automatic, zero-cost computer processing technology that avoids additional field work and indoor processing, with good results shown in Figure 1a-e. Figure 1a shows a normal model (I) without any local heterogeneity. Figure 1b shows a static-shifted model (II) with two local heterogeneous bodies (10 and 1000 ohm.m). Figure 1c is the inversion result (A) for the synthetic data generated from model I. Figure 1d is the inversion result (B) for the static-shifted data generated from model II. Figure 1e is the inversion result (C) for the static-shifted data from model II, but with static shift correction. The results show that the correction method is useful. The 3D inversion algorithm is improved based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a frequency-based parallel structure, improved the computational efficiency, reduced the memory requirements, added topographic and marine factors, and added geological and geophysical constraints, so that the 3D inversion can run even on a pad with high efficiency and accuracy. An application example of theoretical assessment in oil and gas exploration is shown in Figure 1f-i. The synthetic geophysical model consists of five layers (from top to bottom): shale, limestone, gas, oil, and groundwater-bearing limestone, overlying a basement rock. Figures 1f-g show the 3D model and central profile. Figure 1h shows the central section of the 3D inversion; the results show a high degree of agreement with the synthetic model. Figure 1i shows that the seismic waveform reflects the interfaces of every layer overall, but the relative positions of the interfaces in two-way travel time vary, and the interface between limestone and oil at the sides of the section is not reflected. Thus 3D MT can compensate for deficiencies of the seismic results such as the fake sync-phase axis and multiple waves.

  8. Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin

    2018-04-01

    ZY-3, launched in 2012, is the first civilian high resolution stereo mapping satellite of China. This paper analyzed the positioning errors of ZY-3 satellite imagery and compensated them to improve geo-positioning accuracy using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results revealed that there are systematic errors in the ZY-3 attitude observations and that the positioning accuracy can be improved after attitude correction with the aid of ground control. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced a steadier improvement than the linear correction model when only limited ground control points are available for a single scene.
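    The two simpler models compared above, a constant attitude-angle offset versus a time-linear correction, can be sketched on synthetic residuals; all numbers below are illustrative, not ZY-3 calibration values.

```python
import numpy as np

# Synthetic attitude-angle residuals (radians) at control points along a
# scene; the simulated truth is bias + drift * t.
t = np.linspace(0.0, 3.0, 7)          # seconds from scene start
resid = 1e-5 + 2e-6 * t               # e.g. pitch-angle residuals from GCPs

# Offset model: a single constant bias per attitude angle.
bias_only = resid.mean()

# Linear model: bias plus a time-linear drift term (least-squares fit).
drift, bias = np.polyfit(t, resid, 1)

def offset_corrected(angle):
    return angle - bias_only

def linear_corrected(angle, ti):
    return angle - (bias + drift * ti)
```

    The linear model needs enough well-distributed control points to separate bias from drift; with the few GCPs typical of a single scene, the one-parameter offset model is the steadier estimator, consistent with the finding above.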

  9. 40 CFR 98.463 - Calculating GHG emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in....464(b)(4)(i), use a default value of 1.0. MCF = Methane correction factor (fraction). Use the default... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...

  10. SOLUBILITIES OF CARBON DIOXIDE IN METHANOL AND METHANOL-WATER AT HIGH PRESSURES: EXPERIMENTAL DATA AND MODELING

    EPA Science Inventory

    The solubilities of carbon dioxide in methanol and in methanol-water mixtures have been measured at 243, 258, 273, and 298 K, and at pressures up to 54 atm. An extended Soave-Redlich-Kwong equation of state with Mathias' polar correction factor has been used to describe the equil...

  11. Effect of formulated glyphosate and adjuvant tank mixes on atomization from aerial application flat fan nozzles

    USDA-ARS?s Scientific Manuscript database

    This study was designed to determine if the present USDA ARS Spray Nozzle models based on water plus non-ionic surfactant spray solutions could be used to estimate spray droplet size data for different spray formulations through use of experimentally determined correction factors or if full spray fo...

  12. Measurement Error Correction Formula for Cluster-Level Group Differences in Cluster Randomized and Observational Studies

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Preacher, Kristopher J.

    2016-01-01

    Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…

  13. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper aims to identify the correction factors in determining the speed of sound, as measured by freshmen in an undergraduate physics laboratory. The results are then compared with the speed of sound determined by a senior student. Both used the same instrument, namely a resonance tube with apparatus. The speed of sound obtained by the senior student was 333.38 ms-1, deviating from theory by about 3.98%. For the freshmen, the speed-of-sound measurements were categorised into three groups: accurate values (52.63%), middle values (31.58%) and lower values (15.79%). Based on the analysis, several correction factors were suggested: human error in determining the first and second harmonics, the end correction associated with the tube diameter, and environmental factors such as temperature, humidity, density, and pressure.
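    The end correction mentioned above cancels out of the standard two-resonance calculation, which is one reason it is easy to overlook. A short sketch with illustrative readings (not the paper's data):

```python
def speed_of_sound(f, L1, L2):
    """Tube closed at one end: L1 + e = lam/4 and L2 + e = 3*lam/4, so the
    end correction e cancels in lam = 2*(L2 - L1), and v = f * lam."""
    lam = 2.0 * (L2 - L1)
    return f * lam

def end_correction(L1, L2):
    """Solve the two resonance conditions for e = (L2 - 3*L1) / 2."""
    return (L2 - 3.0 * L1) / 2.0

# Illustrative readings: a 512 Hz fork, first resonance at 16.0 cm,
# second at 49.2 cm.
v = speed_of_sound(512.0, 0.160, 0.492)
e = end_correction(0.160, 0.492)
```

    If only the first resonance is used (v = 4·f·L1, ignoring e), the result is biased low, which matches the "lower value" group and the suggested end-correction factor of roughly 0.3 times the tube diameter per open end.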

  14. Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency

    NASA Astrophysics Data System (ADS)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models have emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business, and quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modeling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well established from data acquisition through processing, analysis and visualization, quality management is not yet standard during this workflow. Processing data sets with unclear specifications leads to erroneous results and application defects. We show that this problem persists even if the data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.

  15. Nonassociative plasticity model for cohesionless materials and its implementation in soil-structure interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hashmi, Q.S.E.

    A constitutive model based on rate-independent elastoplasticity concepts is developed and used to simulate the behavior of geologic materials under arbitrary three-dimensional stress paths. The model accounts for various factors such as friction, stress path, and stress history that influence the behavior of geologic materials. A hierarchical approach is adopted whereby models of progressively increasing sophistication are developed from a basic isotropic-hardening associative model. Nonassociativeness is introduced as a correction or perturbation to the basic model. Deviation from normality of the plastic-strain increments to the yield surface F is captured through nonassociativeness. The plastic potential Q is obtained by applying a correction to F. This simplified approach restricts the number of extra parameters required to define the plastic potential Q. The material constants associated with the model are identified, and they are evaluated for three different sands (Leighton Buzzard, Munich and McCormick Ranch). The model is then verified by comparing predictions with laboratory tests from which the constants were found, and typical tests not used for finding the constants. Based on the above findings, a soil-footing system is analyzed using finite-element techniques.

  16. Progress in Research on Diurnal and Semidiurnal Earth Rotation Change

    NASA Astrophysics Data System (ADS)

    Xu, Xueqing

    2015-08-01

    We mainly focus on the progress of research on high frequency changes in the Earth's rotation. First, we review the development course and main excitation factors of diurnal and semidiurnal Earth rotation change. In recent decades, Earth orientation has been monitored with increasing accuracy by advanced space-geodetic techniques, including lunar and satellite laser ranging, very long baseline interferometry and the global positioning system. We are able to obtain the Earth Rotation Parameters (ERP, polar motion and rotation rate changes) from as little as 1 to 2 hours of observation data, from which obvious diurnal and semidiurnal signals can be detected and compared with the results predicted by ocean models. Both the amplitude and phase are in good agreement at the main diurnal and semidiurnal wave frequencies: the agreement reaches about 90% for UT1 and 60% for polar motion, and about 30% of the excitation of the diurnal and semidiurnal polar motion has not been identified. We then comprehensively review the different types of global ocean tidal correction models developed since the late 1980s, as well as their application to diurnal and semidiurnal polar motion and UT1; the current ocean tidal correction models have 10% to 20% uncertainty and need further refinement.

  17. New type of hill-top inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barvinsky, A.O.; Department of Physics, Tomsk State University, Lenin Ave. 36, Tomsk 634050; Department of Physics and Astronomy, Pacific Institute for Theoretical Physics, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1

    2016-01-20

    We suggest a new type of hill-top inflation originating from the initial conditions in the form of the microcanonical density matrix for the cosmological model with a large number of quantum fields conformally coupled to gravity. Initial conditions for inflation are set up by cosmological instantons describing underbarrier oscillations in the vicinity of the inflaton potential maximum. These periodic oscillations of the inflaton field and cosmological scale factor are obtained within the approximation of two coupled oscillators subject to the slow roll regime in the Euclidean time. This regime is characterized by rapid oscillations of the scale factor on the background of a slowly varying inflaton, which guarantees smallness of slow roll parameters ϵ and η of the following inflation stage. A hill-like shape of the inflaton potential is shown to be generated by logarithmic loop corrections to the tree-level asymptotically shift-invariant potential in the non-minimal Higgs inflation model and R^2-gravity. The solution to the problem of hierarchy between the Planckian scale and the inflation scale is discussed within the concept of conformal higher spin fields, which also suggests the mechanism bringing the model below the gravitational cutoff and, thus, protecting it from large graviton loop corrections.

  18. New type of hill-top inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barvinsky, A.O.; Nesterov, D.V.; Kamenshchik, A.Yu., E-mail: barvin@td.lpi.ru, E-mail: Alexander.Kamenshchik@bo.infn.it, E-mail: nesterov@td.lpi.ru

    2016-01-01

    We suggest a new type of hill-top inflation originating from the initial conditions in the form of the microcanonical density matrix for the cosmological model with a large number of quantum fields conformally coupled to gravity. Initial conditions for inflation are set up by cosmological instantons describing underbarrier oscillations in the vicinity of the inflaton potential maximum. These periodic oscillations of the inflaton field and cosmological scale factor are obtained within the approximation of two coupled oscillators subject to the slow roll regime in the Euclidean time. This regime is characterized by rapid oscillations of the scale factor on the background of a slowly varying inflaton, which guarantees smallness of slow roll parameters ε and η of the following inflation stage. A hill-like shape of the inflaton potential is shown to be generated by logarithmic loop corrections to the tree-level asymptotically shift-invariant potential in the non-minimal Higgs inflation model and R²-gravity. The solution to the problem of hierarchy between the Planckian scale and the inflation scale is discussed within the concept of conformal higher spin fields, which also suggests the mechanism bringing the model below the gravitational cutoff and, thus, protecting it from large graviton loop corrections.

  19. Animal Models of Hemophilia

    PubMed Central

    Sabatino, Denise E.; Nichols, Timothy C.; Merricks, Elizabeth; Bellinger, Dwight A.; Herzog, Roland W.; Monahan, Paul E.

    2013-01-01

    The X-linked bleeding disorder hemophilia is caused by mutations in coagulation factor VIII (hemophilia A) or factor IX (hemophilia B). Unless prophylactic treatment is provided, patients with severe disease (less than 1% clotting activity) typically experience frequent spontaneous bleeds. Current treatment is largely based on intravenous infusion of recombinant or plasma-derived coagulation factor concentrate. More effective factor products are being developed. Moreover, gene therapies for sustained correction of hemophilia are showing much promise in pre-clinical studies and in clinical trials. These advances in molecular medicine depend heavily on the availability of well-characterized small and large animal models of hemophilia, primarily hemophilia mice and dogs. Experiments in these animals represent important early and intermediate steps of translational research aimed at the development of better and safer treatments for hemophilia, such as protein and gene therapies or immune tolerance protocols. While murine models are excellent for studies of large groups of animals using genetically defined strains, canine models are important for testing scale-up and for longer-term follow-up as well as for studies that require larger blood volumes. PMID:22137432

  20. Improved correction for the tissue fraction effect in lung PET/CT imaging

    NASA Astrophysics Data System (ADS)

    Holman, Beverley F.; Cuplov, Vesna; Millner, Lynn; Hutton, Brian F.; Maher, Toby M.; Groves, Ashley M.; Thielemans, Kris

    2015-09-01

    Recently, there has been an increased interest in imaging different pulmonary disorders using PET techniques. Previous work has shown, for static PET/CT, that air content in the lung influences reconstructed image values and that it is vital to correct for this 'tissue fraction effect' (TFE). In this paper, we extend this work to include the blood component and also investigate the TFE in dynamic imaging. CT imaging and PET kinetic modelling are used to determine fractional air and blood voxel volumes in six patients with idiopathic pulmonary fibrosis. These values are used to illustrate best and worst case scenarios when interpreting images without correcting for the TFE. In addition, the fractional volumes were used to determine correction factors for the SUV and the kinetic parameters, which were then applied to the patient images. The kinetic parameters K1 and Ki, along with the static parameter SUV, were all found to be affected by the TFE, with both air and blood providing a significant contribution to the errors. Without corrections, errors range from 34-80% in the best case and 29-96% in the worst case. In the patient data, without correcting for the TFE, regions of high density (fibrosis) appeared to have a higher uptake than regions of lower density (normal-appearing tissue); however, this was reversed after air and blood correction. The proposed correction methods are vital for quantitative and relative accuracy. Without these corrections, images may be misinterpreted.
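    The air and blood correction described above reduces to dividing a measured voxel value by its tissue fraction. A minimal sketch of that normalisation, with illustrative fractional volumes rather than patient-derived values:

```python
def tissue_fraction_correct(measured, v_air, v_blood):
    """Correct a voxel value (e.g. SUV or K1) for the tissue fraction effect.

    v_air, v_blood: fractional air and blood volumes of the voxel (0..1).
    The tissue fraction is what remains after removing air and blood.
    """
    tissue_fraction = 1.0 - v_air - v_blood
    if tissue_fraction <= 0.0:
        raise ValueError("voxel contains no tissue")
    return measured / tissue_fraction

# Illustrative values only: a dense fibrotic voxel (little air) versus a
# normal-appearing voxel (mostly air). Uncorrected, fibrosis looks
# "hotter"; per unit of tissue, the ordering can reverse, as reported.
fibrotic = tissue_fraction_correct(2.0, v_air=0.3, v_blood=0.1)   # 2.0/0.6
normal = tissue_fraction_correct(1.0, v_air=0.8, v_blood=0.05)    # 1.0/0.15
```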

  1. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
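    For reference, the four-probe relation that the RCF modifies can be sketched as follows. The π/ln 2 prefactor is the classical result for an infinite thin sheet with equally spaced collinear probes; the RCF itself is taken here as a caller-supplied number, since its tabulated values depend on sample geometry and probe placement.

```python
import math

def sheet_resistance(voltage, current, rcf=1.0):
    """Four-probe sheet resistance (ohm/sq) of a thin sample.

    rcf is the geometry-dependent resistivity correction factor; rcf=1
    recovers the ideal infinite-sheet case.
    """
    return (math.pi / math.log(2)) * (voltage / current) * rcf

def resistivity(voltage, current, thickness_m, rcf=1.0):
    """Bulk resistivity (ohm*m) = sheet resistance times thickness."""
    return sheet_resistance(voltage, current, rcf) * thickness_m
```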

  2. Latent Factor Models and Analyses for Operator Response Times

    DTIC Science & Technology

    1990-09-01

    since imputation is based on presumption of model correctness, and the missing and non-standard values are imputed using the presumed model. The same... [The remainder of this record is unrecoverable OCR residue.]

  3. Polyglutamine Disease Modeling: Epitope Based Screen for Homologous Recombination using CRISPR/Cas9 System.

    PubMed

    An, Mahru C; O'Brien, Robert N; Zhang, Ningzhe; Patra, Biranchi N; De La Cruz, Michael; Ray, Animesh; Ellerby, Lisa M

    2014-04-15

    We have previously reported the genetic correction of Huntington's disease (HD) patient-derived induced pluripotent stem cells using traditional homologous recombination (HR) approaches. To extend this work, we have adopted a CRISPR-based genome editing approach to improve the efficiency of recombination in order to generate allelic isogenic HD models in human cells. Incorporation of a rapid antibody-based screening approach to measure recombination provides a powerful method to determine relative efficiency of genome editing for modeling polyglutamine diseases or understanding factors that modulate CRISPR/Cas9 HR.

  4. The "Residential" Effect Fallacy in Neighborhood and Health Studies: Formal Definition, Empirical Identification, and Correction.

    PubMed

    Chaix, Basile; Duncan, Dustin; Vallée, Julie; Vernez-Moudon, Anne; Benmarhnia, Tarik; Kestens, Yan

    2017-11-01

    Because of confounding from the urban/rural and socioeconomic organizations of territories and resulting correlation between residential and nonresidential exposures, classically estimated residential neighborhood-outcome associations capture nonresidential environment effects, overestimating residential intervention effects. Our study diagnosed and corrected this "residential" effect fallacy bias applicable to a large fraction of neighborhood and health studies. Our empirical application investigated the effect that hypothetical interventions raising the residential number of services would have on the probability that a trip is walked. Using global positioning system (GPS) tracking and mobility surveys over 7 days (227 participants and 7440 trips), we employed a multilevel linear probability model to estimate the trip-level association between residential number of services and walking to derive a naïve intervention effect estimate, and a corrected model accounting for numbers of services at the residence, trip origin, and trip destination to determine a corrected intervention effect estimate (true effect conditional on assumptions). There was a strong correlation in service densities between the residential neighborhood and nonresidential places. From the naïve model, hypothetical interventions raising the residential number of services to 200, 500, and 1000 were associated with an increase by 0.020, 0.055, and 0.109 of the probability of walking in the intervention groups. Corrected estimates were 0.007, 0.019, and 0.039. Thus, naïve estimates were overestimated by multiplicative factors of 3.0, 2.9, and 2.8. Commonly estimated residential intervention-outcome associations substantially overestimate true effects. Our somewhat paradoxical conclusion is that to estimate residential effects, investigators critically need information on nonresidential places visited.

  5. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane underestimates the crack growth rates and overestimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN approximately 10⁻¹⁰ m/cycle).
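    The geometric core of such a correction is that the horizontal extent underestimates the length of an out-of-plane crack path. A minimal sketch of the arc-length part only (the paper's full method also corrects the stress-intensity factors via mixed-mode FEA, which is not reproduced here); the coordinates are hypothetical:

```python
import math

def path_length(points):
    """Arc length of a measured crack path given as (x, y) points (mm).

    Using only the horizontal extent (max x - min x) underestimates the
    true crack extension whenever the crack deviates out of plane.
    """
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Illustrative out-of-plane path: straight growth, then a 45-degree kink.
crack = [(0.0, 0.0), (5.0, 0.0), (6.0, 1.0)]
horizontal = crack[-1][0] - crack[0][0]   # 6.0 mm projected length
true_len = path_length(crack)             # 5 + sqrt(2) mm along the path
```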

  6. Moderate forest disturbance as a stringent test for gap and big-leaf models

    NASA Astrophysics Data System (ADS)

    Bond-Lamberty, B.; Fisk, J. P.; Holm, J. A.; Bailey, V.; Bohrer, G.; Gough, C. M.

    2015-01-01

    Disturbance-induced tree mortality is a key factor regulating the carbon balance of a forest, but tree mortality and its subsequent effects are poorly represented processes in terrestrial ecosystem models. It is thus unclear whether models can robustly simulate moderate (non-catastrophic) disturbances, which tend to increase biological and structural complexity and are increasingly common in aging US forests. We tested whether three forest ecosystem models - Biome-BGC (BioGeochemical Cycles), a classic big-leaf model, and the ZELIG and ED (Ecosystem Demography) gap-oriented models - could reproduce the resilience to moderate disturbance observed in an experimentally manipulated forest (the Forest Accelerated Succession Experiment in northern Michigan, USA, in which 38% of canopy dominants were stem girdled and compared to control plots). Each model was parameterized, spun up, and disturbed following similar protocols and run for 5 years post-disturbance. The models replicated observed declines in aboveground biomass well. Biome-BGC captured the timing and rebound of observed leaf area index (LAI), while ZELIG and ED correctly estimated the magnitude of LAI decline. None of the models fully captured the observed post-disturbance C fluxes, in particular gross primary production or net primary production (NPP). Biome-BGC NPP was correctly resilient but for the wrong reasons, and could not match the absolute observational values. ZELIG and ED, in contrast, exhibited large, unobserved drops in NPP and net ecosystem production. The biological mechanisms proposed to explain the observed rapid resilience of the C cycle are typically not incorporated by these or other models. It is thus an open question whether most ecosystem models will simulate correctly the gradual and less extensive tree mortality characteristic of moderate disturbances.

  7. Moderate forest disturbance as a stringent test for gap and big-leaf models

    DOE PAGES

    Bond-Lamberty, Benjamin; Fisk, Justin P.; Holm, Jennifer; ...

    2015-01-27

    Disturbance-induced tree mortality is a key factor regulating the carbon balance of a forest, but tree mortality and its subsequent effects are poorly represented processes in terrestrial ecosystem models. It is thus unclear whether models can robustly simulate moderate (non-catastrophic) disturbances, which tend to increase biological and structural complexity and are increasingly common in aging US forests. We tested whether three forest ecosystem models – Biome-BGC (BioGeochemical Cycles), a classic big-leaf model, and the ZELIG and ED (Ecosystem Demography) gap-oriented models – could reproduce the resilience to moderate disturbance observed in an experimentally manipulated forest (the Forest Accelerated Succession Experiment in northern Michigan, USA, in which 38% of canopy dominants were stem girdled and compared to control plots). Each model was parameterized, spun up, and disturbed following similar protocols and run for 5 years post-disturbance. The models replicated observed declines in aboveground biomass well. Biome-BGC captured the timing and rebound of observed leaf area index (LAI), while ZELIG and ED correctly estimated the magnitude of LAI decline. None of the models fully captured the observed post-disturbance C fluxes, in particular gross primary production or net primary production (NPP). Biome-BGC NPP was correctly resilient but for the wrong reasons, and could not match the absolute observational values. ZELIG and ED, in contrast, exhibited large, unobserved drops in NPP and net ecosystem production. The biological mechanisms proposed to explain the observed rapid resilience of the C cycle are typically not incorporated by these or other models. It is thus an open question whether most ecosystem models will simulate correctly the gradual and less extensive tree mortality characteristic of moderate disturbances.

  8. Experimental verification of a gain reduction model for the space charge effect in a wire chamber

    NASA Astrophysics Data System (ADS)

    Nagakura, Naoki; Fujii, Kazuki; Harayama, Isao; Kato, Yu; Sekiba, Daiichiro; Watahiki, Yumi; Yamashita, Satoru

    2018-01-01

    A wire chamber often suffers significant saturation of the multiplication factor when the electric field around its wires is strong. An analytical model of this effect has previously been proposed [Y. Arimoto et al., Nucl. Instrum. Meth. Phys. Res. A 799, 187 (2015)], in which the saturation was described by the multiplication factor, energy deposit density per wire length, and one constant parameter. In order to confirm the validity of this model, a multi-wire drift chamber was developed and irradiated by a MeV-range proton beam at the University of Tsukuba. The saturation effect was compared for energy deposits ranging from 70 keV/cm to 180 keV/cm and multiplication factors from 3×10³ to 3×10⁴. The chamber was rotated with respect to the proton beam in order to vary the space charge density around the wires. The energy deposit distribution corrected for the effect was consistent with the result of a Monte Carlo simulation, thus validating the proposed model.

  9. Modeling non-linear growth responses to temperature and hydrology in wetland trees

    NASA Astrophysics Data System (ADS)

    Keim, R.; Allen, S. T.

    2016-12-01

    Growth responses of wetland trees to flooding and climate variations are difficult to model because they depend on multiple, apparently interacting factors, but are a critical link in hydrological control of wetland carbon budgets. To understand tree growth responses to hydrological forcing more generally, we modeled non-linear responses of tree ring growth to flooding and climate at sub-annual time steps, using Vaganov-Shashkin response functions. We calibrated the model to six baldcypress tree-ring chronologies from two hydrologically distinct sites in southern Louisiana, and tested several hypotheses of plasticity in wetland tree responses to interacting environmental variables. The model outperformed traditional multiple linear regression. More importantly, optimized response parameters were generally similar among sites with varying hydrological conditions, suggesting that the functions generalize. Model forms that included interacting responses to multiple forcing factors were more effective than single response functions, indicating that the single-limiting-factor principle does not hold in wetlands and that both climatic and hydrological variables must be considered in predicting responses to hydrological or climate change.
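    Vaganov-Shashkin-type models map each environmental variable through a bounded growth response in [0, 1] and then combine the responses. The sketch below contrasts the single-limiting-factor (minimum) form with an interacting (multiplicative) form, using a hypothetical piecewise-linear ramp response; the parameter values and inputs are illustrative only.

```python
def ramp_response(value, lower, opt_low, opt_high, upper):
    """Piecewise-linear growth response in [0, 1] (Vaganov-Shashkin style).

    Zero below `lower` and above `upper`, one on the optimal plateau
    [opt_low, opt_high], linear ramps in between. Values are hypothetical.
    """
    if value <= lower or value >= upper:
        return 0.0
    if value < opt_low:
        return (value - lower) / (opt_low - lower)
    if value <= opt_high:
        return 1.0
    return (upper - value) / (upper - opt_high)

# Hypothetical daily inputs: temperature (deg C) and soil moisture (frac).
f_temp = ramp_response(12.0, 5.0, 15.0, 25.0, 35.0)   # 0.7
f_water = ramp_response(0.2, 0.05, 0.3, 0.6, 0.9)     # 0.6

limiting = min(f_temp, f_water)      # single-limiting-factor form
interacting = f_temp * f_water       # interacting-responses form
```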

  10. Scatter and cross-talk corrections in simultaneous Tc-99m/I-123 brain SPECT using constrained factor analysis and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Fakhri, G. El; Maksud, P.; Kijewski, M. F.; Haberi, M. O.; Todd-Pokropek, A.; Aurengo, A.; Moore, S. C.

    2000-08-01

    Simultaneous imaging of Tc-99m and I-123 would have a high clinical potential in the assessment of brain perfusion (Tc-99m) and neurotransmission (I-123) but is hindered by cross-talk between the two radionuclides. Monte Carlo simulations of 15 different dual-isotope studies were performed using a digital brain phantom. Several physiologic Tc-99m and I-123 uptake patterns were modeled in the brain structures. Two methods were considered to correct for cross-talk from both scattered and unscattered photons: constrained spectral factor analysis (SFA) and artificial neural networks (ANN). The accuracy and precision of reconstructed pixel values within several brain structures were compared to those obtained with an energy windowing method (WSA). In I-123 images, mean bias was close to 10% in all structures for SFA and ANN and between 14% (in the caudate nucleus) and 25% (in the cerebellum) for WSA. Tc-99m activity was overestimated by 35% in the cortex and 53% in the caudate nucleus with WSA, but by less than 9% in all structures with SFA and ANN. SFA and ANN performed well even in the presence of high-energy I-123 photons. The accuracy was greatly improved by incorporating the contamination into the SFA model or in the learning phase for ANN. SFA and ANN are promising approaches to correct for cross-talk in simultaneous Tc-99m/I-123 SPECT.

  11. Concentration of stresses and strains in a notched cylinder of a viscoplastic material under harmonic loading

    NASA Astrophysics Data System (ADS)

    Zhuk, Ya A.; Senchenkov, I. K.

    1999-02-01

    Certain aspects of the correct definitions of stress and strain concentration factors for elastic-viscoplastic solids under cyclic loading are discussed. Problems concerning the harmonic kinematic excitation of cylindrical specimens with a lateral V-notch are examined. The behavior of the material of a cylinder is modeled using generalized flow theory. An approximate model based on the concept of complex moduli is used for comparison. Invariant characteristics such as stress and strain intensities and maximum principal stress and strain are chosen as constitutive quantities for concentration-factor definitions. The behavior of time-varying factors is investigated. Concentration factors calculated in terms of the amplitudes of the constitutive quantities are used as representative characteristics over the cycle of vibration. The dependences of the concentration factors on the loads are also studied. The accuracy of Neuber's and Birger's formulas is evaluated. The solution of the problem in the approximate formulation agrees with its solution in the exact formulation. The possibilities of the approximate model for estimating low-cycle fatigue are evaluated.

  12. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added in ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when considering this factor. Data on 141 eyes treated with LASIK (laser in situ keratomileusis) using a Gaussian beam profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when using the correction factor. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.
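    The overlap effect at issue can be illustrated by summing Gaussian fluence profiles from neighbouring pulses. This is a 1-D sketch with hypothetical beam parameters; the paper's correction factor is derived for the full 2-D scan pattern and is not reproduced here.

```python
import math

def cumulative_fluence(x, peak_fluence, beam_radius, pitch, n_side):
    """Fluence at position x from a 1-D row of overlapping Gaussian pulses.

    Pulses are centred at k * pitch for k in [-n_side, n_side];
    beam_radius is the 1/e^2 radius w. Illustrative sketch only.
    """
    return sum(
        peak_fluence * math.exp(-2.0 * (x - k * pitch) ** 2 / beam_radius ** 2)
        for k in range(-n_side, n_side + 1))

# A single pulse versus densely overlapped pulses (pitch much smaller
# than w): the overlapped row delivers several times the single-pulse
# fluence at a given point, which overlap-unaware algorithms ignore.
single = cumulative_fluence(0.0, 1.0, beam_radius=0.5, pitch=10.0, n_side=0)
dense = cumulative_fluence(0.0, 1.0, beam_radius=0.5, pitch=0.1, n_side=5)
```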

  13. A Numerical Model for Wind-Wave Prediction in Deep Water.

    DTIC Science & Technology

    1983-01-01

    amounts of gage data are available. Additionally, if all steps are modeled correctly, factors such as direction and angular spreading, which are not...spherical orthogonal system if large oceanic areas are to be modeled. The wave model requires a rectangular grid and wind input at each of the... [The remainder of this record is unrecoverable OCR residue from a Fortran listing.]

  14. Wavefront control performance modeling with WFIRST shaped pupil coronagraph testbed

    NASA Astrophysics Data System (ADS)

    Zhou, Hanying; Nemati, Bijian; Krist, John; Cady, Eric; Kern, Brian; Poberezhskiy, Ilya

    2017-09-01

    NASA's WFIRST mission includes a coronagraph instrument (CGI) for direct imaging of exoplanets. Significant improvement in CGI model fidelity has been made recently, alongside a testbed high contrast demonstration in a simulated dynamic environment at JPL. We present our modeling method and results of comparisons to testbed's high order wavefront correction performance for the shaped pupil coronagraph. Agreement between model prediction and testbed result at better than a factor of 2 has been consistently achieved in raw contrast (contrast floor, chromaticity, and convergence), and with that comes good agreement in contrast sensitivity to wavefront perturbations and mask lateral shear.

  15. In Vivo Gene Therapy of Hemophilia B: Sustained Partial Correction in Factor IX-Deficient Dogs

    NASA Astrophysics Data System (ADS)

    Kay, Mark A.; Rothenberg, Steven; Landen, Charles N.; Bellinger, Dwight A.; Leland, Frances; Toman, Carol; Finegold, Milton; Thompson, Arthur R.; Read, M. S.; Brinkhous, Kenneth M.; Woo, Savio L. C.

    1993-10-01

    The liver represents a model organ for gene therapy. A method has been developed for hepatic gene transfer in vivo by the direct infusion of recombinant retroviral vectors into the portal vasculature, which results in the persistent expression of exogenous genes. To determine if these technologies are applicable for the treatment of hemophilia B patients, preclinical efficacy studies were done in a hemophilia B dog model. When the canine factor IX complementary DNA was transduced directly into the hepatocytes of affected dogs in vivo, the animals constitutively expressed low levels of canine factor IX for more than 5 months. Persistent expression of the clotting factor resulted in reductions of whole blood clotting and partial thromboplastin times of the treated animals. Thus, long-term treatment of hemophilia B patients may be feasible by direct hepatic gene therapy in vivo.

  16. Respiratory Health – Exposure Measurements and Modeling in the Fragrance and Flavour Industry

    PubMed Central

    Angelini, Eric; Camerini, Gerard; Diop, Malick; Roche, Patrice; Rodi, Thomas; Schippa, Christine; Thomas, Thierry

    2016-01-01

    Although the flavor and fragrance industry is about 150 years old, the use of synthetic materials started more than 100 years ago, and awareness of the respiratory hazard presented by some flavoring substances emerged only recently. In 2001, the US National Institute for Occupational Safety and Health (NIOSH) identified for the first time inhalation exposure to flavoring substances in the workplace as a possible occupational hazard. As a consequence, manufacturers must comply with a variety of workplace safety requirements, and management has to ensure the improvement of the health and safety of employees exposed to hazardous volatile organic compounds. In this sensitive context, MANE opened its facilities to an intensive measuring campaign with the objective of better estimating the real level of hazardous respiratory exposure of workers. In this study, exposure to 27 hazardous volatile substances was measured during several types of handling operations (weighing-mixing, packaging, reconditioning-transferring); 430 measurement results were generated and exploited to propose an improved model derived from the well-known ECETOC-TRA model. The quantification of volatile substances in the working atmosphere involved three main steps: adsorption of the chemicals on a solid support, thermal desorption, and analysis by gas chromatography-mass spectrometry. Our approach was to examine experimental measurements made in various manufacturing workplaces and to define correction factors that reflect working conditions and habits more accurately. Four correction factors were adjusted in the ECETOC-TRA to integrate important exposure variation factors: exposure duration, percentage of the substance in the composition, presence of collective protective equipment, and wearing of personal protective equipment.
    Verification of the validity of the model is based on comparison of the values obtained after adaptation of the ECETOC-TRA model, according to various exposure scenarios, with the experimental values measured under real conditions. After examination of the predicted results, 98% of the values obtained with the proposed new model were above the experimental values measured in real conditions. This should be compared with the results of the classical ECETOC-TRA system, which generates only 37% overestimated values. As the values generated by the new model are intended to help decision-makers in the industry implement adapted protective action and information, and considering the high variability of working environments, it was of the utmost importance to us not to underestimate the exposure level. The proposed correction factors have been designed to achieve this goal. We propose the present method as an improved monitoring tool for respiratory health and safety in flavor and fragrance manufacturing facilities. PMID:26863607
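    The structure of such a corrected estimate is a base exposure multiplied by the four modifying factors named above. In this sketch the numeric factor values are placeholders, not the calibrated values from the study.

```python
def corrected_exposure(base_estimate, duration_factor, fraction_in_mixture,
                       lev_factor, ppe_factor):
    """Apply multiplicative correction factors to a base exposure estimate.

    Mirrors the four factors described above: exposure duration, substance
    percentage in the composition, collective protection (e.g. local
    exhaust ventilation), and personal protective equipment. All numeric
    values used below are placeholders.
    """
    return (base_estimate * duration_factor * fraction_in_mixture
            * lev_factor * ppe_factor)

# Hypothetical scenario: short handling task (x0.6), substance at 20% of
# the mixture, local exhaust ventilation present (x0.1), mask worn (x0.1).
estimate = corrected_exposure(100.0, 0.6, 0.2, 0.1, 0.1)  # mg/m3
```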

  17. Effects of spatial disturbance on common loon nest site selection and territory success

    USGS Publications Warehouse

    McCarthy, Kyle P.; DeStefano, Stephen

    2011-01-01

    The common loon (Gavia immer) breeds during the summer on northern lakes and water bodies that are also often desirable areas for aquatic recreation and human habitation. In northern New England, we assessed how the spatial nature of disturbance affects common loon nest site selection and territory success. We found through classification and regression analysis that distance to and density of disturbance factors can be used to classify observed nest site locations versus random points, suggesting that these factors affect loon nest site selection (model 1: Correct classification = 75%, null = 50%, K = 0.507, P < 0.001; model 2: Correct classification = 78%, null = 50%, K = 0.551, P < 0.001). However, in an exploratory analysis, we were unable to show a relation between spatial disturbance variables and breeding success (P = 0.595, R2 = 0.436), possibly because breeding success was so low during the breeding seasons of 2007–2008. We suggest that by selecting nest site locations that avoid disturbance factors, loons thereby limit the effect that disturbance will have on their breeding success. Still, disturbance may force loons to use sub-optimal nesting habitat, limiting the available number of territories, and overall productivity. We advise that management efforts focus on limiting disturbance factors to allow breeding pairs access to the best nesting territories, relieving disturbance pressures that may force sub-optimal nest placement.
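    The reported K values can be read as chance-corrected agreement (Cohen's kappa). Recomputing kappa from the rounded headline accuracies gives values close to, but not exactly equal to, the published 0.507 and 0.551, which come from the full confusion matrices:

```python
def cohens_kappa(observed_accuracy, chance_accuracy):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    return (observed_accuracy - chance_accuracy) / (1.0 - chance_accuracy)

# Model 1: 75% correct against a 50% null rate -> kappa = 0.50
# (reported K = 0.507, from the unrounded confusion matrix).
k1 = cohens_kappa(0.75, 0.50)
# Model 2: 78% correct against a 50% null rate -> kappa = 0.56
# (reported K = 0.551).
k2 = cohens_kappa(0.78, 0.50)
```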

  18. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with a weighted least-squares regression algorithm.
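    The recommended 1/x² weighting corresponds to weighted least squares with weight wᵢ = 1/xᵢ². A minimal sketch, implemented by scaling rows by √wᵢ and solving ordinary least squares; the data are simulated so that σ is proportional to x, the case for which 1/x² is recommended above:

```python
import numpy as np

def weighted_linear_fit(x, y, weights):
    """Least-squares fit of y = a + b*x minimising sum(w_i * r_i^2).

    Implemented by scaling each row of the design matrix and y by
    sqrt(w_i), then solving the ordinary least-squares problem.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    w = np.sqrt(np.asarray(weights, float))
    A = np.column_stack([np.ones_like(x), x]) * w[:, None]
    coef, *_ = np.linalg.lstsq(A, y * w, rcond=None)
    return coef  # [intercept, slope]

# Simulated calibration data whose response SD grows in proportion to the
# concentration x (true line: y = 2x + 0.5).
rng = np.random.default_rng(0)
x = np.array([1, 2, 5, 10, 50, 100, 500, 1000], float)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.05 * x)
intercept, slope = weighted_linear_fit(x, y, weights=1.0 / x**2)
```

    With 1/x² weighting, the low-concentration standards are not swamped by the large absolute residuals at the top of the range, which is why the fitted intercept stays close to the true value.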

  19. [Nonpharmacological correction of low back pain by single or integrated means of medical rehabilitation and the evaluation of their effectiveness].

    PubMed

    Sakalauskiene, Giedre

    2009-01-01

    Low back pain is a worldwide problem, and great attention is given to the correction of this condition by a wide range of rehabilitation specialists. Single or integrated physical factors, physiotherapy, specific and nonspecific physical exercises, alternative methods of treatment, and complexes of multidisciplinary rehabilitation are applied in the management of low back pain. Evidence-based data are analyzed here to identify which nonpharmacological means are effective in pain correction; in addition, the effectiveness of various methods and models of low back pain management is compared. Research data evaluating the effectiveness of single or integrated means of rehabilitation are very controversial. There are no evidence-based specific recommendations for the correction of this condition that objectively assess the advantages of physiotherapy or physical factors and define clear indications for their prescription. Multidisciplinary rehabilitation is thought to be most effective in the management of chronic low back pain. Positive results depend on the experience of the physician and other rehabilitation specialists, and a patient's motivation to participate in the process of pain control is very important; it is recommended to inform the patient about the effectiveness of the administered methods. There is a lack of evidence-based trials evaluating the effectiveness of nonpharmacological methods of pain control in Lithuania. Therefore, greater attention from researchers and administrative structures of health care should be given to this problem in order to develop evidence-based guidelines for the effective correction of low back pain.

  20. Loop expansion around the Bethe approximation through the M-layer construction

    NASA Astrophysics Data System (ADS)

    Altieri, Ada; Chiara Angelini, Maria; Lucibello, Carlo; Parisi, Giorgio; Ricci-Tersenghi, Federico; Rizzo, Tommaso

    2017-11-01

    For every physical model defined on a generic graph or factor graph, the Bethe M-layer construction allows building a different model for which the Bethe approximation is exact in the large M limit, and coincides with the original model for M=1. The 1/M perturbative series is then expressed by a diagrammatic loop expansion in terms of so-called fat diagrams. Our motivation is to study some important second-order phase transitions that do exist on the Bethe lattice, but are either qualitatively different or absent in the corresponding fully connected case. In this case, the standard approach based on a perturbative expansion around the naive mean field theory (essentially a fully connected model) fails. On physical grounds, we expect that when the construction is applied to a lattice in finite dimension there is a small region of the external parameters, close to the Bethe critical point, where strong deviations from mean-field behavior will be observed. In this region, the 1/M expansion for the corrections diverges, and can be the starting point for determining the correct non-mean-field critical exponents using renormalization group arguments. In the end, we will show that the critical series for the generic observable can be expressed as a sum of Feynman diagrams with the same numerical prefactors of field theories. However, the contribution of a given diagram is not evaluated by associating Gaussian propagators to its lines, as in field theories: one has to consider the graph as a portion of the original lattice, replacing the internal lines with appropriate one-dimensional chains, and attaching to the internal points the appropriate number of infinite-size Bethe trees to restore the correct local connectivity of the original model. The actual contribution of each (fat) diagram is the so-called line-connected observable, which also includes contributions from sub-diagrams with appropriate prefactors. 
In order to compute the corrections near to the critical point, Feynman diagrams (with their symmetry factors) can be read directly from the appropriate field-theoretical literature; the computation of momentum integrals is also quite similar; the extra work consists of computing the line-connected observable of the associated fat diagram in the limit of all lines becoming infinitely long.

  1. Development of an Analysis and Design Optimization Framework for Marine Propellers

    NASA Astrophysics Data System (ADS)

    Tamhane, Ashish C.

    In this thesis, a framework for the analysis and design optimization of ship propellers is developed. This framework can be utilized as an efficient synthesis tool not only to determine the main geometric characteristics of the propeller but also to provide the designer with the capability to optimize the shape of the blade sections based on specific criteria. A hybrid lifting-line method with lifting-surface corrections to account for the three-dimensional flow effects has been developed. The prediction of the correction factors is achieved using Artificial Neural Networks and Support Vector Regression. This approach results in increased approximation accuracy compared to existing methods and allows for extrapolation of the correction factor values. The effect of viscosity is implemented in the framework via the coupling of the lifting-line method with the open-source RANSE solver OpenFOAM for the calculation of lift, drag and pressure distribution on the blade sections using a transition k-ω SST turbulence model. Case studies of benchmark high-speed propulsors are utilized in order to validate the proposed framework for propeller operation both in open-water conditions and in a ship's wake.

  2. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    NASA Astrophysics Data System (ADS)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed into a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution is used for calculating the spatial non-uniformity correction for the lamp when combined with the spatial responsivity data of the sphere. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from −0.15% to 0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
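
    A minimal sketch of how such a correction factor could be assembled from the two measured quantities. This is an illustrative reconstruction under simplifying assumptions (discretized directions, known per-direction solid angles), not the paper's actual data processing:

```python
import numpy as np

def spatial_correction_factor(responsivity, intensity_lamp, intensity_ref, solid_angle):
    # mean sphere responsivity as seen by a given lamp's angular intensity
    # distribution, i.e. a solid-angle-weighted average over directions
    def mean_resp(intensity):
        return np.sum(responsivity * intensity * solid_angle) / np.sum(intensity * solid_angle)

    # factor that corrects the test lamp's reading onto the scale set by the
    # reference (calibration) lamp's distribution
    return mean_resp(intensity_ref) / mean_resp(intensity_lamp)
```

    For a perfectly uniform sphere responsivity the factor is 1 regardless of the lamps' angular spreads, which is the sanity check one would expect.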

  3. SU-G-TeP1-08: LINAC Head Geometry Modeling for Cyber Knife System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, B; Li, Y; Liu, B

    Purpose: Knowledge of the LINAC head information is critical for model-based dose calculation algorithms. However, the geometries are difficult to measure precisely. The purpose of this study is to develop LINAC head models for the Cyber Knife system (CKS). Methods: For CKS, the commissioning data were measured in water at 800 mm SAD. The measured full width at half maximum (FWHM) for each cone was found to be greater than the nominal value; this was further confirmed by additional film measurement in air. Diameter correction, cone shift and source shift models (DCM, CSM and SSM) are proposed to account for the differences. In DCM, a cone-specific correction is applied. For CSM and SSM, a single shift is applied to the cone or source physical position. All three models were validated with an in-house developed pencil beam dose calculation algorithm, and further evaluated by the collimator scatter factor (Sc) correction. Results: The mean square error (MSE) between the nominal diameter and the FWHM derived from commissioning data and in-air measurement is 0.54 mm and 0.44 mm, respectively, with the discrepancy increasing with cone size. The optimal shift for CSM and SSM is found to be 9 mm upward and 18 mm downward, respectively. The MSE in FWHM is reduced to 0.04 mm for DCM and 0.14 mm for CSM (SSM). Both DCM and CSM result in the same set of Sc values. Combining all cones at SAD 600–1000 mm, the average deviation from 1 in Sc of DCM (CSM) and SSM is 2.6% and 2.2%, and is reduced to 0.9% and 0.7% for the cones with diameter greater than 15 mm. Conclusion: We developed three geometrical models for CKS. All models can handle the discrepancy between vendor specifications and commissioning data, and SSM has the best performance for Sc correction. The study also validated that a point source can be used in CKS dose calculation algorithms.
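
    The FWHM values compared above can be extracted from a sampled beam profile by locating the half-maximum crossings with linear interpolation. This helper is an illustration of that standard procedure, not code from the study:

```python
import numpy as np

def fwhm(x, y):
    # full width at half maximum of a single-peaked sampled profile,
    # with the half-maximum crossings located by linear interpolation
    half = 0.5 * np.max(y)
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def crossing(i, j):
        # x at which the segment (x[i],y[i])-(x[j],y[j]) reaches the half maximum
        return x[i] + (half - y[i]) * (x[j] - x[i]) / (y[j] - y[i])

    left = x[0] if i0 == 0 else crossing(i0 - 1, i0)
    right = x[-1] if i1 == len(x) - 1 else crossing(i1, i1 + 1)
    return right - left
```

    Applied to a cone's cross-profile, this gives the measured FWHM that the abstract compares against the nominal cone diameter.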

  4. Re-evaluation of the correction factors for the GROVEX

    NASA Astrophysics Data System (ADS)

    Ketelhut, Steffen; Meier, Markus

    2018-04-01

    The GROVEX (GROssVolumige EXtrapolationskammer, large-volume extrapolation chamber) is the primary standard for the dosimetry of low-dose-rate interstitial brachytherapy at the Physikalisch-Technische Bundesanstalt (PTB). In the course of setup modifications and re-measuring of several dimensions, the correction factors have been re-evaluated in this work. The correction factors for scatter and attenuation have been recalculated using the Monte Carlo software package EGSnrc, and a new expression has been found for the divergence correction. The obtained results decrease the measured reference air kerma rate by approximately 0.9% for the representative example of a seed of type Bebig I25.S16C. This lies within the expanded uncertainty (k = 2).

  5. Resolving an ostensible inconsistency in calculating the evaporation rate of sessile drops.

    PubMed

    Chini, S F; Amirfazli, A

    2017-05-01

    This paper resolves an ostensible inconsistency in the literature in calculating the evaporation rate for sessile drops in a quiescent environment. The earlier models in the literature have shown that adapting the evaporation flux model for a suspended spherical drop to calculate the evaporation rate of a sessile drop needs a correction factor; the correction factor was shown to be a function of the drop contact angle, i.e. f(θ). However, there seemed to be a problem as none of the earlier models explicitly or implicitly mentioned the evaporation flux variations along the surface of a sessile drop. The more recent evaporation models include this variation using an electrostatic analogy, i.e. the Laplace equation (steady-state continuity) in a domain with a known boundary condition value, known as the Dirichlet problem for Laplace's equation. The challenge is that the calculated evaporation rates using the earlier models seemed to differ from that of the recent models (note both types of models were validated in the literature by experiments). We have reinvestigated the recent models and found that the mathematical simplifications in solving the Dirichlet problem in toroidal coordinates have created the inconsistency. We also proposed a closed form approximation for f(θ) which is valid in a wide range, i.e. 8°≤θ≤131°. Using the proposed model in this study, theoretically, it was shown that the evaporation rate in the CWA (constant wetted area) mode is faster than the evaporation rate in the CCA (constant contact angle) mode for a sessile drop.

  6. Detector signal correction method and system

    DOEpatents

    Carangelo, Robert M.; Duran, Andrew J.; Kudman, Irwin

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  8. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proven significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
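
    The grouping at the heart of the ISC method can be sketched as follows. This is an illustrative reconstruction, not the authors' code; `optimize_group` is a hypothetical stand-in for one GA run over one group's phase segments:

```python
import numpy as np

def interleaved_groups(n_segments, n_groups):
    # segment i goes to group i mod n_groups, so each group's segments are
    # spread evenly (interleaved) across the SLM rather than clustered together
    idx = np.arange(n_segments)
    return [idx[idx % n_groups == g] for g in range(n_groups)]

def isc_optimize(phase_mask, n_groups, optimize_group):
    # run the GA sequentially on each interleaved group while the other
    # groups stay fixed; the final mask combines the correction phases
    # found for all groups, as in the ISC method described above
    for group in interleaved_groups(phase_mask.size, n_groups):
        phase_mask[group] = optimize_group(phase_mask, group)
    return phase_mask
```

    The groups partition the segment indices exactly, so applying all group corrections together yields one complete phase mask on the SLM.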

  9. Exact moments of the Sachdev-Ye-Kitaev model up to order 1/N²

    NASA Astrophysics Data System (ADS)

    García-García, Antonio M.; Jia, Yiyang; Verbaarschot, Jacobus J. M.

    2018-04-01

    We analytically evaluate the moments of the spectral density of the q-body Sachdev-Ye-Kitaev (SYK) model, and obtain order 1/N² corrections for all moments, where N is the total number of Majorana fermions. To order 1/N, moments are given by those of the weight function of the Q-Hermite polynomials. Representing Wick contractions by rooted chord diagrams, we show that the 1/N² correction for each chord diagram is proportional to the number of triangular loops of the corresponding intersection graph, with an extra grading factor when q is odd. Therefore the problem of finding 1/N² corrections is mapped to a triangle-counting problem. Since the total number of triangles is a purely graph-theoretic property, we can compute them for the q = 1 and q = 2 SYK models, where the exact moments can be obtained analytically using other methods, and therefore we have solved the moment problem for any q to 1/N² accuracy. The moments are then used to obtain the spectral density of the SYK model to order 1/N². We also obtain an exact analytical result for all contraction diagrams contributing to the moments, which can be evaluated up to eighth order. This shows that the Q-Hermite approximation is accurate even for small values of N.

  10. Generalized Effective Medium Theory for Particulate Nanocomposite Materials

    PubMed Central

    Siddiqui, Muhammad Usama; Arif, Abul Fazal M.

    2016-01-01

    The thermal conductivity of particulate nanocomposites is strongly dependent on the size, shape, orientation and dispersion uniformity of the inclusions. To correctly estimate the effective thermal conductivity of the nanocomposite, all these factors should be included in the prediction model. In this paper, the formulation of a generalized effective medium theory for the determination of the effective thermal conductivity of particulate nanocomposites with multiple inclusions is presented. The formulated methodology takes into account all the factors mentioned above and can be used to model nanocomposites with multiple inclusions that are randomly oriented or aligned in a particular direction. The effect of inclusion dispersion non-uniformity is modeled using a two-scale approach. The applications of the formulated effective medium theory are demonstrated using previously published experimental and numerical results for several particulate nanocomposites. PMID:28773817

  11. The Relationship between Incentives to Learn and Maslow's Hierarchy of Needs

    NASA Astrophysics Data System (ADS)

    Wu, Wenling

    This paper empirically surveys a large number of college students regarding their hierarchy of needs and incentives to learn, and finds a linear relationship between the two. The survey shows that several kinds of factors influence the ordering of students' needs. The paper gives several diagrams showing the important factors that most affect college students' hierarchy of needs. It also finds that changes in a student's hierarchy of needs affect the variety of incentives to learn. The paper then develops a model for qualitative analysis of this relationship, and numerical examples are used to demonstrate the model's performance. With this model, appropriate and useful incentive methods can easily be selected for students according to their types of hierarchy of needs.

  12. Consideration of Real World Factors Influencing Greenhouse ...

    EPA Pesticide Factsheets

    This presentation discusses a variety of often-overlooked factors that influence simulated fuel economy and GHG emissions, along with updates made to ALPHA based on benchmarking data observed across a range of vehicles and transmissions. ALPHA model calibration is also examined, focusing on developing generic calibrations for driver behavior, transmission gear selection, and torque converter lockup. In addition, it shows the derivation of correction factors needed to estimate cold-start emission results. The goal is to provide an overview of the ALPHA tool with additional focus on recent updates, first by presenting the approach for validating and calibrating ALPHA to match particular vehicles in a general sense, then by looking at the individual losses and calibration factors likely to influence fuel economy.

  13. Entrance dose measurements for in‐vivo diode dosimetry: Comparison of correction factors for two types of commercial silicon diode detectors

    PubMed Central

    Zhu, X. R.

    2000-01-01

    Silicon diode dosimeters have been used routinely for in-vivo dosimetry. Despite their popularity, an appropriate implementation of an in-vivo dosimetry program using diode detectors remains a challenge for clinical physicists. One common approach is to relate the diode readout to the entrance dose, that is, the dose at the reference depth of maximum dose, such as dmax for the 10×10 cm² field. Various correction factors are needed in order to properly infer the entrance dose from the diode readout, depending on field size, target-to-surface distance (TSD), and accessories (such as wedges and compensator filters). In some clinical practices, however, no correction factor is used. In this case, a diode-dosimeter-based in-vivo dosimetry program may not serve its purpose effectively, that is, to provide an overall check of the dosimetry procedure. In this paper, we provide a formula to relate the diode readout to the entrance dose. Correction factors for TSD, field size, and wedges used in this formula are also clearly defined. Two types of commercial diode detectors, ISORAD (n-type) and the newly available QED (p-type) (Sun Nuclear Corporation), are studied. We compared correction factors for TSDs, field sizes, and wedges. Our results are consistent with the theory of radiation damage of silicon diodes. Radiation damage has been shown to be more serious for n-type than for p-type detectors. In general, both types of diode dosimeters require correction factors depending on beam energy, TSD, field size, and wedge. The magnitudes of corrections for QED (p-type) diodes are smaller than for ISORAD detectors. PACS number(s): 87.66.-a, 87.52.-g PMID:11674824
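
    The relation between diode readout and entrance dose described above is multiplicative. A schematic version follows; the factor names are illustrative, not the paper's notation, and each factor defaults to 1 at the reference condition:

```python
def entrance_dose(readout, calibration_factor, c_tsd=1.0, c_field=1.0, c_wedge=1.0):
    # entrance dose inferred from the diode readout: a calibration factor at
    # reference conditions, multiplied by correction factors for non-reference
    # TSD, field size, and wedge (each 1.0 at the reference setup)
    return readout * calibration_factor * c_tsd * c_field * c_wedge
```

    Omitting the correction factors, as some clinical practices do, is equivalent to fixing them all at 1.0, which is exactly the shortcut the abstract cautions against.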

  14. A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology

    NASA Astrophysics Data System (ADS)

    March, Marisa Cristina

    2018-01-01

    A problem commonly encountered in statistical analysis of data is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample; this is in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude-limited survey, such that the survey is incomplete at fainter magnitudes; that is, certain faint objects are simply not observed. The effect of this `missing data' is manifested as Malmquist bias and can result in biases in parameter inference if it is not accounted for. In frequentist methodologies, the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors. One problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first show the methodology for a simple Gaussian linear model and then go on to show the method for accounting for a truncated data set in the case of cosmological parameter inference with a magnitude-limited supernova Ia survey.
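
    In the simple Gaussian case mentioned above, the Bayesian treatment of truncation amounts to renormalising each data point's likelihood by the probability of its being observed at all. A sketch under the simplifying assumption of a sharp detection limit (this is a generic truncated-Gaussian likelihood, not the poster's full model):

```python
import math

def truncated_gaussian_loglike(data, mu, sigma, limit):
    # log-likelihood of data drawn from N(mu, sigma^2) but observed only when
    # x < limit (e.g. only objects brighter than a survey's magnitude cut);
    # each point's density is renormalised by P(x < limit)
    p_obs = 0.5 * (1.0 + math.erf((limit - mu) / (sigma * math.sqrt(2.0))))
    ll = 0.0
    for x in data:
        ll += -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))
    return ll - len(data) * math.log(p_obs)
```

    As the limit recedes to infinity, p_obs tends to 1 and the expression reduces to the ordinary untruncated Gaussian log-likelihood; ignoring the p_obs term on truncated data is what produces the Malmquist bias.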

  15. Calibrating MMS Electron Drift Instrument (EDI) Ambient Electron Flux Measurements and Characterizing 3D Electric Field Signatures of Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Shuster, J. R.; Torbert, R. B.; Vaith, H.; Argall, M. R.; Li, G.; Chen, L. J.; Ergun, R. E.; Lindqvist, P. A.; Marklund, G. T.; Khotyaintsev, Y. V.; Russell, C. T.; Magnes, W.; Le Contel, O.; Pollock, C. J.; Giles, B. L.

    2015-12-01

    The electron drift instruments (EDIs) onboard each MMS spacecraft are designed with large geometric factors (~0.01 cm² sr) to facilitate detection of weak (~100 nA) electron beams fired and received by the two gun-detector units (GDUs) when EDI is in its "electric field mode" to determine the local electric and magnetic fields. A consequence of the large geometric factor is that "ambient mode" electron flux measurements (500 eV electrons having 0°, 90°, or 180° pitch angle) can vary depending on the orientation of the EDI instrument with respect to the magnetic field, a nonphysical effect that requires a correction. Here, we present determinations of the θ- and φ-dependent correction factors for the eight EDI GDUs, where θ (φ) is the polar (azimuthal) angle between the GDU symmetry axis and the local magnetic field direction, and compare the corrected fluxes with those measured by the fast plasma instrument (FPI). Using these corrected, high time resolution (~1,000 samples per second) ambient electron fluxes, combined with the unprecedentedly high resolution 3D electric field measurements taken by the spin-plane and axial double probes (SDP and ADP), we are equipped to accurately detect electron-scale current layers and electric field waves associated with the non-Maxwellian (anisotropic and agyrotropic) particle distribution functions predicted to exist in the reconnection diffusion region. We compare initial observations of the diffusion region with distributions and wave analysis from PIC simulations of asymmetric reconnection applicable for modeling reconnection at the Earth's magnetopause, where MMS will begin Science Phase 1 as of September 1, 2015.

  16. A drift correction optimization technique for the reduction of the inter-measurement dispersion of isotope ratios measured using a multi-collector plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Doherty, W.; Lightfoot, P. C.; Ames, D. E.

    2014-08-01

    The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement dispersion of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a statistical-range-based merit function ωm which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb, dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ± 3 in the fifth significant figure could be routinely and reliably detected for Cu65/63 and Ni61/62. One of the internal standardization drift correction factors uses a least-squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves and not by two linearly correlated quantities, which is the usual interpretation of these graphs. 
The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale dependent, parametric curve effect.
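
    A bare-bones version of the internal-standardisation idea is shown below. This is illustrative only; the five correction factors compared in the paper, including the least-squares variant discussed above, are more elaborate:

```python
def internal_standard_correction(analyte_ratios, is_ratios, is_reference):
    # multiplicative drift correction: scale each measured analyte isotope ratio
    # by the instantaneous drift observed in the internal-standard isotope
    # ratio relative to its reference (accepted) value
    return [ra * (is_reference / ri) for ra, ri in zip(analyte_ratios, is_ratios)]
```

    When the internal-standard ratio shows no drift, the analyte ratios pass through unchanged; when both ratios drift proportionally, the correction removes the drift entirely, which is the assumption this class of corrections rests on.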

  17. Method of absorbance correction in a spectroscopic heating value sensor

    DOEpatents

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
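
    One plausible reading of the claimed method, using Beer-Lambert absorbances and a subtractive baseline correction; this is a hypothetical sketch, since the patent text above does not specify the exact functional form of the correction:

```python
import math

def corrected_absorbance(i_ref, i_sample, i_ref_na, i_sample_na):
    # absorbance at the analytical wavelength, from reference and sample intensities
    a_measured = -math.log10(i_sample / i_ref)
    # apparent absorbance at a wavelength where the sample fluid does not absorb;
    # anything nonzero there is attributed to scattering, fouling, or drift
    a_correction = -math.log10(i_sample_na / i_ref_na)
    # apply the correction factor to recover the true absorbance
    return a_measured - a_correction
```

    With a clean optical path the non-absorbing channel contributes nothing and the corrected value equals the measured absorbance.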

  18. Prophylactic ranitidine treatment in critically ill children – a population pharmacokinetic study

    PubMed Central

    Hawwa, Ahmed F; Westwood, Paul M; Collier, Paul S; Millership, Jeffrey S; Yakkundi, Shirish; Thurley, Gillian; Shields, Mike D; Nunn, Anthony J; Halliday, Henry L; McElnay, James C

    2013-01-01

    Aims To characterize the population pharmacokinetics of ranitidine in critically ill children and to determine the influence of various clinical and demographic factors on its disposition. Methods Data were collected prospectively from 78 paediatric patients (n = 248 plasma samples) who received oral or intravenous ranitidine for prophylaxis against stress ulcers, gastrointestinal bleeding or the treatment of gastro-oesophageal reflux. Plasma samples were analysed using high-performance liquid chromatography, and the data were subjected to population pharmacokinetic analysis using nonlinear mixed-effects modelling. Results A one-compartment model best described the plasma concentration profile, with an exponential structure for interindividual errors and a proportional structure for intra-individual error. After backward stepwise elimination, the final model showed a significant decrease in objective function value (−12.618; P < 0.001) compared with the weight-corrected base model. Final parameter estimates for the population were 32.1 l h⁻¹ for total clearance and 285 l for volume of distribution, both allometrically modelled for a 70 kg adult. Final estimates for absorption rate constant and bioavailability were 1.31 h⁻¹ and 27.5%, respectively. No significant relationship was found between age and weight-corrected ranitidine pharmacokinetic parameters in the final model, with the covariate for cardiac failure or surgery being shown to reduce clearance significantly by a factor of 0.46. Conclusions Currently, ranitidine dose recommendations are based on children's weights. However, our findings suggest that a dosing scheme that takes into consideration both weight and cardiac failure/surgery would be more appropriate in order to avoid administration of higher or more frequent doses than necessary. PMID:23016949
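
    The allometric scaling reported above can be sketched as follows. The population estimates come from the abstract; the exponents 0.75 (clearance) and 1 (volume) are the conventional allometric assumptions, not values stated in the abstract itself:

```python
def ranitidine_pk_parameters(weight_kg, cardiac_failure_or_surgery=False):
    # population estimates from the abstract, scaled from the 70 kg reference adult
    cl = 32.1 * (weight_kg / 70.0) ** 0.75   # total clearance, l/h
    v = 285.0 * (weight_kg / 70.0)           # volume of distribution, l
    if cardiac_failure_or_surgery:
        cl *= 0.46  # covariate effect on clearance reported in the abstract
    return cl, v
```

    This makes the conclusion concrete: two children of the same weight differ in predicted clearance by a factor of 0.46 depending on the cardiac failure/surgery covariate, so weight alone is not a sufficient basis for dosing.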

  19. Genetic and environmental (physical fitness and sedentary activity) interaction effects on cardiometabolic risk factors in Mexican American children and adolescents.

    PubMed

    Arya, Rector; Farook, Vidya S; Fowler, Sharon P; Puppala, Sobha; Chittoor, Geetha; Resendez, Roy G; Mummidi, Srinivas; Vanamala, Jairam; Almasy, Laura; Curran, Joanne E; Comuzzie, Anthony G; Lehman, Donna M; Jenkinson, Christopher P; Lynch, Jane L; DeFronzo, Ralph A; Blangero, John; Hale, Daniel E; Duggirala, Ravindranath; Diego, Vincent P

    2018-06-01

    Knowledge on genetic and environmental (G × E) interaction effects on cardiometabolic risk factors (CMRFs) in children is limited. The purpose of this study was to examine the impact of G × E interaction effects on CMRFs in Mexican American (MA) children (n = 617, ages 6-17 years). The environments examined were sedentary activity (SA), assessed by recalls from "yesterday" (SAy) and "usually" (SAu), and physical fitness (PF), assessed by Harvard PF scores (HPFS). CMRF data included body mass index (BMI), waist circumference (WC), fat mass (FM), fasting insulin (FI), homeostasis model of assessment-insulin resistance (HOMA-IR), high-density lipoprotein cholesterol (HDL-C), triglycerides (TG), systolic (SBP) and diastolic (DBP) blood pressure, and number of metabolic syndrome components (MSC). We examined potential G × E interaction in the phenotypic expression of CMRFs using variance component models and likelihood-based statistical inference. Significant G × SA interactions were identified for six CMRFs: BMI, WC, FI, HOMA-IR, MSC, and HDL-C, and significant G × HPFS interactions were observed for four CMRFs: BMI, WC, FM, and HOMA-IR. However, after correcting for multiple hypothesis testing, only WC × SAy, FM × SAy, and FI × SAu interactions remained marginally significant. After correcting for multiple testing, most of the CMRFs exhibited significant G × E interactions (Reduced G × E model vs. Constrained model). These findings provide evidence that genetic factors interact with SA and PF to influence variation in CMRFs, and underscore the need for better understanding of these relationships to develop strategies and interventions to effectively reduce or prevent cardiometabolic risk in children. © 2018 WILEY PERIODICALS, INC.

  20. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been approximately included by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p, d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type, through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p, d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states, the Perey-corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to the grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.

  1. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 enrichment experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Kauwe, Martin G.; Medlyn, Belinda E.; Walker, Anthony P.

    Multi-factor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date such models have only been tested against single-factor experiments. We applied 10 TBMs to the multi-factor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multi-factor experiments can be used to constrain models, and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m⁻² yr⁻¹). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against single factors was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they over-estimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology and species composition. Since the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. Finally, we outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.

  2. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 enrichment experiment

    DOE PAGES

    De Kauwe, Martin G.; Medlyn, Belinda E.; Walker, Anthony P.; ...

    2017-02-01

    Multi-factor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date such models have only been tested against single-factor experiments. We applied 10 TBMs to the multi-factor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multi-factor experiments can be used to constrain models, and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m⁻² yr⁻¹). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against single factors was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they over-estimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology and species composition. Since the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. Finally, we outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.

  3. Analysis of U.S. household wood energy consumption: 1967-2009

    Treesearch

    Nianfu Song; Francisco X. Aguilar; Stephen R. Shifley; Michael E. Goerndt

    2012-01-01

    The residential sector consumes about 23% of the energy derived from wood (wood energy) in the U.S. An estimated error correction model with data from 1967 to 2009 suggests that residential wood energy consumption has declined by an average 3% per year in response to technological progress, urbanization, accessibility of non-wood energy, and other factors associated...

  4. Gravity versus radiation models: on the importance of scale and heterogeneity in commuting flows.

    PubMed

    Masucci, A Paolo; Serras, Joan; Johansson, Anders; Batty, Michael

    2013-08-01

    We test the recently introduced radiation model against the gravity model for the system composed of England and Wales, both for commuting patterns and for public transportation flows. The analysis is performed both at macroscopic scales, i.e., the national scale, and at microscopic scales, i.e., the city level. It is shown that the thermodynamic-limit assumption of the original radiation model significantly underestimates the commuting flows for large cities. We then generalize the radiation model, introducing the correct normalization factor for finite systems. We show that even though the gravity model has better overall performance, the parameter-free radiation model gives competitive results, especially at large scales.
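
    The finite-system normalization mentioned above can be sketched directly from the standard radiation-model flux formula. This is a minimal illustration under the assumption that the correction is the usual 1/(1 − m_i/M) factor for a finite total population M; the function name and arguments are illustrative, not the authors' code.

```python
def radiation_flux(T_i, m_i, n_j, s_ij, M=None):
    """Average flux from site i to j under the radiation model.

    T_i  : total trips originating at i
    m_i  : population of the origin
    n_j  : population of the destination
    s_ij : population within distance r_ij of i, excluding m_i and n_j
    M    : total system population; if given, apply the assumed
           finite-system normalization 1/(1 - m_i/M)
    """
    flux = T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))
    if M is not None:
        flux /= 1.0 - m_i / M  # finite-system correction
    return flux
```

    For a large city (m_i a sizeable fraction of M) the corrected flux is noticeably larger than the thermodynamic-limit value, which is the direction of the underestimation reported in the abstract.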

  5. S-NPP VIIRS thermal emissive band gain correction during the blackbody warm-up-cool-down cycle

    NASA Astrophysics Data System (ADS)

    Choi, Taeyoung J.; Cao, Changyong; Weng, Fuzhong

    2016-09-01

    The Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) has onboard calibrators, the blackbody (BB) and the Space View (SV), for Thermal Emissive Band (TEB) radiometric calibration. In normal operation, the BB temperature is set to 292.5 K, providing one radiance level. From NOAA's Integrated Calibration and Validation System (ICVS), the TEB calibration factors (F-factors) have been trended and show very stable responses; however, the BB Warm-Up-Cool-Down (WUCD) cycles provide detector gain and temperature-dependent sensitivity measurements. Since the launch of S-NPP, the NOAA Sea Surface Temperature (SST) group has noticed unexpected global SST anomalies during the WUCD cycles. In this study, the TEB F-factors are calculated during the WUCD cycle of June 17, 2015. The TEB F-factors are analyzed by identifying the VIIRS On-Board Calibrator Intermediate Product (OBCIP) files as Warm-Up or Cool-Down granules. To correct the SST anomaly, an F-factor correction parameter is calculated from modified C1 (or b1) values derived from the linear portion of the C1 coefficient during the WUCD. Applying the correction back to the original VIIRS SST bands significantly reduces the F-factor changes. Obvious improvements are observed in M12, M14 and M16, but correction effects are hardly seen in M16. Further investigation is needed to find the source of the F-factor oscillations during the WUCD.

  6. S-factor for radiative capture reactions for light nuclei at astrophysical energies

    NASA Astrophysics Data System (ADS)

    Ghasemi, Reza; Sadeghi, Hossein

    2018-06-01

    The astrophysical S-factors of thermonuclear reactions, including radiative capture reactions, and their analysis in the framework of different theoretical models are central to describing nuclear processes in stars. We have studied the importance of radiative capture reactions in the framework of a potential model. Investigating these reactions at astrophysical energies is of great interest in both astrophysics and nuclear physics for developing correct models of the burning and evolution of stars. Direct experimental measurement is very difficult, because these reactions occur at low energies. In this paper we calculate radiative capture astrophysical S-factors for nuclei in the mass region A < 17. We calculate the astrophysical factor for the electric dipole (E1), magnetic dipole (M1), and electric quadrupole (E2) transitions using the M3Y potential for non-resonant and resonant captures. We then obtain the parameters of the central and spin-orbit parts of the M3Y potential and the spectroscopic factors for the reaction channels. The astrophysical S-factors obtained here are in good agreement with experimental data and other theoretical methods.

  7. The effects of deterioration and technological levels on pollutant emission factors for gasoline light-duty trucks.

    PubMed

    Zhang, Qingyu; Fan, Juwang; Yang, Weidong; Chen, Bixin; Zhang, Lijuan; Liu, Jiaoyu; Wang, Jingling; Zhou, Chunyao; Chen, Xuan

    2017-07-01

    Vehicle deterioration and technological change influence emission factors (EFs). In this study, the impacts of vehicle deterioration and emission standards on EFs of regulated pollutants (carbon monoxide [CO], hydrocarbon [HC], and nitrogen oxides [NOx]) for gasoline light-duty trucks (LDTs) were investigated using inspection and maintenance (I/M) data obtained with a chassis dynamometer method. Pollutant EFs for LDTs varied markedly with accumulated mileage and emission standard, and EF trends track accumulated mileage. In addition, the study found that in most cases the median EFs of CO, HC, and NOx are higher than the basic EFs in the International Vehicle Emissions (IVE) model; therefore, the present study provides correction factors for the IVE model relative to the corresponding emission standards and mileages. Vehicle emissions are currently major contributors to air pollution in cities, especially in developing countries. Emission factors play a key role in building emission inventories and estimating emissions. Deterioration, represented by vehicle age and accumulated mileage, and changes in emission standards markedly influence emission factors. The results provide correction factors for application in the IVE model at the regional level.

  8. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    NASA Astrophysics Data System (ADS)

    Kilcrease, D. P.; Brookes, S.

    2013-12-01

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross sections for ions is the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical behavior of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross section similar to that from the more time-consuming Coulomb-Born approximation. We compare this new approximation with other commonly employed correction procedures. We also examine further modifications to our Born-Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb-Born approximation for singly charged ions, which more accurately approximates convergent close-coupling calculations.
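
    For illustration, the commonly quoted form of the Elwert factor from bremsstrahlung theory can be written down as follows. This is a sketch of the general idea only: the function name, the eV/Rydberg parameterization, and its use as a multiplicative near-threshold fix are assumptions, not the paper's exact prescription.

```python
import math

def elwert_factor(Z, E_i, E_f, Ry=13.605693):
    """Elwert-Sommerfeld factor built from the Sommerfeld parameters of
    the incoming and outgoing electron.

    eta = Z * sqrt(Ry / E) is the Sommerfeld parameter for an electron
    of kinetic energy E (in eV) in the field of net charge Z.  As the
    final energy E_f -> 0 (threshold), the factor grows and compensates
    the vanishing plane-wave Born cross section.
    """
    eta_i = Z * math.sqrt(Ry / E_i)
    eta_f = Z * math.sqrt(Ry / E_f)
    return (eta_f / eta_i) * (1.0 - math.exp(-2 * math.pi * eta_i)) \
                           / (1.0 - math.exp(-2 * math.pi * eta_f))
```

    By construction the factor is 1 when the initial and final energies coincide, and exceeds 1 near threshold, which is the qualitative behavior the correction procedure needs.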

  9. Factors affecting the outcome of excimer laser photorefractive keratectomy: a preliminary multivariable regression analysis

    NASA Astrophysics Data System (ADS)

    Maguen, Ezra I.; Papaioannou, Thanassis; Nesburn, Anthony B.; Salz, James J.; Warren, Cathy; Grundfest, Warren S.

    1996-05-01

    Multivariable regression analysis was used to evaluate the combined effects of some preoperative and operative variables on the change of refraction following excimer laser photorefractive keratectomy for myopia (PRK). This analysis was performed on 152 eyes (at 6 months postoperatively) and 156 eyes (at 12 months postoperatively). The following variables were considered: intended refractive correction, patient age, treatment zone, central corneal thickness, average corneal curvature, and intraocular pressure. At 6 months after surgery, the cumulative R² was 0.43, with 0.38 attributed to the intended correction and 0.06 attributed to the preoperative corneal curvature. At 12 months, the cumulative R² was 0.37, where 0.33 was attributed to the intended correction, 0.02 to the preoperative corneal curvature, and 0.01 each to preoperative corneal thickness and patient age. Further model augmentation is necessary to account for the remaining variability and the behavior of the residuals.

  10. Higgs boson decay into b-quarks at NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Tramontano, Francesco; Trócsányi, Zoltán

    2015-04-01

    We compute the fully differential decay rate of the Standard Model Higgs boson into b-quarks at next-to-next-to-leading order (NNLO) accuracy in αs. We employ a general subtraction scheme developed for the calculation of higher order perturbative corrections to QCD jet cross sections, which is based on the universal infrared factorization properties of QCD squared matrix elements. We show that the subtractions render the various contributions to the NNLO correction finite. In particular, we demonstrate analytically that the sum of integrated subtraction terms correctly reproduces the infrared poles of the two-loop double virtual contribution to this process. We present illustrative differential distributions obtained by implementing the method in a parton level Monte Carlo program. The basic ingredients of our subtraction scheme, used here for the first time to compute a physical observable, are universal and can be employed for the computation of more involved processes.

  11. Extracting Baseline Electricity Usage Using Gradient Tree Boosting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Taehoon; Lee, Dongeun; Choi, Jaesik

    To understand how specific interventions affect a process observed over time, we need to control for the other factors that influence outcomes. Such a model that captures all factors other than the one of interest is generally known as a baseline. In our study of how different pricing schemes affect residential electricity consumption, the baseline would need to capture the impact of outdoor temperature along with many other factors. In this work, we examine a number of different data mining techniques and demonstrate Gradient Tree Boosting (GTB) to be an effective method to build the baseline. We train GTB on data prior to the introduction of new pricing schemes, and apply the known temperature following the introduction of new pricing schemes to predict electricity usage with the expected temperature correction. Our experiments and analyses show that the baseline models generated by GTB capture the core characteristics over the two years with the new pricing schemes. In contrast to the majority of regression-based techniques which fail to capture the lag between the peak of daily temperature and the peak of electricity usage, the GTB-generated baselines are able to correctly capture the delay between the temperature peak and the electricity peak. Furthermore, subtracting this temperature-adjusted baseline from the observed electricity usage, we find that the resulting values are more amenable to interpretation, which demonstrates that the temperature-adjusted baseline is indeed effective.
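
    The workflow described (train on pre-intervention data, predict the post-intervention period from observed temperature, subtract to isolate the intervention effect) can be sketched with scikit-learn's gradient boosting. All data below is synthetic and the feature choice is illustrative; it is not the study's dataset or model configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data (invented for illustration): one year of hourly
# temperature with a daily cycle, and electricity usage that lags the
# temperature peak by about 2 hours.
hours = np.arange(24 * 365)
temp = 20 + 10 * np.sin(2 * np.pi * ((hours % 24) - 14) / 24)
usage = 5 + 0.3 * np.roll(temp, 2) + rng.normal(0, 0.2, hours.size)

X = np.column_stack([hours % 24, temp])
split = hours.size // 2  # pretend a new pricing scheme starts here

# Train the baseline only on data prior to the intervention.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X[:split], usage[:split])

# Predict post-intervention usage from the observed temperature:
# this is the temperature-corrected baseline.
baseline = model.predict(X[split:])

# Subtracting the baseline isolates the temperature-adjusted effect of
# the intervention; in this synthetic example there is none, so the
# residual hovers near zero.
effect = usage[split:] - baseline
```

    Because trees split on the hour-of-day feature, this kind of model can represent the lag between the temperature peak and the usage peak that plain linear regression on temperature misses.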

  12. Quasiparticle properties at microwave frequencies in the underdoped YBa2Cu3O7-δ thin films

    NASA Astrophysics Data System (ADS)

    Hsing, Lai

    2004-03-01

    Microstrip ring resonators with quality factor (Q) over 10^4 at 5 K were fabricated using double-sided YBa2Cu3O7-δ (YBCO) films deposited on LaAlO3 (LAO) substrates. By placing a narrow gap in the ring resonator, the original fundamental resonating mode (3.61 GHz) splits into two modes (1.80 GHz and 5.33 GHz) with distinct resonating frequencies. The samples allow us to determine the temperature and frequency dependences of the penetration depth and microwave conductivity for various underdoped cuprates using the Drude formula and the modified two-fluid model. The order parameter of high-Tc superconductivity in the underdoped cases is shown to be of d-wave type. In particular, the Fermi-liquid correction factor α² and the vertex correction factor β from the model proposed by Wen and Lee can be estimated: α² is doping-independent in the underdoped regime, and β decreases as the oxygen content decreases in our experimental data. All these results are independent of frequency as well. The results reveal that the interaction between quasiparticles is insensitive to the impurity concentration arising from oxygen deficiency on the CuO chain, and that the impurity potential for forward scattering approaches that for back scattering as oxygen deficiency increases.

  13. On Complexities of Impact Simulation of Fiber Reinforced Polymer Composites: A Simplified Modeling Framework

    PubMed Central

    Alemi-Ardakani, M.; Milani, A. S.; Yannacopoulos, S.

    2014-01-01

    Impact modeling of fiber reinforced polymer composites is a complex and challenging task, in particular for practitioners with less experience in advanced coding and user-defined subroutines. Different numerical algorithms have been developed over the past decades for impact modeling of composites, yet a considerable gap often exists between predicted and experimental observations. In this paper, after a review of reported sources of complexities in impact modeling of fiber reinforced polymer composites, two simplified approaches are presented for fast simulation of out-of-plane impact response of these materials considering four main effects: (a) strain rate dependency of the mechanical properties, (b) difference between tensile and flexural bending responses, (c) delamination, and (d) the geometry of fixture (clamping conditions). In the first approach, it is shown that by applying correction factors to the quasistatic material properties, which are often readily available from material datasheets, the role of these four sources in modeling impact response of a given composite may be accounted for. As a result a rough estimation of the dynamic force response of the composite can be attained. To show the application of the approach, a twill woven polypropylene/glass reinforced thermoplastic composite laminate has been tested under 200 J impact energy and was modeled in Abaqus/Explicit via the built-in Hashin damage criteria. X-ray microtomography was used to investigate the presence of delamination inside the impacted sample. Finally, as a second and much simpler modeling approach it is shown that applying only a single correction factor over all material properties at once can still yield a reasonable prediction. Both advantages and limitations of the simplified modeling framework are addressed in the performed case study. PMID:25431787

  14. Experimental setup for the determination of the correction factors of the neutron doseratemeters in fast neutron fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliescu, Elena; Bercea, Sorin; Dudu, Dorin

    2013-12-16

    The U-120 Cyclotron of IFIN-HH was used to set up a fast-neutron testing bench for determining the correction factors of dose-rate meters dedicated to neutron measurement. This paper describes the research performed to develop this irradiation facility based on the fast-neutron flux generated at the cyclotron. The facility is presented, together with the results obtained in determining the correction factor for a dose-rate meter dedicated to neutron dose equivalent rate measurement.

  15. Groundwater Vulnerability Assessment of the Pingtung Plain in Southern Taiwan.

    PubMed

    Liang, Ching-Ping; Jang, Cheng-Shin; Liang, Cheng-Wei; Chen, Jui-Sheng

    2016-11-23

    In the Pingtung Plain of southern Taiwan, elevated levels of NO₃⁻-N in groundwater have been reported. Therefore, efforts to assess groundwater vulnerability are required as part of the critical steps to prevent and control groundwater pollution. This study makes a groundwater vulnerability assessment for the Pingtung Plain using an improved overlay and index-based DRASTIC model. The improvement of the DRASTIC model is achieved by reassigning the weighting coefficients of the factors in this model with the help of a discriminant analysis statistical method. The analytical results obtained from the improved DRASTIC model provide a reliable prediction of groundwater vulnerability to nitrate pollution and can correctly identify the groundwater protection zones in the Pingtung Plain. Moreover, the results of the sensitivity analysis conducted for the seven parameters in the improved DRASTIC model demonstrate that the aquifer media (A) is the most sensitive factor when the nitrate-N concentration is below 2.5 mg/L. For cases where the nitrate-N concentration is above 2.5 mg/L, the aquifer media (A) and net recharge (R) are the two most important factors.
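
    The overlay-and-index calculation behind DRASTIC is a weighted sum of seven factor ratings. A minimal sketch follows, using the conventional default weights; the study's discriminant-analysis-derived weights are not reproduced here, and the example cell ratings are invented.

```python
# Conventional DRASTIC weights for the seven hydrogeologic factors:
# Depth to water, net Recharge, Aquifer media, Soil media, Topography,
# Impact of vadose zone, hydraulic Conductivity.  The paper reassigns
# these via discriminant analysis; the values below are the defaults.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Overlay-and-index vulnerability score: sum of rating x weight
    over the seven factors, with ratings on a 1-10 scale."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# Hypothetical grid cell: higher index -> higher intrinsic vulnerability.
cell = {"D": 9, "R": 6, "A": 8, "S": 6, "T": 10, "I": 8, "C": 4}
score = drastic_index(cell)
```

    With ratings bounded by 1-10, any score falls between 23 and 230 under these weights, which is what makes the index easy to map into vulnerability classes.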

  16. Slip Correction Measurements of Certified PSL Nanoparticles Using a Nanometer Differential Mobility Analyzer (Nano-DMA) for Knudsen Number From 0.5 to 83

    PubMed Central

    Kim, Jung Hyeun; Mulholland, George W.; Kukuck, Scott R.; Pui, David Y. H.

    2005-01-01

    The slip correction factor has been investigated at reduced pressures and high Knudsen number using polystyrene latex (PSL) particles. Nano-differential mobility analyzers (NDMA) were used in determining the slip correction factor by measuring the electrical mobility of 100.7 nm, 269 nm, and 19.90 nm particles as a function of pressure. The aerosol was generated via electrospray to avoid multiplets for the 19.90 nm particles and to reduce the contaminant residue on the particle surface. System pressure was varied down to 8.27 kPa, enabling slip correction measurements for Knudsen numbers as large as 83. A condensation particle counter was modified for low pressure application. The slip correction factor obtained for the three particle sizes is fitted well by the equation: C = 1 + Kn (α + β exp(−γ/Kn)), with α = 1.165, β = 0.483, and γ = 0.997. The first quantitative uncertainty analysis for slip correction measurements was carried out. The expanded relative uncertainty (95 % confidence interval) in measuring slip correction factor was about 2 % for the 100.7 nm SRM particles, about 3 % for the 19.90 nm PSL particles, and about 2.5 % for the 269 nm SRM particles. The major sources of uncertainty are the diameter of particles, the geometric constant associated with NDMA, and the voltage. PMID:27308102
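
    The fitted expression reported in the abstract can be written down directly. A minimal sketch; `Kn` is the Knudsen number and the defaults are the paper's fitted constants.

```python
import math

def slip_correction(Kn, alpha=1.165, beta=0.483, gamma=0.997):
    """Slip correction factor in the form fitted by the study:
    C = 1 + Kn * (alpha + beta * exp(-gamma / Kn))."""
    return 1.0 + Kn * (alpha + beta * math.exp(-gamma / Kn))
```

    The factor grows monotonically with Knudsen number, so the smallest (19.90 nm) particles at the lowest pressures see by far the largest mobility enhancement.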

  17. Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.

    2010-12-01

    Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only observed temporal variability on a point-by-point basis, not spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserved the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall.
The spatiotemporal variability of the spatio-temporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
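
    One simple building block of such bias correction, matching the long-term mean and variance of a GCM series to observations, can be sketched as follows. This is a one-grid-cell illustration only; the paper's method additionally reconstructs the observed spatial correlation via random fields, which is not attempted here.

```python
import numpy as np

def bias_correct(gcm, obs_mean, obs_std):
    """Mean-and-variance scaling for a single grid cell: rescale a GCM
    series so its long-term mean and standard deviation match the
    gridded observations.  Illustrative only; real precipitation
    correction must also handle non-negativity and wet-day frequency."""
    g = np.asarray(gcm, dtype=float)
    return obs_mean + (g - g.mean()) * (obs_std / g.std())
```

    Applied independently at every grid cell, this removes marginal biases but leaves inter-cell correlation untouched, which is exactly the gap the spatial random-field step in the study is designed to fill.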

  18. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, Robert M.; Hamblen, David G.; Brouillette, Carl R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  19. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, R.M.; Hamblen, D.G.; Brouillette, C.R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  20. Mathematical modeling of erythrocyte chimerism informs genetic intervention strategies for sickle cell disease.

    PubMed

    Altrock, Philipp M; Brendel, Christian; Renella, Raffaele; Orkin, Stuart H; Williams, David A; Michor, Franziska

    2016-09-01

    Recent advances in gene therapy and genome-engineering technologies offer the opportunity to correct sickle cell disease (SCD), a heritable disorder caused by a point mutation in the β-globin gene. The developmental switch from fetal γ-globin to adult β-globin is governed in part by the transcription factor (TF) BCL11A. This TF has been proposed as a therapeutic target for reactivation of γ-globin and concomitant reduction of β-sickle globin. In this and other approaches, genetic alteration of a portion of the hematopoietic stem cell (HSC) compartment leads to a mixture of sickling and corrected red blood cells (RBCs) in periphery. To reverse the sickling phenotype, a certain proportion of corrected RBCs is necessary; the degree of HSC alteration required to achieve a desired fraction of corrected RBCs remains unknown. To address this issue, we developed a mathematical model describing aging and survival of sickle-susceptible and normal RBCs; the former can have a selective survival advantage leading to their overrepresentation. We identified the level of bone marrow chimerism required for successful stem cell-based gene therapies in SCD. Our findings were further informed using an experimental mouse model, where we transplanted mixtures of Berkeley SCD and normal murine bone marrow cells to establish chimeric grafts in murine hosts. Our integrative theoretical and experimental approach identifies the target frequency of HSC alterations required for effective treatment of sickling syndromes in humans. Our work replaces episodic observations of such target frequencies with a mathematical modeling framework that covers a large and continuous spectrum of chimerism conditions. Am. J. Hematol. 91:931-937, 2016. © 2016 Wiley Periodicals, Inc.

  1. Altitude exposure in sports: the Athlete Biological Passport standpoint.

    PubMed

    Sanchis-Gomar, Fabian; Pareja-Galeano, Helios; Brioche, Thomas; Martinez-Bello, Vladimir; Lippi, Giuseppe

    2014-03-01

    The Athlete Biological Passport (ABP) is principally founded on monitoring an athlete's biological variables over time, to identify abnormal biases on a longitudinal basis. Several factors are known to influence the results of these markers. However, the manner in which the altitude factor is taken into account still needs to be standardized. Causal relationships between haematological variables should be correctly integrated into ABP software. In particular, modifications of haematological parameters during and after exposure to different altitudes/hypoxic protocols need to be properly included within detection models. Copyright © 2013 John Wiley & Sons, Ltd.

  2. A comparison of quality of present-day heat flow obtained from BHTs, Horner Plots of Malay Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waples, D.W.; Mahadir, R.

    1994-07-01

    Reconciling temperature data obtained from single BHT measurements, multiple BHTs at a single depth, RFTs, and DSTs is very difficult. Data quality varied widely; however, DST data were assumed to be the most reliable. Data from 87 wells were used in this study, but only 47 wells have DST data. The BASINMOD program was used to calculate the present-day heat flow, using measured thermal conductivity and calibrated against the DST data. The heat flows obtained from the DST data were assumed to be correct and representative throughout the basin. Then, heat flows were calculated using (1) uncorrected RFT data, (2) multiple BHT data corrected by the Horner plot method, and (3) single BHT values corrected upward by a standard 10%. All three of these heat-flow populations had standard deviations identical to that of the DST data, but with significantly lower mean values. Correction factors were calculated to give each of the three erroneous populations the same mean value as the DST population. Heat flows calculated from RFT data had to be corrected upward by a factor of 1.12 to be equivalent to DST data; Horner plot data by a factor of 1.18, and single BHT data by a factor of 1.2. These results suggest that present-day subsurface temperatures derived from RFT, Horner plot, and BHT data are considerably lower than they should be. The authors suspect qualitatively similar results would be found in other areas. Hence, they recommend significant corrections be routinely made until local calibration factors are established.
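
    The correction-factor arithmetic in the study reduces to matching population means. A minimal sketch; the heat-flow values below are invented for illustration and are not the paper's data.

```python
import statistics

def correction_factor(reference, biased):
    """Factor that gives a biased heat-flow population the same mean as
    the reference (DST-derived) population, as done in the study."""
    return statistics.mean(reference) / statistics.mean(biased)

# Hypothetical illustration: a population whose mean runs ~12% low needs
# a factor close to the paper's 1.12 for RFT-derived heat flows.
dst = [60.0, 62.0, 58.0]          # mW/m^2, invented reference values
rft = [x / 1.12 for x in dst]     # invented biased values
factor = correction_factor(dst, rft)
corrected = [x * factor for x in rft]
```

    Because the biased populations had the same spread as the DST population, a single multiplicative factor per data type is enough to reconcile them.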

  3. Isolating Curvature Effects in Computing Wall-Bounded Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2001-01-01

    The flow over the zero-pressure-gradient So-Mellor convex curved wall is simulated using the Navier-Stokes equations. An inviscid effective outer wall shape, undocumented in the experiment, is obtained by using an adjoint optimization method with the desired pressure distribution on the inner wall as the cost function. Using this wall shape with a Navier-Stokes method, the abilities of various turbulence models to simulate the effects of curvature without the complicating factor of streamwise pressure gradient can be evaluated. The one-equation Spalart-Allmaras turbulence model overpredicts eddy viscosity, and its boundary layer profiles are too full. A curvature-corrected version of this model improves results, which are sensitive to the choice of a particular constant. An explicit algebraic stress model does a reasonable job predicting this flow field. However, results can be slightly improved by modifying the assumption on anisotropy equilibrium in the model's derivation. The resulting curvature-corrected explicit algebraic stress model possesses no heuristic functions or additional constants. It lowers slightly the computed skin friction coefficient and the turbulent stress levels for this case (in better agreement with experiment), but the effect on computed velocity profiles is very small.

  4. Flavor structure of the nucleon electromagnetic form factors and transverse charge densities in the chiral quark-soliton model

    NASA Astrophysics Data System (ADS)

    Silva, António; Urbano, Diana; Kim, Hyun-Chul

    2018-02-01

    We investigate the flavor decomposition of the electromagnetic form factors of the nucleon, based on the chiral quark-soliton model (χQSM) with symmetry-conserving quantization. We consider the rotational 1/N_c and linear strange-quark mass (ms) corrections. We discuss the results of the flavor-decomposed electromagnetic form factors in comparison with the recent experimental data. In order to see the effects of the strange quark, we compare the SU(3) results with those of SU(2). Finally, we discuss the transverse charge densities for both unpolarized and polarized nucleons. The transverse charge density inside a neutron turns out to be negative in the vicinity of the center within the SU(3) χQSM, which can be explained by the contribution of the strange quark.

  5. Heavy quark form factors at two loops

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Behring, A.; Blümlein, J.; Falcioni, G.; De Freitas, A.; Marquard, P.; Rana, N.; Schneider, C.

    2018-05-01

We compute the two-loop QCD corrections to the heavy quark form factors in the case of the vector, axial-vector, scalar and pseudoscalar currents up to second order in the dimensional parameter ε = (4 - D)/2. These terms are required in the renormalization of the higher-order corrections to these form factors.

  6. Thermal and Compositional Variation of Glassy Metal Structure Factors.

    NASA Astrophysics Data System (ADS)

    From, Milton

The x-ray total structure factor of the glassy-metal alloys Mg70Zn30, Ca70Mg30 and Mg85.5Cu14.5 has been measured at three temperatures: 9 K, 150 K, and 300 K. The data have a statistical precision of about 0.8% and an absolute accuracy of roughly 3%. Percus-Yevick hard-sphere structure factors may be fitted quite accurately to the data in the region of the first peak. In addition, the variation of the experimental structure factor with composition is found to be consistent with the Percus-Yevick theory. At low k values, Percus-Yevick and other theoretical model structure factors are in poor agreement with the data. Within experimental error, the temperature dependence of the structure factors is in agreement with the Debye plane-wave phonon model of atomic vibrations. The measured structure factors are used to calculate the electrical resistivity from the Faber-Ziman equation. In most cases, the calculations yield both the correct magnitude of the resistivity and the sign of the temperature coefficient of resistivity.

  7. Determination of correction factors in beta radiation beams using Monte Carlo method.

    PubMed

    Polo, Ivón Oramas; Santos, William de Souza; Caldas, Linda V E

    2018-06-15

The absorbed dose rate is the main characterization quantity for beta radiation. The extrapolation chamber is considered the primary standard instrument. To determine absorbed dose rates in beta radiation beams, it is necessary to establish several correction factors. In this work, the correction factors for the backscatter due to the collecting electrode and to the guard ring, and the correction factor for Bremsstrahlung in beta secondary standard radiation beams are presented. For this purpose, the Monte Carlo method was applied. The results obtained are considered acceptable, and they agree within the uncertainties. The differences between the backscatter factors determined by the Monte Carlo method and those of the ISO standard were 0.6%, 0.9% and 2.04% for 90Sr/90Y, 85Kr and 147Pm sources, respectively. The differences between the Bremsstrahlung factors determined by the Monte Carlo method and those of the ISO standard were 0.25%, 0.6% and 1% for 90Sr/90Y, 85Kr and 147Pm sources, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Statistical models to predict type 2 diabetes remission after bariatric surgery.

    PubMed

    Ramos-Levi, Ana M; Matia, Pilar; Cabrerizo, Lucio; Barabash, Ana; Sanchez-Pernaute, Andres; Calle-Pascual, Alfonso L; Torres, Antonio J; Rubio, Miguel A

    2014-09-01

Type 2 diabetes (T2D) remission may be achieved after bariatric surgery (BS), but rates vary according to patients' baseline characteristics. The present study evaluates the relevance of several preoperative factors and develops statistical models to predict T2D remission 1 year after BS. We retrospectively studied 141 patients (57.4% women), with a preoperative diagnosis of T2D, who underwent BS in a single center (2006-2011). Anthropometric and glucose metabolism parameters before surgery and at 1-year follow-up were recorded. Remission of T2D was defined according to consensus criteria: HbA1c <6%, fasting glucose (FG) <100 mg/dL, absence of pharmacologic treatment. The influence of several preoperative factors was explored and different statistical models to predict T2D remission were elaborated using logistic regression analysis. Three preoperative characteristics considered individually were identified as the most powerful predictors of T2D remission: C-peptide (R² = 0.249; odds ratio [OR] 1.652, 95% confidence interval [CI] 1.181-2.309; P = 0.003), T2D duration (R² = 0.197; OR 0.869, 95% CI 0.808-0.935; P < 0.001), and previous insulin therapy (R² = 0.165; OR 4.670, 95% CI 2.257-9.665; P < 0.001). High C-peptide levels, a shorter duration of T2D, and the absence of insulin therapy favored remission. Different multivariate logistic regression models were designed. When considering sex, T2D duration, and insulin treatment, remission was correctly predicted in 72.4% of cases. The model that included age, FG and C-peptide levels resulted in 83.7% correct classifications. When sex, FG, C-peptide, insulin treatment, and percentage weight loss were considered, correct classification of T2D remission was achieved in 95.9% of cases. Preoperative characteristics determine T2D remission rates after BS to different extents. The use of statistical models may help clinicians reliably predict T2D remission rates after BS.
© 2014 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
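A logistic model like those in this abstract scores a patient as p = 1/(1 + exp(-z)), where z is a linear combination of predictors. A hedged sketch: the intercept and the coding of the insulin variable below are invented for illustration (only the odds-ratio magnitudes come from the abstract, and their signs follow its stated directions).

```python
import math

# Hypothetical coefficients: log odds ratios oriented so that higher C-peptide
# helps remission, longer T2D duration hurts, and being on insulin hurts,
# matching the directions reported in the abstract. Intercept is illustrative.
COEF = {
    "intercept": 0.5,
    "c_peptide": math.log(1.652),   # per unit C-peptide
    "t2d_years": math.log(0.869),   # per year of T2D duration
    "on_insulin": -math.log(4.670), # 1 if on preoperative insulin, else 0
}

def remission_probability(c_peptide, t2d_years, on_insulin):
    """Logistic prediction p = 1 / (1 + exp(-z)) for 1-year T2D remission."""
    z = (COEF["intercept"]
         + COEF["c_peptide"] * c_peptide
         + COEF["t2d_years"] * t2d_years
         + COEF["on_insulin"] * on_insulin)
    return 1.0 / (1.0 + math.exp(-z))
```

A favorable profile (high C-peptide, short duration, no insulin) should score higher than an unfavorable one, which is all this sketch is meant to show.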

  9. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make, in this case, to the 6-h forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 h, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-h model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring only a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal, and semidiurnal model biases in GFS to reduce both systematic and random errors. As short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-h forecast.
This approach can then be used to guide and optimize the design of sub-grid-scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements, which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
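The core of the scheme is arithmetic: average the analysis increments, divide by the 6-h window to get a bias tendency, and add that tendency as a forcing term at each model step. A minimal sketch with a scalar state variable (real GFS fields are 3-D, and the bias varies by season and time of day):

```python
def bias_tendency(analysis_increments, window_hours=6.0):
    """Time-mean analysis increment divided by the assimilation window,
    interpreted as a model bias tendency under linear short-term error growth."""
    return sum(analysis_increments) / len(analysis_increments) / window_hours

def corrected_step(state, model_tendency, correction, dt_hours):
    """One forward step with the empirical bias correction added as forcing."""
    return state + (model_tendency + correction) * dt_hours

# Hypothetical increments (model too cold by ~0.6 units per 6-h cycle).
b = bias_tendency([0.6, 0.6, 0.6])
```

The sign convention matters: increments are analysis minus forecast, so a persistently positive increment means the model drifts low and the correction pushes it back up.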

  10. Brief communication: Drought likelihood for East Africa

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Huntingford, Chris

    2018-02-01

The East Africa drought in the autumn of 2016 caused malnutrition, illness and death. Close to 16 million people across Somalia, Ethiopia and Kenya needed food, water and medical assistance. Many factors influence drought stress and response. However, inevitably the following question is asked: are elevated greenhouse gas concentrations altering the frequency of extreme rainfall deficits? We investigate this with general circulation models (GCMs). After GCM bias correction to match the climatological mean of the CHIRPS data-based rainfall product, climate models project small decreases in the probability of drought with the same (or worse) severity as the 2016 ASO (August to October) East African event by the end of the 21st century, compared with the probabilities for the present day. However, when the climatological variability of the GCMs is further adjusted to match the CHIRPS data, by additionally bias-correcting for variance, the probability of drought occurrence increases slightly over the same period.

  11. Analysis of fatigue reliability for high temperature and high pressure multi-stage decompression control valve

    NASA Astrophysics Data System (ADS)

    Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang

    2018-03-01

A reliability mathematical model for the high temperature and high pressure multi-stage decompression control valve (HMDCV) is established based on stress-strength interference theory, introducing a temperature correction coefficient to revise the material fatigue limit at high temperature. The reliability of the key high-risk components and the fatigue sensitivity curve of each component are calculated and analyzed by combining fatigue-life analysis of the control valve with reliability theory, and the proportional impact of each component on fatigue failure of the control valve system is obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure-bearing parts meets the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; stress concentration in key parts of the control valve can be reduced during the design process by improving the structure.
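In stress-strength interference theory, reliability is the probability that strength exceeds stress, R = P(S > s). For normally distributed stress and strength this is a closed form, and the paper's temperature correction can be sketched as a derating factor on the fatigue-limit (strength) mean. The factor value and all parameters below are illustrative, not the paper's:

```python
import math

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress,
                             temp_factor=1.0):
    """Stress-strength interference with normal variables: R = Phi(z).
    temp_factor < 1 derates the fatigue-limit mean at high temperature,
    a sketch of the paper's temperature correction coefficient."""
    mu_s = mu_strength * temp_factor
    z = (mu_s - mu_stress) / math.sqrt(sd_strength**2 + sd_stress**2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
```

Derating the strength for temperature lowers z and hence the computed reliability, which is why ignoring the correction overestimates component life.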

  12. Effects of solvent composition in the normal-phase liquid chromatography of alkylphenols and naphthols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurtubise, R.J.; Hussain, A.; Silver, H.F.

    1981-11-01

The normal-phase liquid chromatographic models of Scott, Snyder, and Soczewinski were considered for a μ-Bondapak NH2 stationary phase. n-Heptane:2-propanol and n-heptane:ethyl acetate mobile phases of different compositions were used. Linear relationships were obtained from graphs of log K' vs. log mole fraction of the strong solvent for both n-heptane:2-propanol and n-heptane:ethyl acetate mobile phases. A linear relationship was obtained between the reciprocal of corrected retention volume and % wt/v of 2-propanol, but not between the reciprocal of corrected retention volume and % wt/v of ethyl acetate. The slopes and intercept terms from the Snyder and Soczewinski models were found to approximately describe interactions with μ-Bondapak NH2. Capacity factors can be predicted for the compounds by using the equations obtained from mobile-phase composition variation experiments.

  13. A probabilistic model for deriving soil quality criteria based on secondary poisoning of top predators. I. Model description and uncertainty analysis.

    PubMed

    Traas, T P; Luttik, R; Jongbloed, R H

    1996-08-01

In previous studies, the risk of toxicant accumulation in food chains was used to calculate quality criteria for surface water and soil. A simple algorithm was used to calculate maximum permissible concentrations [MPC = no-observed-effect concentration/bioconcentration factor (NOEC/BCF)]. These studies were limited to simple food chains. This study presents a method to calculate MPCs for more complex food webs of predators, expanding the previous method. First, toxicity data (NOECs) for several compounds were corrected for differences between laboratory animals and animals in the wild. Second, for each compound, these NOECs were assumed to be a sample from a log-logistic distribution of mammalian and avian NOECs. Third, bioaccumulation factors (BAFs) for major food items of predators were collected and were assumed to derive from different log-logistic distributions of BAFs. Fourth, MPCs for each compound were calculated using Monte Carlo sampling from the NOEC and BAF distributions. An uncertainty analysis for cadmium was performed to identify the most uncertain parameters of the model. Model analysis indicated that most of the prediction uncertainty of the model can be ascribed to uncertainty in species sensitivity as expressed by the NOECs; a very small proportion of model uncertainty is contributed by the BAFs from food webs. Correction factors for the conversion of NOECs from laboratory conditions to the field have some influence on the final value of the MPC, but the total prediction uncertainty of the MPC is quite large. It is concluded that the uncertainty in species sensitivity is substantial; to avoid unethical toxicity testing with mammalian or avian predators, the proposed method for calculating MPC distributions cannot avoid carrying this uncertainty. The fifth percentile of the MPC is suggested as a safe value for top predators.
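The Monte Carlo step above can be sketched directly: draw NOEC and BAF from log-logistic distributions (sampled here by inverse CDF), form MPC = NOEC/BAF for each draw, and read off the fifth percentile. All distribution parameters below are invented for illustration:

```python
import random

def sample_loglogistic(scale, shape, rng):
    """Inverse-CDF draw from a log-logistic distribution: x = scale*(u/(1-u))^(1/shape)."""
    u = rng.random()
    return scale * (u / (1.0 - u)) ** (1.0 / shape)

def mpc_samples(noec_scale, noec_shape, baf_scale, baf_shape, n=10_000, seed=1):
    """Monte Carlo MPC = NOEC / BAF, with both drawn from log-logistic fits."""
    rng = random.Random(seed)
    return sorted(sample_loglogistic(noec_scale, noec_shape, rng)
                  / sample_loglogistic(baf_scale, baf_shape, rng)
                  for _ in range(n))

# Hypothetical parameters (scale ~ median, shape controls spread).
mpcs = mpc_samples(10.0, 4.0, 2.0, 3.0)
mpc_p5 = mpcs[len(mpcs) // 20]   # fifth percentile, the suggested safe value
```

Because both inputs are skewed distributions rather than point values, the MPC comes out as a distribution too, which is exactly what lets the fifth percentile be quoted as a protective criterion.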

  14. An entropy correction method for unsteady full potential flows with strong shocks

    NASA Technical Reports Server (NTRS)

    Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.

    1986-01-01

    An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.

  15. Characterization of HPGe gamma spectrometric detectors systems for Instrumental Neutron Activation Analysis (INAA) at the Colombian Geological Survey

    NASA Astrophysics Data System (ADS)

    Sierra, O.; Parrado, G.; Cañón, Y.; Porras, A.; Alonso, D.; Herrera, D. C.; Peña, M.; Orozco, J.

    2016-07-01

This paper presents the progress made by the Neutron Activation Analysis (NAA) laboratory at the Colombian Geological Survey (SGC in its Spanish acronym) towards the characterization of its gamma spectrometric systems for Instrumental Neutron Activation Analysis (INAA), with the aim of correcting the measurements for variations in sample geometry. Characterization includes the empirical determination of the interaction point of gamma radiation inside the germanium crystal, through the application of a linear model and the use of the fast Monte Carlo N-Particle (MCNP) software to estimate correction factors for differences in counting efficiency that arise from variations in density between samples and standards.

  16. Correction factors for self-selection when evaluating screening programmes.

    PubMed

    Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H

    2016-03-01

In screening programmes there is a recognized bias introduced through participant self-selection (the healthy screenee bias). Methods used to evaluate screening programmes include intention-to-screen, per-protocol, and the "post hoc" approach in which, after screening has been introduced for everyone, the only evaluation option is participants versus non-participants. All of these methods are prone to self-selection bias. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias. Simple calculations revealed that these corrections are actually all identical and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. Using the German Neuroblastoma Screening Study as an example, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach; however, in this situation not all required data are available, and external data or further assumptions may be required for estimation. © The Author(s) 2015.

  17. Dimensions of vegetable parenting practices among preschoolers.

    PubMed

    Baranowski, Tom; Chen, Tzu-An; O'Connor, Teresia; Hughes, Sheryl; Beltran, Alicia; Frankel, Leslie; Diep, Cassandra; Baranowski, Janice C

    2013-10-01

The objective of this study was to determine the factor structure of 31 effective and ineffective vegetable parenting practices used by parents of preschool children, based on three theoretically proposed factors: responsiveness, control and structure. The methods employed included both corrected item-total correlations and confirmatory factor analysis. Acceptable fit was obtained only when effective and ineffective parenting practices were analyzed separately. Among effective items, the model included one second-order factor (effectiveness) and the three proposed first-order factors. The same structure was revealed among ineffective items, but correlated paths had to be specified among items. A theoretically specified three-factor structure was thus obtained among the 31 vegetable parenting practice items, but effective and ineffective items had to be analyzed separately. Research is needed on how these parenting practice factors predict child vegetable intake. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.
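The weighting step described above is a spectrum-weighted average: multiply each mono-energetic Monte Carlo correction factor k(E) by the measured air-kerma spectral weight w(E) and normalize. A minimal sketch with invented two-bin numbers:

```python
def spectrum_weighted_factor(spectrum, mono_factors):
    """Weight mono-energetic correction factors k(E) (from Monte Carlo) with a
    measured air-kerma spectrum w(E) to obtain one factor per x-ray quality.

    spectrum: {energy_keV: air-kerma weight}, mono_factors: {energy_keV: k(E)}.
    """
    total = sum(spectrum.values())
    return sum(w * mono_factors[e] for e, w in spectrum.items()) / total

# Hypothetical 2-bin air-kerma spectrum and mono-energetic correction factors.
k = spectrum_weighted_factor({50: 1.0, 100: 3.0}, {50: 1.01, 100: 1.02})
```

Because the mono-energetic factors are computed once, the same table serves any x-ray quality for which a spectrum is measured, which is what frees the chamber from the fixed set of calibration qualities.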

  19. The relationship between tree growth patterns and likelihood of mortality: A study of two tree species in the Sierra Nevada

    USGS Publications Warehouse

    Das, A.J.; Battles, J.J.; Stephenson, N.L.; van Mantgem, P.J.

    2007-01-01

We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ≥20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk. © 2007 NRC.

  20. Terrestrial Gamma Radiation Dose Rate of West Sarawak

    NASA Astrophysics Data System (ADS)

    Izham, A.; Ramli, A. T.; Saridan Wan Hassan, W. M.; Idris, H. N.; Basri, N. A.

    2017-10-01

A study of the terrestrial gamma radiation (TGR) dose rate was conducted in the west of Sarawak, covering the Kuching, Samarahan, Serian, Sri Aman, and Betong divisions, to construct baseline TGR dose rate data for the areas. The total area covered was 20,259.2 km2. In-situ measurements of the TGR dose rate were taken approximately 1 m above ground level using a Ludlum 19 micro-R meter with a NaI(Tl) scintillation detector. Twenty-nine soil samples were taken across the 5 divisions, covering 26 pairings of 9 geological formations and 7 soil types. A hyperpure germanium detector was then used to determine the samples' 238U, 232Th, and 40K radionuclide concentrations, producing a correction factor Cf = 0.544. A total of 239 measured data were corrected with Cf, resulting in a mean Dm of 47 ± 1 nGy h-1, with a range of 5-103 nGy h-1. A multiple regression analysis of the geological means and soil-type means against the corrected TGR dose rate Dm generated the prediction model Dg,s = 0.847Dg + 0.637Ds - 22.313, with a normalized beta equation of Dg,s = 0.605Dg + 0.395Ds. The model has an 84.6% acceptance of the Mann-Whitney test null hypothesis when tested against the corrected TGR dose rates.
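The correction and the regression model can be written out directly. A sketch, assuming (as the abstract implies but does not state) that Cf is applied as a simple multiplier to each NaI(Tl) reading; the input values below are invented:

```python
CF = 0.544  # correction factor reported from the HPGe radionuclide analysis

def corrected_tgr(measured_nGy_per_h):
    """Apply the survey correction factor to an in-situ NaI(Tl) dose-rate
    reading (sketch: assumes the correction is a simple multiplier)."""
    return CF * measured_nGy_per_h

def predicted_tgr(d_geology, d_soil):
    """Regression model from the abstract: Dg,s = 0.847*Dg + 0.637*Ds - 22.313,
    where Dg and Ds are the geological and soil-type mean dose rates (nGy/h)."""
    return 0.847 * d_geology + 0.637 * d_soil - 22.313
```

The normalized beta form (0.605 vs. 0.395) says geology carries more predictive weight than soil type, even though the raw coefficients are closer in size.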

  1. Improved atmospheric correction and chlorophyll-a remote sensing models for turbid waters in a dusty environment

    NASA Astrophysics Data System (ADS)

    Al Shehhi, Maryam R.; Gherboudj, Imen; Zhao, Jun; Ghedira, Hosni

    2017-11-01

This study presents a comprehensive assessment of the performance of the commonly used atmospheric correction models (NIR, SWIR, NIR-SWIR and FM) and ocean color products (OC3 and OC2) derived from MODIS images over the Arabian Gulf, Sea of Oman, and Arabian Sea. The atmospheric correction models considered have been used to derive MODIS normalized water-leaving radiances (nLw), which are compared to in situ nLw(λ) data collected at different locations by the Masdar Institute, United Arab Emirates, and from the AERONET-OC (the ocean color component of the Aerosol Robotic Network) database. From this comparison, the NIR model has been found to be the best performing of the considered atmospheric correction models, though it still shows disparity, especially at short wavelengths (400-500 nm), under high aerosol optical depth conditions (AOT(869) > 0.3) and over turbid waters. To reduce the error induced by these factors, a modified model taking the atmospheric and water turbidity conditions into consideration has been proposed. A turbidity index was used to identify turbid water, and a threshold of AOT(869) = 0.3 was used to identify a dusty atmosphere. Despite improved results in the MODIS nLw(λ) using the proposed approach, the Chl-a models (OC3 and OC2) show low performance when compared to the in situ Chl-a measurements collected during several field campaigns organized by local, regional and international organizations. This discrepancy might be caused by improper parametrization of these models and/or improper selection of bands. Thus, an adaptive power-fit algorithm (R2 = 0.95) has been proposed to improve the estimation of Chl-a concentrations from 0.07 to 10 mg/m3 by using a new blue/red MODIS band ratio, (443,488)/645, instead of the default band ratio used for OC3, (443,488)/547.
The new band ratio (443,488)/645 was selected because the 645 nm band has been found to represent both water turbidity and algal absorption.

  2. Developing effective warning systems: Ongoing research at Ruapehu volcano, New Zealand

    NASA Astrophysics Data System (ADS)

    Leonard, Graham S.; Johnston, David M.; Paton, Douglas; Christianson, Amy; Becker, Julia; Keys, Harry

    2008-05-01

Purpose: This paper examines the unique challenges to volcanic risk management associated with having a ski area on an active volcano. Using a series of simulated eruption/lahar events at Ruapehu volcano, New Zealand, as a context, a model of risk management that integrates warning system design and technology, risk perceptions and the human response is explored. Principal results: Despite increases in the observed audibility and comprehension of the warning message, recall of public education content, and people's awareness of volcanic risk, a persistent minority of the public continued to demonstrate only moderate awareness of the correct actions to take during a warning and failed to respond effectively. A relationship between level of staff competence and correct public response allowed the level of public response to be used to identify residual risk and additional staff training needs. The quality of staff awareness, action and decision-making has emerged as a critical factor, from detailed staff and public interviews and from exercise observations. Staff actions are especially important for mobilising correct public response at Ruapehu ski areas due to the transient nature of the visitor population. Introduction of education material and staff training strategies that included the development of emergency decision-making competencies improved knowledge of correct actions, and increased the proportion of people moving out of harm's way during blind tests. Major conclusions: Warning effectiveness is a function of more than good hazard knowledge and the generation and notification of an early warning message. For warning systems to be effective, these factors must be complemented by accurate knowledge of risk and risk management actions. By combining the Ruapehu findings with those of other warning system studies in New Zealand, and internationally, a practical five-step model for effective early warning systems is discussed.
These steps must be based upon sound and regularly updated underpinning science and be tied to formal effectiveness evaluation, which is fed back into system improvements. The model presented emphasises human considerations, the development of which arguably requires even more effort than the hardware components of early warning systems.

  3. A comparison of methods for adjusting biomarkers of iron, zinc, and selenium status for the effect of inflammation in an older population: a case for interleukin 6.

    PubMed

    MacDonell, Sue O; Miller, Jody C; Harper, Michelle J; Reid, Malcolm R; Haszard, Jillian J; Gibson, Rosalind S; Houghton, Lisa A

    2018-05-14

    Older people are at risk of micronutrient deficiencies, which can be under- or overestimated in the presence of inflammation. Several methods have been proposed to adjust for the effect of inflammation; however, to our knowledge, none have been investigated in older adults in whom chronic inflammation is common. We investigated the influence of various inflammation-adjustment methods on micronutrient biomarkers associated with anemia in older people living in aged-care facilities in New Zealand. Blood samples were collected from 289 New Zealand aged-care residents aged >65 y. Serum ferritin, soluble transferrin receptor (sTfR), total body iron (TBI), plasma zinc, and selenium as well as the inflammatory markers high-sensitivity C-reactive protein (CRP), α1-acid glycoprotein (AGP), and interleukin 6 (IL-6) were measured. Four adjustment methods were applied to micronutrient concentrations: 1) internal correction factors based on stages of inflammation defined by CRP and AGP, 2) external correction factors derived from the literature, 3) a regression correction model in which reference CRP and AGP were set to the maximum of the lowest decile, and 4) a regression correction model in which reference IL-6 was set to the maximum of the lowest decile. Forty percent of participants had elevated concentrations of CRP, AGP, or both, and 37% of participants had higher than normal concentrations of IL-6. Adjusted geometric mean values for serum ferritin, sTfR, and TBI were significantly lower (P < 0.001), and plasma zinc and selenium were significantly higher (P < 0.001), than the unadjusted values regardless of the method applied. The greatest inflammation adjustment was observed with the regression correction that used IL-6. Subsequently, the prevalence of zinc and selenium deficiency decreased (-13% and -14%, respectively; P < 0.001), whereas iron deficiency remained unaffected. 
Adjustment for inflammation should be considered when evaluating micronutrient status in this aging population group; however, the approaches used require further investigation, particularly the influence of adjustment for IL-6.
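The regression-correction approach (methods 3 and 4 above) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' analysis: the sample values and coefficients are invented, ferritin is chosen only as an example acute-phase-responsive biomarker, and the reference levels are set, as in the abstract, to the maximum of the lowest decile of each inflammation marker.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 289
ln_crp = rng.normal(0.5, 1.0, n)     # log CRP (hypothetical)
ln_agp = rng.normal(0.0, 0.5, n)     # log AGP (hypothetical)
# Ferritin rises with inflammation (acute-phase reactant); synthetic data
ln_ferritin = 3.0 + 0.30 * ln_crp + 0.20 * ln_agp + rng.normal(0, 0.4, n)

# Fit ln(biomarker) ~ ln(CRP) + ln(AGP) by ordinary least squares
X = np.column_stack([np.ones(n), ln_crp, ln_agp])
beta, *_ = np.linalg.lstsq(X, ln_ferritin, rcond=None)

# Reference values: maximum of the lowest decile of each inflammation marker
ref_crp = np.quantile(ln_crp, 0.10)
ref_agp = np.quantile(ln_agp, 0.10)

# Subtract the estimated inflammation effect only above the reference level
adj = ln_ferritin.copy()
m_crp = ln_crp > ref_crp
m_agp = ln_agp > ref_agp
adj[m_crp] -= beta[1] * (ln_crp[m_crp] - ref_crp)
adj[m_agp] -= beta[2] * (ln_agp[m_agp] - ref_agp)

geo_mean_raw = np.exp(ln_ferritin.mean())
geo_mean_adj = np.exp(adj.mean())
```

Because ferritin responds positively to inflammation, the adjusted geometric mean falls below the unadjusted one, matching the direction of change reported in the abstract.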

  4. Performance prediction using geostatistics and window reservoir simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontanilla, J.P.; Al-Khalawi, A.A.; Johnson, S.G.

    1995-11-01

This paper is the first window model study in the northern area of a large carbonate reservoir in Saudi Arabia. It describes window reservoir simulation with geostatistics to model uneven water encroachment in the southwest producing area of the northern portion of the reservoir. In addition, this paper describes performance predictions that investigate the sweep efficiency of the current peripheral waterflood. A 50 x 50 x 549 (240 m. x 260 m. x 0.15 m. average grid block size) geological model was constructed with geostatistics software. Conditional simulation was used to obtain spatial distributions of porosity and volume of dolomite. Core data transforms were used to obtain horizontal and vertical permeability distributions. Simple averaging techniques were used to convert the 549-layer geological model to a 50 x 50 x 10 (240 m. x 260 m. x 8 m. average grid block size) window reservoir simulation model. Flux injectors and flux producers were assigned to the outermost grid blocks. Historical boundary flux rates were obtained from a coarsely-gridded full-field model. Pressure distribution, water cuts, GORs, and recent flowmeter data were history matched. Permeability correction factors and numerous parameter adjustments were required to obtain the final history match. The permeability correction factors were based on pressure transient permeability-thickness analyses. The prediction phase of the study evaluated the effects of infill drilling, the use of artificial lift, workovers, horizontal wells, producing rate constraints, and tight zone development to formulate depletion strategies for the development of this area. The window model will also be used to investigate day-to-day reservoir management problems in this area.
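The conversion of the 549-layer geological model to a 10-layer simulation model rests on simple averaging. A minimal sketch with synthetic layer properties is shown below; the averaging choices (thickness-weighted arithmetic mean for porosity and horizontal permeability, harmonic mean for vertical permeability) are standard upscaling practice and an assumption here, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fine, n_coarse = 549, 10
h = np.full(n_fine, 0.15)                  # fine-layer thickness (m)
phi = rng.uniform(0.05, 0.30, n_fine)      # porosity (synthetic)
k = 10 ** rng.normal(1.0, 0.5, n_fine)     # permeability (mD), log-normal

coarse = []
for idx in np.array_split(np.arange(n_fine), n_coarse):
    hi = h[idx]
    coarse.append({
        "h": hi.sum(),                                # total coarse thickness
        "phi": np.average(phi[idx], weights=hi),      # thickness-weighted porosity
        "kh": np.average(k[idx], weights=hi),         # arithmetic mean: horizontal flow
        "kv": hi.sum() / (hi / k[idx]).sum(),         # harmonic mean: vertical flow
    })
```

The harmonic mean is always at or below the arithmetic mean, so the upscaled vertical permeability never exceeds the horizontal value within a layer group.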

  5. Radiative corrections to the quark masses in the ferromagnetic Ising and Potts field theories

    NASA Astrophysics Data System (ADS)

    Rutkevich, Sergei B.

    2017-10-01

We consider the Ising Field Theory (IFT), and the 3-state Potts Field Theory (PFT), which describe the scaling limits of the two-dimensional lattice q-state Potts model with q = 2, and q = 3, respectively. At zero magnetic field h = 0, both field theories are integrable away from the critical point, have q degenerate vacua in the ferromagnetic phase, and q (q - 1) particles of the same mass - the kinks interpolating between two different vacua. Application of a weak magnetic field induces confinement of kinks into bound states - the "mesons" (for q = 2, 3) consisting predominantly of two kinks, and "baryons" (for q = 3), which are essentially three-kink excitations. The kinks in the confinement regime are also called "quarks". We review and refine the Form Factor Perturbation Theory (FFPT), adapting it to the analysis of the confinement problem in the limit of small h, and apply it to calculate the corrections to the kink (quark) masses induced by the multi-kink fluctuations caused by the weak magnetic field. It is shown that the subleading third-order ∼h3 correction to the kink mass vanishes in the IFT. The leading second-order ∼h2 correction to the kink mass in the 3-state PFT is estimated by truncating the infinite form factor expansion at the first term, which represents the contribution of two-kink fluctuations to the kink self-energy.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cartas-Fuentevilla, Roberto; Escalante, Alberto; Germán, Gabriel

Following recent studies which show that it is possible to localize gravity as well as scalar and gauge vector fields in a tachyonic de Sitter thick braneworld, we investigate the solution of the gauge hierarchy problem, the localization of fermion fields in this model, the recovery of the Coulomb law in the non-relativistic limit of the Yukawa interaction between bulk fermions and gauge bosons localized in the brane, and confront the predicted 5D corrections to the photon mass with its upper experimental/observational bounds, finding the model physically viable since it passes these tests. In order to achieve the latter aims, we first consider the Yukawa interaction term between the fermionic and the tachyonic scalar fields MF(T)ΨΨ-bar in the action and analyze four distinct tachyonic functions F(T) that lead to four different structures of the respective fermionic mass spectra with different physics. In particular, localization of the massless left-chiral fermion zero mode is possible for three of these cases. We further analyze the phenomenology of these Yukawa interactions among fermion fields and gauge bosons localized on the brane and obtain the crucial and necessary information to compute the corrections to Coulomb’s law coming from massive KK vector modes in the non-relativistic limit. These corrections are exponentially suppressed due to the presence of the mass gap in the mass spectrum of the bulk gauge vector field. From our results we conclude that corrections to Coulomb’s law in the thin brane limit have the same form (up to a numerical factor) as far as the left-chiral massless fermion field is localized on the brane. Finally we compute the corrections to the Coulomb’s law for an arbitrarily thick brane scenario which can be interpreted as 5D corrections to the photon mass.
By performing consistent estimations with brane phenomenology, we find that the predicted corrections to the photon mass fall well below the experimentally observed or astrophysically inferred upper bound, positively testing the viability of our tachyonic braneworld. Moreover, the 5D parameters that define these corrections are of the same order, lending naturalness to our model; however, a fine-tuning between them is needed in order to fit the corresponding upper bound on the photon mass.

  7. Nonlinear Errors Resulting from Ghost Reflection and Its Coupling with Optical Mixing in Heterodyne Laser Interferometers

    PubMed Central

    Fu, Haijin; Wang, Yue; Tan, Jiubin; Fan, Zhigang

    2018-01-01

Even after the Heydemann correction, residual nonlinear errors, ranging from hundreds of picometers to several nanometers, are still found in heterodyne laser interferometers. This is a crucial factor impeding the realization of picometer-level metrology, but its source and mechanism have barely been investigated. To study this problem, a novel nonlinear model based on optical mixing and its coupling with ghost reflection is proposed and then verified by experiments. Investigation of this new model indicates that the additional high-order and negative-order nonlinear harmonics arising from ghost reflection and its coupling with optical mixing make only a negligible contribution to the overall nonlinear error. In real applications, any effect on the Lissajous trajectory might be invisible due to the small ghost reflectance. However, even a tiny ghost reflection can significantly worsen the effectiveness of the Heydemann correction, or even make it completely ineffective, i.e., compensation makes the error larger rather than smaller. Moreover, the residual nonlinear error after correction is dominated only by the ghost reflectance. PMID:29498685
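For context, a Heydemann-type correction removes offset, gain-imbalance, and quadrature-phase errors from the two interference signals so that the Lissajous trajectory becomes a unit circle. The sketch below uses a simplified moment-based estimate of those parameters on synthetic signals (the published method fits an ellipse by least squares); the ghost reflection studied in the paper is deliberately omitted.

```python
import numpy as np

# Synthetic quadrature signals with offsets, gain imbalance and phase error
phi = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
x = np.cos(phi) + 0.05                    # offset error p
y = 1.2 * np.sin(phi + 0.1) - 0.03        # gain r, phase error alpha, offset q

# Estimate and remove offsets, then gains and the quadrature phase error
x1, y1 = x - x.mean(), y - y.mean()
gx = np.sqrt(2 * np.mean(x1 ** 2))        # amplitude of x channel
gy = np.sqrt(2 * np.mean(y1 ** 2))        # amplitude of y channel
sin_a = 2 * np.mean(x1 * y1) / (gx * gy)  # estimated quadrature phase error
xc = x1 / gx
yc = (y1 / gy - xc * sin_a) / np.sqrt(1 - sin_a ** 2)

radius = np.hypot(xc, yc)                 # ideally 1 everywhere after correction
```

On this ideal two-signal model the corrected trajectory is an exact unit circle; the paper's point is that ghost reflection breaks precisely this property.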

  8. Calibration of piezoelectric RL shunts with explicit residual mode correction

    NASA Astrophysics Data System (ADS)

    Høgsberg, Jan; Krenk, Steen

    2017-01-01

    Piezoelectric RL (resistive-inductive) shunts are passive resonant devices used for damping of dominant vibration modes of a flexible structure and their efficiency relies on the precise calibration of the shunt components. In the present paper improved calibration accuracy is attained by an extension of the local piezoelectric transducer displacement by two additional terms, representing the flexibility and inertia contributions from the residual vibration modes not directly addressed by the shunt damping. This results in an augmented dynamic model for the targeted resonant vibration mode, in which the residual contributions, represented by two correction factors, modify both the apparent transducer capacitance and the shunt circuit impedance. Explicit expressions for the correction of the shunt circuit inductance and resistance are presented in a form that is generally applicable to calibration formulae derived on the basis of an assumed single-mode structure, where modal interaction has been neglected. A design procedure is devised and subsequently verified by a numerical example, which demonstrates that effective mitigation can be obtained for an arbitrary vibration mode when the residual mode correction is included in the calibration of the RL shunt.

  9. Damage Detection for Historical Architectures Based on Tls Intensity Data

    NASA Astrophysics Data System (ADS)

    Li, Q.; Cheng, X.

    2018-04-01

TLS (Terrestrial Laser Scanner) has long been preferred in the cultural heritage field for 3D documentation of historical sites thanks to its ability to acquire geometric information without any physical contact. Besides the geometric information, most TLS systems also record intensity information, which is considered an important measure of the spectral properties of the scanned surface. Recent studies have shown the potential of using intensity for damage detection. However, the original intensity is affected by the scanning geometry, such as range and incidence angle, and by other factors, making the results less accurate. Therefore, in this paper, we present a method to detect certain damage areas using corrected intensity data. First, two data-driven models are developed to correct the range and incidence angle effects. The corrected intensity is then used to generate 2D intensity images for classification. After the damage areas are detected, they are re-projected onto the 3D point cloud for better visual representation and further investigation. The experimental results indicate the feasibility and validity of the corrected intensity for damage detection.
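The range and incidence-angle correction described above can be illustrated with a toy model. The inverse-square range dependence and Lambertian (cosine) angle dependence assumed below are common first-order models, not the paper's fitted data-driven corrections, and the reference geometry values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
rho = rng.uniform(0.2, 0.8, n)              # true surface reflectance
r = rng.uniform(5, 50, n)                   # range (m)
inc = rng.uniform(0, np.deg2rad(60), n)     # incidence angle (rad)

# Toy raw intensity: inverse-square range loss, Lambertian angle loss
raw = rho * np.cos(inc) / r ** 2

# Correct every return to a common reference geometry
R_REF, INC_REF = 10.0, 0.0
corrected = raw * (r / R_REF) ** 2 * np.cos(INC_REF) / np.cos(inc)
```

After correction the intensity depends on reflectance alone, which is what makes it usable for classifying damaged versus sound surfaces.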

  10. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model estimated by principal component analysis. Using the extracted factors, we develop a novel forecasting method called sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. Projected principal component analysis is employed to enhance the accuracy of the inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between sufficient forecasting and the deep learning architecture is explicitly stated. Sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions, as well as for the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of the target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that sufficient forecasting improves upon linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
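The baseline step of the method, extracting factors by principal components and regressing the target on them, can be sketched on synthetic data. The full sufficient-forecasting procedure additionally estimates sufficient predictive indices to capture nonlinear links; the dimensions and coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
T, p, K = 200, 100, 2
F = rng.normal(size=(T, K))                      # latent factors
Lam = rng.normal(size=(p, K))                    # factor loadings
X = F @ Lam.T + 0.1 * rng.normal(size=(T, p))    # high-dimensional predictors
y = np.r_[0.0, 1.5 * F[:-1, 0] - F[:-1, 1]] + 0.1 * rng.normal(size=T)

# Step 1: estimate factors by principal components of the centered panel
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = U[:, :K] * s[:K]                         # factor estimates (up to rotation)

# Step 2: linear forecast -- regress next-period target on current factors
A = np.column_stack([np.ones(T - 1), F_hat[:-1]])
coef, *_ = np.linalg.lstsq(A, y[1:], rcond=None)
y_hat = A @ coef
r2 = 1 - ((y[1:] - y_hat) ** 2).sum() / ((y[1:] - y[1:].mean()) ** 2).sum()
```

Even though PCA recovers the factors only up to rotation, the linear forecast is invariant to that rotation, so the fit is essentially as good as with the true factors.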

  11. Study of model of anesthesia related adverse event by incident report at King Chulalongkorn Memorial Hospital.

    PubMed

    Narasethkamol, Arunchai; Charuluxananan, Somrat; Kyokong, Oranuch; Premsamran, Porntep; Kundej, Sarawut

    2011-01-01

As a site of the Thai Anesthesia Incidents Monitoring Study (Thai AIMS), the authors continued the collection of incident reports to determine the frequency, clinical course, contributing factors, and factors minimizing adverse events, and to investigate a model appropriate for possible corrective strategies in a Thai university hospital. A standardized anesthesia incident report form that included closed-end and open-end questions was provided to the attending anesthesia personnel of King Chulalongkorn Memorial Hospital between January 1 and December 31, 2007. They completed it on a voluntary and anonymous basis. Each incident report was reviewed by three reviewers, and any disagreement was discussed to achieve a consensus. One hundred sixty-three incident reports were filed, reporting 191 incidents. There were fewer male (44%) than female (56%) patients, with ASA physical status classifications of 1 (41%), 2 (43%), 3 (10%), 4 (4%) and 5 (2%). Surgical specialties that posed a high risk of incidents were general, orthopedic, gynecological, otorhino-laryngological and urological surgery. Locations of incidents were the operating room (85%), ward (8%) and recovery room (2%). The common adverse incidents were oxygen desaturation (23%), arrhythmia needing treatment (14%), equipment malfunction (13%), drug error (9%), difficult intubation (6%), esophageal intubation (5%), cardiac arrest (5%), reintubation (4%), and endobronchial intubation (4%). Adverse events were detected by monitoring only (27%), by monitoring before clinical diagnosis (26%), by clinical diagnosis before monitoring (21%), and by clinical diagnosis only (26%). Incidents were attributed to anesthesia-related factors (73%) or system factors (16%), and 47% were considered preventable. Common factors related to incidents were inexperience, lack of vigilance, haste, inappropriate decision-making, non-compliance with guidelines, and lack of equipment maintenance.
Suggested corrective strategies were quality assurance activity, additional training, clinical practice guidelines, equipment maintenance, and improvement of supervision.

  12. Investigation of under-ascertainment in epidemiological studies based in general practice.

    PubMed

    Sethi, D; Wheeler, J; Rodrigues, L C; Fox, S; Roderick, P

    1999-02-01

    One of the aims of the Study of Infectious Intestinal Disease (IID) in England is to estimate the incidence of IID presenting to general practice. This sub-study aims to estimate and correct the degree of under-ascertainment in the national study. Cases of presumed IID which presented to general practice in the national study had been ascertained by their GP. In 26 general practices, cases with computerized diagnoses suggestive of IID were identified retrospectively. Cases which fulfilled the case definition of IID and should have been ascertained to the coordinating centre but were not, represented the under-ascertainment. Logistic regression modelling was used to identify independent factors which influenced under-ascertainment. The records of 2021 patients were examined, 1514 were eligible and should have been ascertained but only 974 (64%) were. There was variation in ascertainment between the practices (30% to 93%). Patient-related factors independently associated with ascertainment were: i) vomiting only as opposed to diarrhoea with and without vomiting (OR 0.37) and ii) consultation in the surgery as opposed to at home (OR 2.18). Practice-related factors independently associated with ascertainment were: i) participation in the enumeration study component (OR 1.78), ii) a larger number of partners (OR 0.3 for 7-8 partners); iii) rural location (OR 2.27) and iv) previous research experience (OR 1.92). Predicted ascertainment percentages were calculated according to practice characteristics. Under-ascertainment of IID was substantial (36%) and non-random and had to be corrected. Practice characteristics influencing variation in ascertainment were identified and a multivariate model developed to identify adjustment factors which could be applied to individual practices. Researchers need to be aware of factors which influence ascertainment in acute epidemiological studies based in general practice.
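Once ascertainment probabilities are predicted for each practice, the correction amounts to inverse-probability weighting. A toy sketch follows; the practice groups and counts are hypothetical (though they sum to the study's 974 ascertained of 1514 eligible cases), and the study itself predicted probabilities from a multivariate logistic model rather than from raw group rates.

```python
import numpy as np

# Hypothetical practice groups from a retrospective audit; counts are
# invented but sum to the study totals (974 ascertained of 1514 eligible)
eligible    = np.array([300, 400, 250, 300, 264])
ascertained = np.array([ 90, 320, 180, 250, 134])

p_hat = ascertained / eligible        # practice-level ascertainment probability
weights = 1.0 / p_hat                 # inverse-probability adjustment factors

# Corrected case counts: each ascertained case stands for 1/p cases
corrected = ascertained * weights
overall = ascertained.sum() / eligible.sum()
```

Weighting each ascertained case by 1/p recovers the eligible totals exactly by construction; with modelled probabilities the recovery is approximate.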

  13. Vertical spatial coherence model for a transient signal forward-scattered from the sea surface

    USGS Publications Warehouse

    Yoerger, E.J.; McDaniel, S.T.

    1996-01-01

    The treatment of acoustic energy forward scattered from the sea surface, which is modeled as a random communications scatter channel, is the basis for developing an expression for the time-dependent coherence function across a vertical receiving array. The derivation of this model uses linear filter theory applied to the Fresnel-corrected Kirchhoff approximation in obtaining an equation for the covariance function for the forward-scattered problem. The resulting formulation is used to study the dependence of the covariance on experimental and environmental factors. The modeled coherence functions are then formed for various geometrical and environmental parameters and compared to experimental data.

  14. Calibration of entrance dose measurement for an in vivo dosimetry programme.

    PubMed

    Ding, W; Patterson, W; Tremethick, L; Joseph, D

    1995-11-01

An increasing number of cancer treatment centres are using in vivo dosimetry as a quality assurance tool for verifying dose at either the entrance or exit surface of the patient undergoing external beam radiotherapy. Equipment is usually limited to either thermoluminescent dosimeters (TLDs) or semiconductor detectors such as p-type diodes. The semiconductor detector is more popular than the TLD owing to the major advantage of real-time analysis of the actual dose delivered. If a discrepancy is observed between the calculated and the measured entrance dose, it is possible to eliminate several likely sources of error by immediately verifying all treatment parameters. Five Scanditronix EDP-10 p-type diodes were investigated to determine their calibration and relevant correction factors for entrance dose measurements, using a Victoreen White Water-RW3 tissue-equivalent phantom and a 6 MV photon beam from a Varian Clinac 2100C linear accelerator. Correction factors were determined for individual diodes for the following parameters: source-to-surface distance (SSD), collimator size, wedge, plate (tray) and temperature. The directional dependence of the diode response was also investigated. The SSD correction factor (CSSD) was found to increase by approximately 3% over the range of SSDs from 80 to 130 cm. The correction factor for collimator size (Cfield) also varied by approximately 3% between 5 x 5 and 40 x 40 cm2. The wedge correction factor (Cwedge) and plate correction factor (Cplate) were found to be functions of collimator size. Over the range of measurement, these factors varied by a maximum of 1% and 1.5%, respectively. The Cplate variation between the solid and the drilled plates under the same irradiation conditions was a maximum of 2.4%. The diode sensitivity increased with temperature. A maximum variation of 2.5% in the directional dependence of the diode response was observed for angles of +/- 60 degrees.
In conclusion, in vivo dosimetry is an important and reliable method for checking the dose delivered to the patient. Preclinical calibration and determination of the relevant correction factors for each diode are essential in order to achieve a high accuracy of dose delivered to the patient.
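Once calibrated, the entrance dose follows from the diode reading by multiplying the calibration factor with the relevant correction factors. A minimal sketch is given below; the function name and the example numbers are illustrative only, since the paper determines each factor experimentally per diode.

```python
def entrance_dose(reading_gy, calibration, c_ssd=1.0, c_field=1.0,
                  c_wedge=1.0, c_plate=1.0, c_temp=1.0):
    """Entrance dose = diode reading times calibration and correction factors."""
    return reading_gy * calibration * c_ssd * c_field * c_wedge * c_plate * c_temp

# e.g. extended SSD (~3% correction) and a wedged field (~1% correction)
dose = entrance_dose(1.95, 1.00, c_ssd=1.03, c_wedge=1.01)
```

The multiplicative form makes each correction independently testable, which is why the paper characterizes CSSD, Cfield, Cwedge, Cplate and the temperature dependence separately.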

  15. An interface finite element model can be used to predict healing outcome of bone fractures.

    PubMed

    Alierta, J A; Pérez, M A; García-Aznar, J M

    2014-01-01

After fractures, bone can experience different outcomes: successful consolidation, non-union or failure. Although many factors influence fracture healing, experimental studies have shown that the interfragmentary movement (IFM) is one of the main regulators of the course of bone healing. In this sense, computational models may help to improve the development of mechanics-based treatments for bone fracture healing. Hence, we propose a combined repair-failure mechanistic computational model to describe bone fracture healing. Despite being a simple model, it correctly estimates the time-course evolution of the IFM compared to in vivo measurements under different mechanical conditions. Therefore, this mathematical approach is especially suitable for modeling the healing response of bone to fractures treated with different mechanical fixators, simulating realistic clinical conditions. This model will be a useful tool to identify factors and define targets for patient-specific therapeutic interventions. © 2013 Published by Elsevier Ltd.

  16. Comparison of fission product release predictions using PARFUME with results from the AGR-1 safety tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise P.; Petti, David A.; Demkowicz, Paul A.

Safety tests were conducted on fuel compacts from AGR-1, the first irradiation experiment of the Advanced Gas Reactor (AGR) Fuel Development and Qualification program, at temperatures ranging from 1600 to 1800 °C to determine fission product release at temperatures that bound reactor accident conditions. The PARFUME (PARticle FUel ModEl) code was used to predict the release of fission products silver, cesium, strontium, and krypton from fuel compacts containing tristructural isotropic (TRISO) coated particles during 15 of these safety tests. Comparisons between PARFUME predictions and post-irradiation examination results of the safety tests were conducted on two types of AGR-1 compacts: compacts containing only intact particles and compacts containing one or more particles whose SiC layers failed during safety testing. In both cases, PARFUME globally over-predicted the experimental release fractions by several orders of magnitude: more than three (intact) and two (failed SiC) orders of magnitude for silver, more than three and up to two orders of magnitude for strontium, and up to two and more than one orders of magnitude for krypton. The release of cesium from intact particles was also largely over-predicted (by up to five orders of magnitude) but its release from particles with failed SiC was only over-predicted by a factor of about 3. These over-predictions can be largely attributed to an over-estimation of the diffusivities used in the modeling of fission product transport in TRISO-coated particles. The integral release nature of the data makes it difficult to estimate the individual over-estimations in the kernel or each coating layer. Nevertheless, a tentative assessment of correction factors to these diffusivities was performed to enable a better match between the modeling predictions and the safety testing results. The method could only be successfully applied to silver and cesium.
In the case of strontium, correction factors could not be assessed because potential release during the safety tests could not be distinguished from matrix content released during irradiation. Furthermore, in the case of krypton, all the coating layers are partly retentive and the available data did not allow the level of retention in individual layers to be determined, hence preventing derivation of any correction factors.

  18. Demographically Corrected Normative Standards for the Spanish Language Version of the NIH Toolbox Cognition Battery.

    PubMed

    Casaletto, Kaitlin B; Umlauf, Anya; Marquine, Maria; Beaumont, Jennifer L; Mungas, Daniel; Gershon, Richard; Slotkin, Jerry; Akshoomoff, Natacha; Heaton, Robert K

    2016-03-01

    Hispanics are the fastest growing ethnicity in the United States, yet there are limited well-validated neuropsychological tools in Spanish, and an even greater paucity of normative standards representing this population. The Spanish NIH Toolbox Cognition Battery (NIHTB-CB) is a novel neurocognitive screener; however, the original norms were developed combining Spanish- and English-versions of the battery. We developed normative standards for the Spanish NIHTB-CB, fully adjusting for demographic variables and based entirely on a Spanish-speaking sample. A total of 408 Spanish-speaking neurologically healthy adults (ages 18-85 years) and 496 children (ages 3-7 years) completed the NIH Toolbox norming project. We developed three types of scores: uncorrected based on the entire Spanish-speaking cohort, age-corrected, and fully demographically corrected (age, education, sex) scores for each of the seven NIHTB-CB tests and three composites (Fluid, Crystallized, Total Composites). Corrected scores were developed using polynomial regression models. Demographic factors demonstrated medium-to-large effects on uncorrected NIHTB-CB scores in a pattern that differed from that observed on the English NIHTB-CB. For example, in Spanish-speaking adults, education was more strongly associated with Fluid scores, but showed the strongest association with Crystallized scores among English-speaking adults. Demographic factors were no longer associated with fully corrected scores. The original norms were not successful in eliminating demographic effects, overestimating children's performances, and underestimating adults' performances on the Spanish NIHTB-CB. The disparate pattern of demographic associations on the Spanish versus English NIHTB-CB supports the need for distinct normative standards developed separately for each population. Fully adjusted scores presented here will aid in more accurately characterizing acquired brain dysfunction among U.S. Spanish-speakers.
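Regression-based demographic correction of the kind described can be sketched as follows: fit raw scores on demographic terms, then standardize the residuals. The data, coefficients, and quadratic-in-age form below are invented for illustration; the published norms use more elaborate polynomial models.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 408
age = rng.uniform(18, 85, n)
edu = rng.uniform(0, 20, n)
sex = rng.integers(0, 2, n)
# Hypothetical raw test scores with demographic effects plus noise
raw = 60 - 0.25 * age + 1.2 * edu + 2.0 * sex + rng.normal(0, 5, n)

# Polynomial (quadratic-in-age) regression of raw scores on demographics
X = np.column_stack([np.ones(n), age, age ** 2, edu, sex])
beta, *_ = np.linalg.lstsq(X, raw, rcond=None)
resid = raw - X @ beta

# Demographically corrected scaled scores: residual-based T-scores
t_scores = 50 + 10 * resid / resid.std()
```

By construction the residuals are orthogonal to every fitted demographic term, so the corrected scores are uncorrelated with age, education, and sex, which is the property the norming project verifies.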

  19. Analysis of position-dependent Compton scatter in scintimammography with mild compression

    NASA Astrophysics Data System (ADS)

    Williams, M. B.; Narayanan, D.; More, M. J.; Goodale, P. J.; Majewski, S.; Kieper, D. A.

    2003-10-01

In breast scintigraphy using (99m)Tc-sestamibi, the relatively low radiotracer uptake in the breast compared to that in other organs such as the heart results in a large fraction of the detected events being Compton-scattered gamma-rays. In this study, our goal was to determine whether generalized conclusions regarding scatter-to-primary ratios at various locations within the breast image are possible, and if so, to use them to make explicit scatter corrections to the breast scintigrams. Energy spectra were obtained from patient scans for contiguous regions of interest (ROIs) centered left to right within the image of the breast, and extending from the chest wall edge of the image to the anterior edge. An anthropomorphic torso phantom with fillable internal organs and a compressed-shape breast containing water only was used to obtain realistic position-dependent scatter-only spectra. For each ROI, the measured patient energy spectrum was fitted with a linear combination of the scatter-only spectrum from the anthropomorphic phantom and the scatter-free spectrum from a point source. We found that although there is a very strong dependence on location within the breast of the scatter-to-primary ratio, the spectra are well modeled by a linear combination of position-dependent scatter-only spectra and a position-independent scatter-free spectrum, resulting in a set of position-dependent correction factors. These correction factors can be used along with measured emission spectra from a given breast to correct for the Compton scatter in the scintigrams. However, the large variation among patients in the magnitude of the position-dependent scatter makes the success of universal correction approaches unlikely.
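The decomposition used here, fitting each ROI's measured spectrum as a linear combination of a scatter-only template and a scatter-free template, is easy to sketch. The Gaussian template shapes and mixing weights below are invented; only the least-squares fitting step mirrors the paper.

```python
import numpy as np

# Energy bins around the 140 keV photopeak of 99mTc
e = np.arange(100, 161, dtype=float)                    # keV

# Template spectra (hypothetical shapes): scatter-free and scatter-only
primary = np.exp(-0.5 * ((e - 140) / 5.0) ** 2)         # photopeak
scatter = np.exp(-0.5 * ((e - 120) / 12.0) ** 2)        # downscattered bump

# "Measured" ROI spectrum: known mixture plus noise
rng = np.random.default_rng(5)
measured = 0.6 * scatter + 1.0 * primary + rng.normal(0, 0.005, e.size)

# Least-squares fit of the mixture coefficients for this ROI
A = np.column_stack([scatter, primary])
(a, b), *_ = np.linalg.lstsq(A, measured, rcond=None)

scatter_to_primary = a * scatter.sum() / (b * primary.sum())
```

Repeating the fit per ROI yields the position-dependent scatter-to-primary ratios, and hence the correction factors, discussed in the abstract.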

  20. Description of cervical cancer mortality in Belgium using Bayesian age-period-cohort models

    PubMed Central

    2009-01-01

Objective To correct cervical cancer mortality rates for death-cause certification problems in Belgium and to describe the corrected trends (1954-1997) using Bayesian models. Method Mortality data for cervix uteri (CVX), corpus uteri (CRP), not-otherwise-specified (NOS) uterus cancer and other very rare uterus cancers (OTH) were extracted from the WHO mortality database, together with population data for Belgium and the Netherlands. Different ICD (International Classification of Diseases) revisions were used over time for death-cause certification. In the Netherlands, the proportion of not-otherwise-specified uterine cancer deaths was small over large periods, and therefore internal reallocation could be used to estimate corrected cervical cancer mortality rates. In Belgium, the proportion of improperly defined uterus cancer deaths was high. Therefore, the age-specific proportions of uterus cancer deaths that are probably of cervical origin in the Netherlands were applied to Belgian uterus cancer deaths to estimate the corrected number of cervical cancer deaths (corCVX). A Bayesian loglinear Poisson-regression model was fitted to disentangle the separate effects of age, period and cohort. Results The corrected age-standardized mortality rate (ASMR) decreased regularly from 9.2/100,000 in the mid 1950s to 2.5/100,000 in the late 1990s. Inclusion of age, period and cohort in the models was required to obtain an adequate fit. Cervical cancer mortality increases with age, declines over calendar period and varies irregularly by cohort. Conclusion Mortality increased with ageing and declined over time in most age groups, but varied irregularly by birth cohort. Overall, with some exceptions, mortality decreased for successive generations up to the cohorts born in the 1930s. This decline stopped for cohorts born in the 1940s and thereafter.
For the youngest cohorts, even a tendency of increasing risk of dying from cervical cancer could be observed, reflecting increased exposure to risk factors. The fact that this increase was limited for the youngest cohorts could be explained as an effect of screening. Bayesian modeling provided similar results compared to previously used classical Poisson models. However, Bayesian models are more robust for estimating rates when data are sparse (youngest age groups, most recent cohorts) and can be used to for predicting future trends.
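
    The internal reallocation step described above can be sketched as a proportional split of NOS deaths within each age group. The counts and the fallback rule below are invented for illustration and are not the authors' exact procedure:

```python
# Hypothetical illustration of "internal reallocation": redistribute
# not-otherwise-specified (NOS) uterus cancer deaths to cervix (CVX)
# and corpus (CRP) in proportion to the certified deaths per age group.
def reallocate_nos(cvx, crp, nos):
    """Return corrected CVX death counts per age group."""
    corrected = []
    for c, k, n in zip(cvx, crp, nos):
        total = c + k
        share = c / total if total > 0 else 0.5  # even split if no certified deaths
        corrected.append(c + n * share)
    return corrected

cvx = [40, 55, 60]  # certified cervix deaths by age group (made-up numbers)
crp = [10, 25, 40]  # certified corpus deaths
nos = [5, 8, 10]    # NOS uterus deaths to redistribute
print(reallocate_nos(cvx, crp, nos))
```

    The same age-specific shares, estimated from Dutch data, could then be applied to the Belgian NOS counts, as the abstract describes.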

  1. How does bias correction of RCM precipitation affect modelled runoff?

    NASA Astrophysics Data System (ADS)

    Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.

    2014-09-01

Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the difference between the tested methods is small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the limitations of the RCM in simulating the precipitation sequence, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
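
    The quantile-mapping family of corrections mentioned above can be sketched with a simple empirical version. The gamma samples stand in for simulated and observed precipitation; the paper's two-state gamma method is more elaborate:

```python
import numpy as np

# Minimal empirical quantile-mapping sketch: map model values onto the
# observed distribution by matching empirical cumulative probabilities.
def quantile_map(model_hist, obs, model_future):
    model_hist = np.sort(model_hist)
    # Empirical rank of each future value within the historical model values
    ranks = np.searchsorted(model_hist, model_future, side="right") / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Read off the same quantiles from the observed distribution
    return np.quantile(obs, ranks)

rng = np.random.default_rng(0)
model = rng.gamma(2.0, 3.0, size=1000)  # biased "simulated" precipitation
obs = rng.gamma(2.0, 2.0, size=1000)    # "observed" precipitation
corrected = quantile_map(model, obs, model)
print(round(float(corrected.mean()), 2), round(float(obs.mean()), 2))  # means now close
```

    Non-stationarity is exactly the failure mode this sketch exposes: the mapping is fitted on one period and silently assumed to hold in another.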

  2. Biomechanical aspects of lower limb torsional deformation correction with the Ilizarov external fixator.

    PubMed

    Morasiewicz, Piotr; Filipiak, Jarosław; Krysztoforski, Krzysztof; Dragan, Szymon

    2014-03-01

The correction of torsional deformities with the Ilizarov apparatus is accompanied by rotational and translational displacement, which affects the biomechanics of the bone fragments. Understanding the biomechanical factors will assist in designing the optimal treatment strategy and mechanical properties of the fixator, thus shortening the duration of treatment and improving the outcomes. In order to determine the impact of different types of derotators on the kinematics of bone fragments in the Ilizarov apparatus, physical models were studied. Translational and derotational displacements were measured using a non-contact method (Optotrak Certus Motion Capture System). The studies conducted on physical models have shown that, regardless of the type of derotator, the divergence between the applied angle of derotation and the obtained angle of rotation of the fragments needs to be taken into account. Transverse displacement of the fragments ranged from 3.5 mm to approximately 9 mm, depending on the angle of derotation. For correction of rotational deformities up to 30°, it is advisable to use type Z derotators because of their higher accuracy of derotation. Different types of derotators can affect the biomechanical conditions in the regenerating bone tissue through their different kinematic characteristics.

  3. Meta-analysis of the effect of road work zones on crash occurrence.

    PubMed

    Theofilatos, Athanasios; Ziakopoulos, Apostolos; Papadimitriou, Eleonora; Yannis, George; Diamandouros, Konstantinos

    2017-11-01

There is strong evidence that work zones pose an increased risk of crashes and injuries. The two risk factors most commonly associated with increased crash frequencies are work zone duration and length. However, relevant research on the topic is relatively limited. For that reason, this paper presents formal meta-analyses of studies that have estimated the relationship between the number of crashes and work zone duration and length, in order to provide overall estimates of those effects on crash frequencies. All studies included are crash prediction models with similar specifications. According to the meta-analyses, and after correcting for publication bias where appropriate, the summary estimates of the regression coefficients were found to be 0.1703 for duration and 0.862 for length. These effects were significant for length but not for duration. However, the overall estimate for duration was significant before correcting for publication bias. Separate meta-analyses of the studies examining both duration and length were also carried out in order to obtain rough estimates of the combined effects. The estimate for duration was found to be 0.953, while that for length was 0.847. As in the previous meta-analyses, the effect of duration after correcting for publication bias was not significant, while the effect of length was significant at the 95% level. Meta-regression findings indicate that the main factors influencing the overall estimates of the beta coefficients are study year and region for duration, and study year and model specification for length. Copyright © 2017 Elsevier Ltd. All rights reserved.
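
    The pooling behind such summary estimates can be illustrated with fixed-effect inverse-variance weighting. The coefficients and standard errors below are invented; the paper's analyses additionally handle publication bias and heterogeneity:

```python
import math

# Fixed-effect inverse-variance pooling: each study's coefficient is
# weighted by the inverse of its squared standard error.
def pooled_estimate(betas, ses):
    weights = [1.0 / se**2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se

betas = [0.15, 0.20, 0.12]  # per-study regression coefficients (illustrative)
ses = [0.05, 0.08, 0.06]    # their standard errors (illustrative)
beta, se = pooled_estimate(betas, ses)
print(round(beta, 4), round(se, 4))
```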

  4. Application of the epidemiological model in studying human error in aviation

    NASA Technical Reports Server (NTRS)

    Cheaney, E. S.; Billings, C. E.

    1981-01-01

    An epidemiological model is described in conjunction with the analytical process through which aviation occurrence reports are composed into the events and factors pertinent to it. The model represents a process in which disease, emanating from environmental conditions, manifests itself in symptoms that may lead to fatal illness, recoverable illness, or no illness depending on individual circumstances of patient vulnerability, preventive actions, and intervention. In the aviation system the analogy of the disease process is the predilection for error of human participants. This arises from factors in the operating or physical environment and results in errors of commission or omission that, again depending on the individual circumstances, may lead to accidents, system perturbations, or harmless corrections. A discussion of the previous investigations, each of which manifests the application of the epidemiological method, exemplifies its use and effectiveness.

  5. Control circuit maintains unity power factor of reactive load

    NASA Technical Reports Server (NTRS)

    Kramer, M.; Martinage, L. H.

    1966-01-01

Circuit including feedback control elements automatically corrects the power factor of a reactive load. It maintains power supply efficiency as the negative load reactance changes by providing corrective error signals to the control windings of a power supply transformer.
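
    The underlying correction can be illustrated with the textbook shunt-capacitor calculation. All component values are invented, and the NASA circuit itself uses feedback control rather than a fixed capacitor:

```python
import math

# Classic power-factor correction sizing: the capacitor must supply the
# reactive power difference between the old and the target power factor.
def correction_capacitance(p_watts, pf_old, pf_new, v_rms, freq_hz):
    q_old = p_watts * math.tan(math.acos(pf_old))  # reactive power before
    q_new = p_watts * math.tan(math.acos(pf_new))  # reactive power after
    q_cap = q_old - q_new                          # VAR the capacitor supplies
    return q_cap / (2 * math.pi * freq_hz * v_rms**2)

# 1 kW load at PF 0.7, corrected to 0.95 on a 230 V / 50 Hz supply
c = correction_capacitance(1000.0, 0.7, 0.95, 230.0, 50.0)
print(f"{c * 1e6:.1f} uF")
```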

  6. Experimental Verification of the Theory of Wind-Tunnel Boundary Interference

    NASA Technical Reports Server (NTRS)

    Theodorsen, Theodore; Silverstein, Abe

    1935-01-01

    The results of an experimental investigation on the boundary-correction factor are presented in this report. The values of the boundary-correction factor from the theory, which at the present time is virtually completed, are given in the report for all conventional types of tunnels. With the isolation of certain disturbing effects, the experimental boundary-correction factor was found to be in satisfactory agreement with the theoretically predicted values, thus verifying the soundness and sufficiency of the theoretical analysis. The establishment of a considerable velocity distortion, in the nature of a unique blocking effect, constitutes a principal result of the investigation.

  7. A causal model explaining the relationships governing beliefs, attitudes, and hypnotic responsiveness.

    PubMed

    Shimizu, Takahiro

    2014-01-01

The author developed a new scale aimed at measuring beliefs about "hypnotic states" and investigated the influence of such beliefs and attitudes on hypnotic responses in a large sample of Japanese undergraduate students. Exploratory factor analysis of this new questionnaire examining beliefs about hypnotic states yielded four factors: Dissociative or Depersonalized Experience, Loss of Self-Control, Therapeutic Expectation, and Arousing Extraordinary Ability. The results of structural equation modeling showed that Therapeutic Expectation and Arousing Extraordinary Ability influenced hypnotizability through attitudes toward hypnosis, while also directly affecting subjective experiences without mediating attitudes. Present findings suggest that it is more effective to enhance therapeutic expectations than to correct misconceptions about hypnotic states when modifying patients' beliefs before initiating treatment.

  8. Structure factor and radial distribution function of some liquid lanthanides using charged hard sphere

    NASA Astrophysics Data System (ADS)

    Patel, H. P.; Sonvane, Y. A.; Thakor, P. B.

    2017-05-01

The structure factor S(q) and radial distribution function g(r) play a vital role in the study of various structural properties, such as electronic, dynamic and magnetic properties. The present paper deals with structural studies of the aforesaid properties using our newly constructed parameter-free model potential with the Charged Hard Sphere (CHS) approximation. The local field correction due to Sarkar et al. is used to incorporate exchange and correlation among the conduction electrons in dielectric screening. Here we report S(q) and g(r) for some liquid lanthanides, viz. La, Ce, Pr, Nd and Eu. The computed results are compared with the available experimental data. We find that our parameter-free model potential successfully explains the structural properties of 4f liquid lanthanides.

  9. Improved scatterer property estimates from ultrasound backscatter for small gate lengths using a gate-edge correction factor

    NASA Astrophysics Data System (ADS)

    Oelze, Michael L.; O'Brien, William D.

    2004-11-01

Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated regions change the frequency content of the backscattered power spectrum due to truncation of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of the gate-edge effects. A new gate-edge correction factor was developed that partially accounted for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths compared to conventional windowing functions. The gate-edge correction factor gave estimates of scatterer properties within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and from measurements on glass-bead phantoms. While the gate-edge correction factor gave higher accuracy of estimates at smaller gate lengths, the precision of estimates was not improved at small gate lengths over conventional windowing functions.
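
    The gating step can be sketched as follows. This shows only how a windowed power spectrum is formed from a gated rf segment, not the paper's gate-edge correction factor itself; the toy signal and gate parameters are invented:

```python
import numpy as np

# Power spectrum of a gated rf segment; a Hanning taper suppresses the
# spectral leakage caused by truncating the waveform at the gate edges.
def gated_power_spectrum(rf, start, length, window="hanning"):
    seg = rf[start:start + length].astype(float)
    if window == "hanning":
        seg = seg * np.hanning(length)
    return np.abs(np.fft.rfft(seg)) ** 2  # magnitude-squared spectrum

t = np.linspace(0.0, 1e-5, 2000)
rf = np.sin(2 * np.pi * 5e6 * t)  # toy 5 MHz backscatter signal
spec = gated_power_spectrum(rf, 500, 256)
print(spec.shape)
```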

  10. Air-braked cycle ergometers: validity of the correction factor for barometric pressure.

    PubMed

    Finn, J P; Maxwell, B F; Withers, R T

    2000-10-01

Barometric pressure exerts by far the greatest influence of the three environmental factors (barometric pressure, temperature and humidity) on power outputs from air-braked ergometers. The barometric pressure correction factor for power outputs from air-braked ergometers is in widespread use but apparently has never been empirically validated. Our experiment validated this correction factor by calibrating two air-braked cycle ergometers in a hypobaric chamber using a dynamic calibration rig. The results showed that if the power output corrections for changes in air resistance at barometric pressures corresponding to altitudes of 38, 600, 1,200 and 1,800 m above mean sea level were applied, then the coefficients of variation were 0.8-1.9% over the range of 160-1,597 W. The overall mean error was 3.0%, but this included up to 0.73% for the propagated error associated with errors in the measurement of: (a) temperature; (b) relative humidity; (c) barometric pressure; and (d) force, distance and angular velocity by the dynamic calibration rig. The overall mean error therefore approximated the ±2.0% of true load specified by the Laboratory Standards Assistance Scheme of the Australian Sports Commission. The validity of the correction factor for barometric pressure on power output was therefore demonstrated over the altitude range of 38-1,800 m.
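
    A hedged sketch of the density-based idea behind such corrections: drag power on an air-braked flywheel scales with air density, so a measured power can be rescaled by the ratio of a reference density to the ambient density. The dry-air ideal gas law and the reference conditions below are assumptions; the operational formula validated by the authors may differ:

```python
R_DRY_AIR = 287.05  # specific gas constant of dry air, J/(kg K)

def air_density(pressure_pa, temp_c):
    # Ideal gas law for dry air (humidity neglected in this sketch)
    return pressure_pa / (R_DRY_AIR * (temp_c + 273.15))

def corrected_power(measured_w, pressure_pa, temp_c,
                    ref_pressure_pa=101325.0, ref_temp_c=20.0):
    factor = air_density(ref_pressure_pa, ref_temp_c) / air_density(pressure_pa, temp_c)
    return measured_w * factor

# At reference conditions the factor is 1; in thinner air at altitude
# the same reading maps to a larger reference-condition power.
print(round(corrected_power(300.0, 101325.0, 20.0), 1))
```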

  11. Controlling for anthropogenically induced atmospheric variation in stable carbon isotope studies

    USGS Publications Warehouse

    Long, E.S.; Sweitzer, R.A.; Diefenbach, D.R.; Ben-David, M.

    2005-01-01

Increased use of stable isotope analysis to examine food-web dynamics, migration, transfer of nutrients, and behavior will likely result in expansion of stable isotope studies investigating human-induced global changes. Recent elevation of atmospheric CO2 concentration, related primarily to fossil fuel combustion, has reduced atmospheric CO2 δ13C (13C/12C), and this change in isotopic baseline has, in turn, reduced plant and animal tissue δ13C of terrestrial and aquatic organisms. Such depletion in CO2 δ13C and its effects on tissue δ13C may introduce bias into δ13C investigations, and if this variation is not controlled, may confound interpretation of results obtained from tissue samples collected over a temporal span. To control for this source of variation, we used a high-precision record of atmospheric CO2 δ13C from ice cores and direct atmospheric measurements to model modern change in CO2 δ13C. From this model, we estimated a correction factor that controls for atmospheric change; this correction reduces bias associated with changes in atmospheric isotopic baseline and facilitates comparison of tissue δ13C collected over multiple years. To exemplify the importance of accounting for atmospheric CO2 δ13C depletion, we applied the correction to a dataset of collagen δ13C obtained from mountain lion (Puma concolor) bone samples collected in California between 1893 and 1995. Before correction, in three of four ecoregions collagen δ13C decreased significantly concurrent with depletion of atmospheric CO2 δ13C (n ≥ 32, P ≤ 0.01). Application of the correction to collagen δ13C data removed trends from regions demonstrating significant declines, and measurement error associated with the correction did not add substantial variation to adjusted estimates. Controlling for long-term atmospheric variation and correcting tissue samples for changes in isotopic baseline facilitate analysis of samples that span a large temporal range. © Springer-Verlag 2005.
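
    The correction described above amounts to shifting each sample by the atmospheric δ13C change since a common baseline year. The atmospheric values below are rough round numbers for illustration, not the high-precision ice-core record the authors used:

```python
# Approximate atmospheric CO2 d13C (per mil) at a few years; the decline
# over the 20th century is the fossil-fuel "Suess effect". Values invented
# for illustration only.
ATMOS_D13C = {1900: -6.6, 1950: -6.9, 1995: -7.9}

def correct_to_reference(tissue_d13c, sample_year, ref_year=1900):
    """Shift a sample's d13C by the atmospheric change since ref_year."""
    shift = ATMOS_D13C[sample_year] - ATMOS_D13C[ref_year]
    return tissue_d13c - shift

# A 1995 collagen value expressed on the 1900 atmospheric baseline:
print(round(correct_to_reference(-20.0, 1995), 2))
```

    After this adjustment, samples spanning a century can be compared on a common isotopic baseline, which is the point the abstract makes for the mountain-lion collagen series.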

  12. Classification and correction of the radar bright band with polarimetric radar

    NASA Astrophysics Data System (ADS)

    Hall, Will; Rico-Ramirez, Miguel; Kramer, Stefan

    2015-04-01

The annular region of enhanced radar reflectivity, known as the Bright Band (BB), occurs when the radar beam intersects a layer of melting hydrometeors. Radar reflectivity is related to rainfall through a power law equation and so this enhanced region can lead to overestimations of rainfall by a factor of up to 5, so it is important to correct for this. The BB region can be identified by using several techniques including hydrometeor classification and freezing level forecasts from mesoscale meteorological models. Advances in dual-polarisation radar measurements and continued research in the field has led to increased accuracy in the ability to identify the melting snow region. A method proposed by Kitchen et al (1994), a form of which is currently used operationally in the UK, utilises idealised Vertical Profiles of Reflectivity (VPR) to correct for the BB enhancement. A simpler and more computationally efficient method involves the formation of an average VPR from multiple elevations for correction that can still cause a significant decrease in error (Vignal et al. 2000). The purpose of this research is to evaluate a method that relies only on analysis of measurements from an operational C-band polarimetric radar without the need for computationally expensive models. Initial results show that LDR is a strong classifier of melting snow with a high Critical Success Index of 97% when compared to the other variables. An algorithm based on idealised VPRs resulted in the largest decrease in error when BB corrected scans are compared to rain gauges and to lower level scans with a reduction in RMSE of 61% for rain-rate measurements. References Kitchen, M., R. Brown, and A. G. Davies, 1994: Real-time correction of weather radar data for the effects of bright band, range and orographic growth in widespread precipitation. Q.J.R. Meteorol. Soc., 120, 1231-1254. Vignal, B., et al., 2000: Three methods to determine profiles of reflectivity from volumetric radar data to correct precipitation estimates. J. Appl. Meteor., 39(10), 1715-1726.
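
    The average-VPR idea can be reduced to a toy calculation: estimate the bright-band enhancement from a mean vertical profile of reflectivity and subtract it from measurements taken inside the melting layer. All profile numbers below are invented:

```python
import numpy as np

# Toy mean vertical profile of reflectivity (VPR); the peak at 1.5 km
# represents the bright band (all values invented for illustration).
heights_km = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
mean_vpr_dbz = np.array([30.0, 31.0, 38.0, 31.5, 28.0])

def bb_correction(measured_dbz, height_km):
    ref = mean_vpr_dbz[0]  # near-surface reference level
    enhancement = np.interp(height_km, heights_km, mean_vpr_dbz) - ref
    return measured_dbz - max(float(enhancement), 0.0)  # subtract BB excess only

print(bb_correction(40.0, 1.5))  # measurement inside the melting layer
```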

  13. Oppositional Defiant Disorder toward Adults and Oppositional Defiant Disorder toward Peers: Initial Evidence for Two Separate Constructs

    ERIC Educational Resources Information Center

    Taylor, Ted K.; Burns, G. Leonard; Rusby, Julie C.; Foster, E. Michael

    2006-01-01

    Confirmatory factor analysis of 25 items on the Child and Adolescent Disruptive Behavior Inventory (CADBI, Version 2.3; G. L. Burns, T. K. Taylor, & J. C. Rusby, 2001) was conducted on teacher ratings of 824 kindergarten children and replicated on 534 children. Model fit was improved when correcting for 2 method effects: (a) adjacent items and…

  14. A Cyberspace Command and Control Model (Maxwell Paper, Number 47)

    DTIC Science & Technology

    2009-08-01

mand [for] Network Warfare [JFCC-NW]). This arrangement further creates gaps and seams as the DOD attempts, through a traditional, hierarchical...pervasiveness of cyberspace coupled with aggressive adversaries creates a volatile environment. The DOD, along with the rest of the world, is...these factors, along with the challenges of attributing cyberspace intrusions, combine to create an environment of frustrating ambiguity. The correct

  15. How does bias correction of regional climate model precipitation affect modelled runoff?

    NASA Astrophysics Data System (ADS)

    Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.

    2015-02-01

Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the limitations of the RCM in simulating the precipitation sequence, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.

  16. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
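
    The quantity being corrected above is the speckle contrast K = sigma/mean over a local window; a multiplicative system factor then compensates for the contrast reduction caused by pixel-area averaging. The window size and the factor value of 0.8 below are invented, and a real instrument's factor must be calibrated:

```python
import numpy as np

# Speckle contrast map with a simple multiplicative system-factor
# correction for spatial averaging (linear form, as the paper validates).
def speckle_contrast(img, win=7, system_factor=0.8):
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + win, j:j + win]
            out[i, j] = patch.std() / patch.mean()  # local contrast K
    return out / system_factor  # undo pixel-averaging contrast loss

rng = np.random.default_rng(1)
# Exponentially distributed intensity mimics fully developed speckle (K ~ 1)
img = rng.exponential(100.0, size=(32, 32))
k = speckle_contrast(img)
print(k.shape)
```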

  17. A multi-level approach for investigating socio-economic and agricultural risk factors associated with rates of reported cases of Escherichia coli O157 in humans in Alberta, Canada.

    PubMed

    Pearl, D L; Louie, M; Chui, L; Doré, K; Grimsrud, K M; Martin, S W; Michel, P; Svenson, L W; McEwen, S A

    2009-10-01

    Using negative binomial and multi-level Poisson models, the authors determined the statistical significance of agricultural and socio-economic risk factors for rates of reported disease associated with Escherichia coli O157 in census subdivisions (CSDs) in Alberta, Canada, 2000-2002. Variables relating to population stability, aboriginal composition of the CSDs, and the economic relationship between CSDs and urban centres were significant risk factors. The percentage of individuals living in low-income households was not a statistically significant risk factor for rates of disease. The statistical significance of cattle density, recorded at a higher geographical level, depended on the method used to correct for overdispersion, the number of levels included in the multi-level models, and the choice of using all reported cases or only sporadic cases. Our results highlight the importance of local socio-economic risk factors in determining rates of disease associated with E. coli O157, but their relationship with individual risk factors requires further evaluation.

  18. Accuracy of 1D microvascular flow models in the limit of low Reynolds numbers.

    PubMed

    Pindera, Maciej Z; Ding, Hui; Athavale, Mahesh M; Chen, Zhijian

    2009-05-01

    We describe results of numerical simulations of steady flows in tubes with branch bifurcations using fully 3D and reduced 1D geometries. The intent is to delineate the range of validity of reduced models used for simulations of flows in microcapillary networks, as a function of the flow Reynolds number Re. Results from model problems indicate that for Re less than 1 and possibly as high as 10, vasculatures may be represented by strictly 1D Poiseuille flow geometries with flow variation in the axial dimensions only. In that range flow rate predictions in the different branches generated by 1D and 3D models differ by a constant factor, independent of Re. When the cross-sectional areas of the branches are constant these differences are generally small and appear to stem from an uncertainty of how the individual branch lengths are defined. This uncertainty can be accounted for by a simple geometrical correction. For non-constant cross-sections the differences can be much more significant. If additional corrections for the presence of branch junctions and flow area variations are not taken into account in 1D models of complex vasculatures, the resultant flow predictions should be interpreted with caution.
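
    The 1D Poiseuille representation discussed above treats each branch as a hydraulic resistance, so flow splits between parallel daughters in inverse proportion to resistance. The geometry and viscosity below are illustrative assumptions:

```python
import math

# Hydraulic resistance of a cylindrical branch under Poiseuille flow:
# R = 8 * mu * L / (pi * r^4), so flow rate Q = dP / R.
def resistance(mu, length, radius):
    return 8.0 * mu * length / (math.pi * radius**4)

mu = 3.5e-3                       # blood-like viscosity, Pa s (illustrative)
r1 = resistance(mu, 1e-3, 40e-6)  # daughter branch 1, radius 40 um
r2 = resistance(mu, 1e-3, 30e-6)  # narrower daughter branch 2, radius 30 um
dp = 100.0                        # pressure drop across the junction, Pa

q1, q2 = dp / r1, dp / r2
print(round(q1 / q2, 2))  # flow ratio equals (40/30)^4
```

    The r^4 dependence is why the abstract warns that differences between 1D and 3D predictions grow when branch cross-sections are not constant.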

  19. TU-F-CAMPUS-T-04: Variations in Nominally Identical Small Fields From Photon Jaw Reproducibility and Associated Effects On Small Field Dosimetric Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, B R; McEwen, M R

    2015-06-15

Purpose: To investigate uncertainties in small field output factors and detector specific correction factors arising from variations in field size for nominally identical fields, using measurements and Monte Carlo simulations. Methods: Repeated measurements of small field output factors are made with the Exradin W1 (plastic scintillation detector) and the PTW microDiamond (synthetic diamond detector) in beams from the Elekta Precise linear accelerator. We investigate corrections for a 0.6×0.6 cm² nominal field size shaped with secondary photon jaws at 100 cm source to surface distance (SSD). Measurements of small field profiles are made in a water phantom at 10 cm depth using both detectors and are subsequently used for accurate detector positioning. Supplementary Monte Carlo simulations with EGSnrc are used to calculate the absorbed dose to the detector and absorbed dose to water under the same conditions when varying field size. The jaws in the BEAMnrc model of the accelerator are varied by a reasonable amount to investigate the same situation without the influence of measurement uncertainties (such as detector positioning or variation in beam output). Results: For both detectors, small field output factor measurements differ by up to 11% when repeated measurements are made in nominally identical 0.6×0.6 cm² fields. Variations in the FWHM of measured profiles are consistent with field size variations reported by the accelerator. Monte Carlo simulations of the dose to the detector vary by up to 16% under worst-case variations in field size. These variations are also present in calculations of absorbed dose to water. However, calculated detector specific correction factors are within 1% when varying field size because of cancellation of effects.
Conclusion: Clinical physicists should be aware of potentially significant uncertainties in measured output factors required for dosimetry of small fields due to field size variations for nominally identical fields.
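
    The quantity at stake can be sketched as the standard small-field relation: a field output factor is the ratio of detector readings times a detector- and field-specific correction factor. All numbers below are invented for illustration:

```python
# Field output factor = (reading in clinical field / reading in reference
# field) * detector-specific output correction factor. The correction
# factor depends on detector, beam, and field size; 1.012 is invented.
def output_factor(m_clin, m_ref, k_correction):
    return (m_clin / m_ref) * k_correction

m_small = 0.412  # detector reading in the 0.6 x 0.6 cm2 field (illustrative)
m_ref = 1.000    # reading in the reference field (illustrative)
k = 1.012        # detector-specific output correction factor (illustrative)
print(round(output_factor(m_small, m_ref, k), 4))
```

    The abstract's point is that the reading ratio itself can vary by up to 11% between nominally identical small fields, while the correction factor k is stable to within 1%.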

  20. Roi-Orientated Sensor Correction Based on Virtual Steady Reimaging Model for Wide Swath High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.

    2017-09-01

To meet the requirements of high accuracy and high speed processing for wide swath high resolution optical satellite imagery in emergency situations, in both ground processing systems and on-board processing systems, this paper proposes a ROI-orientated sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated from the coordinate mapping relationship established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between the panchromatic and multispectral images is well achieved and that image distortion caused by satellite jitter is also corrected efficiently.
