Sample records for sample size recalculation

  1. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
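
    For illustration only, a minimal fixed-sample sketch (not the authors' group-sequential method) of power and sample size for two co-primary endpoints, assuming a bivariate-normal approximation for the two test statistics; the effect sizes, correlation and one-sided alpha below are placeholder assumptions.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def power_coprimary(n_per_group, delta1, delta2, rho, alpha=0.025):
    """Probability of showing superiority on BOTH endpoints at a single analysis.

    delta1, delta2: standardized effect sizes (mean difference / SD); rho:
    correlation between the endpoints, carried over to the test statistics.
    """
    c = norm.ppf(1 - alpha)                       # one-sided critical value
    mu = np.sqrt(n_per_group / 2.0) * np.array([delta1, delta2])
    cov = [[1.0, rho], [rho, 1.0]]
    # P(Z1 > c and Z2 > c) for Z ~ N(mu, cov), evaluated via the CDF of -Z
    return multivariate_normal.cdf([-c, -c], mean=-mu, cov=cov)

def n_coprimary(delta1, delta2, rho, alpha=0.025, target=0.80):
    """Smallest per-group n reaching the target power on both endpoints."""
    n = 2
    while power_coprimary(n, delta1, delta2, rho, alpha) < target:
        n += 1
    return n

print(n_coprimary(delta1=0.4, delta2=0.4, rho=0.5))   # assumed design inputs
```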

  2. An internal pilot study for a randomized trial aimed at evaluating the effectiveness of iron interventions in children with non-anemic iron deficiency: the OptEC trial.

    PubMed

    Abdullah, Kawsari; Thorpe, Kevin E; Mamak, Eva; Maguire, Jonathon L; Birken, Catherine S; Fehlings, Darcy; Hanley, Anthony J; Macarthur, Colin; Zlotkin, Stanley H; Parkin, Patricia C

    2015-07-14

    The OptEC trial aims to evaluate the effectiveness of oral iron in young children with non-anemic iron deficiency (NAID). The initial sample size calculated for the OptEC trial ranged from 112 to 198 subjects. Given the uncertainty regarding the parameters used to calculate the sample, an internal pilot study was conducted. The objectives of this internal pilot study were to obtain reliable estimates of the parameters (standard deviation and design factor) needed to recalculate the sample size and to assess the adherence rate and reasons for non-adherence in children enrolled in the pilot study. The first 30 subjects enrolled into the OptEC trial constituted the internal pilot study. The primary outcome of the OptEC trial is the Early Learning Composite (ELC). For estimation of the SD of the ELC, descriptive statistics of the 4 month follow-up ELC scores were assessed within each intervention group. The observed SD within each group was then pooled to obtain an estimated SD (S2) of the ELC. Correlation (ρ) between the ELC measured at baseline and follow-up was assessed. Recalculation of the sample size was performed using the analysis of covariance (ANCOVA) method, which uses the design factor (1 - ρ²). The adherence rate was calculated using a parent-reported rate of missed doses of the study intervention. The new estimate of the SD of the ELC was found to be 17.40 (S2). The design factor was (1 - ρ²) = 0.21. Using a significance level of 5%, power of 80%, S2 = 17.40 and an effect estimate (Δ) ranging from 6 to 8 points, the new sample size based on the ANCOVA method ranged from 32 to 56 subjects (16 to 28 per group). Adherence ranged between 14% and 100%, with 44% of the children having an adherence rate ≥ 86%. Information generated from our internal pilot study was used to update the design of the full and definitive trial, including recalculation of the sample size, determination of the adequacy of adherence, and application of strategies to improve adherence. ClinicalTrials.gov Identifier: NCT01481766 (date of registration: November 22, 2011).
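
    A minimal sketch of the ANCOVA-based recalculation described above, assuming the usual normal-approximation formula n per group = 2 (z_{1-α/2} + z_{power})^2 SD^2 (1 - ρ^2) / Δ^2; plugging in the pilot estimates quoted in the record (SD = 17.40, design factor 0.21, Δ = 6 to 8) roughly reproduces the reported 16 to 28 subjects per group.

```python
import math
from scipy.stats import norm

def ancova_n_per_group(sd, design_factor, delta, alpha=0.05, power=0.80):
    """Per-group n for a two-arm comparison adjusted for baseline (ANCOVA).

    design_factor = 1 - rho**2, where rho is the baseline/follow-up correlation.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * z**2 * sd**2 * design_factor / delta**2)

for delta in (6, 8):                      # effect estimates from the record
    n = ancova_n_per_group(sd=17.40, design_factor=0.21, delta=delta)
    print(delta, n, 2 * n)                # per-group and total sample size
```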

  3. Group-sequential three-arm noninferiority clinical trial designs

    PubMed Central

    Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko

    2016-01-01

    We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481

  4. Recalculation of the Critical Size and Multiplication Constant of a Homogeneous UO₂-D₂O Mixture

    DOE R&D Accomplishments Database

    Wigner, E. P.; Weinberg, A. M.; Stephenson, J.

    1944-02-11

    The multiplication constant and optimal concentration of a slurry pile are recalculated on the basis of Mitchell's experiments on resonance absorption. The smallest chain-reacting unit contains 45 to 55 m³ of D₂O. (auth)

  6. Re-estimating sample size in cluster randomised trials with active recruitment within clusters.

    PubMed

    van Schie, S; Moerbeek, M

    2014-08-30

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
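
    A generic sketch of the re-estimation step described above, assuming the standard two-sample formula inflated by the design effect 1 + (m - 1)·ICC; the effect size, SD, ICC values and cluster size are placeholder assumptions, not values from the paper.

```python
import math
from scipy.stats import norm

def cluster_n_per_arm(delta, sigma, icc, cluster_size, alpha=0.05, power=0.80):
    """Individuals per arm for a cluster randomised trial: the individually
    randomised sample size times the design effect 1 + (m - 1) * ICC."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_individual = 2 * z**2 * sigma**2 / delta**2
    return math.ceil(n_individual * (1 + (cluster_size - 1) * icc))

# re-calculation with the ICC re-estimated from an internal pilot
print(cluster_n_per_arm(delta=0.3, sigma=1.0, icc=0.10, cluster_size=20))
print(cluster_n_per_arm(delta=0.3, sigma=1.0, icc=0.03, cluster_size=20))
```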

  7. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
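
    A sketch of the weighted-statistic idea referred to above, an inverse-normal combination of stage-wise z statistics with pre-specified weights; the stage results and weights in the example are assumptions.

```python
import math
from scipy.stats import norm

def weighted_z(z1, z2, w1):
    """Combine stage-wise z statistics with fixed weights (w1**2 + w2**2 = 1).

    Under H0 the combination is standard normal whatever rule was used to pick
    the stage-2 sample size from interim data, so comparing it with the
    unadjusted critical value keeps the type-I error rate at its nominal level.
    """
    w2 = math.sqrt(1.0 - w1**2)
    return w1 * z1 + w2 * z2

z_combined = weighted_z(z1=1.1, z2=1.7, w1=math.sqrt(0.5))   # assumed stage results
print(z_combined, z_combined > norm.ppf(0.975))              # one-sided 2.5% test
```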

  8. Characterization of Air Particles Giving False Responses with Biological Detectors

    DTIC Science & Technology

    1975-07-01

    Excerpts (scanned report): particle size distribution of SM particles; scanning electron micrographs of typical aggregates of SM bacteria. Size data obtained for calcite (density = 2.75) were recalculated for bacteria (density ca. 1.15), and both sets of size data are plotted. Particulate substances giving a CL response >10 mV include algae, disodium phosphate, kelp, dandruff, sheep manure and lemon powder.

  9. 40 CFR 60.705 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Volatile Organic Compound Emissions From Synthetic Organic Chemical Manufacturing Industry (SOCMI) Reactor... used or where the reactor process vent stream is introduced as the primary fuel to any size boiler or... equipment or reactors; (2) Any recalculation of the TRE index value performed pursuant to § 60.704(f); and...

  10. 40 CFR 60.705 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Volatile Organic Compound Emissions From Synthetic Organic Chemical Manufacturing Industry (SOCMI) Reactor... used or where the reactor process vent stream is introduced as the primary fuel to any size boiler or... equipment or reactors; (2) Any recalculation of the TRE index value performed pursuant to § 60.704(f); and...

  11. The (mis)reporting of statistical results in psychology journals.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2011-09-01

    In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers' expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
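
    The consistency checks described above amount to recomputing a p value from the reported test statistic and degrees of freedom and comparing it with the reported p value; a minimal sketch, where the statistic and degrees of freedom in the example are invented for illustration.

```python
from scipy.stats import t, f, chi2

def recomputed_p(test, statistic, df1, df2=None, two_sided=True):
    """Recompute the p value implied by a reported statistic and its df."""
    if test == "t":
        return 2 * t.sf(abs(statistic), df1) if two_sided else t.sf(statistic, df1)
    if test == "F":
        return f.sf(statistic, df1, df2)
    if test == "chi2":
        return chi2.sf(statistic, df1)
    raise ValueError(f"unknown test: {test}")

# e.g. a result reported as "t(28) = 2.05, p < .04" would be flagged,
# because the recomputed two-sided p value is just under .05, not .04
print(recomputed_p("t", 2.05, 28))
```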

  12. Preparation and investigation of dc conductivity and relative permeability of epoxy/Li-Ni-Zn ferrite composites

    NASA Astrophysics Data System (ADS)

    Darwish, M. A.; Saafan, S. A.; El-Kony, D.; Salahuddin, N. A.

    2015-07-01

    Ferrite nanoparticles - having the compositions Li(x/2)(Ni0.5Zn0.5)(1-x)Fe(2+x/2)O4 (x=0, 0.2, 0.3) - have been prepared by the co-precipitation method. The prepared powders have been divided into groups and sintered at different temperatures (373 K, 1074 K and 1473 K). X-ray diffraction (XRD) analysis of all samples has confirmed the formation of the desired ferrites with crystallite sizes within the nanoscale (<100 nm). The dc conductivity, the relative permeability and the magnetization of the ferrite samples have been investigated and, according to the results, the sample Li0.15(Ni0.5Zn0.5)0.7Fe2.15O4 sintered at 1473 K has been chosen to prepare the composites. The particle size of this sample has been recalculated by using a JEOL JEM-100SX transmission electron microscope and has been found to be about 64.7 nm. Then, a pure epoxy sample and four pristine epoxy resin/Li0.15(Ni0.5Zn0.5)0.7Fe2.15O4 composites have been prepared using different ferrite contents (20, 30, 40 and 50 wt.%). These samples have been characterized by Fourier transform infrared (FTIR) spectroscopy and their dc conductivity, relative permeability and magnetization have also been investigated. The obtained results indicate that the investigated composites may be promising candidates for practical applications such as EMI suppressors and high-frequency applications.

  13. Recalculating the quasar luminosity function of the extended Baryon Oscillation Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Caditz, David M.

    2017-12-01

    Aims: The extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey provides a uniform sample of over 13 000 variability selected quasi-stellar objects (QSOs) in the redshift range 0.68

  14. Measuring VET Participation by Socioeconomic Status: An Examination of the Robustness of ABS SEIFA Measures over Time. Occasional Paper

    ERIC Educational Resources Information Center

    Lim, Patrick; Karmel, Tom

    2014-01-01

    At every five-yearly census, the Australian Bureau of Statistics (ABS) recalculates both the SEIFA (Socio-economic Indexes for Areas) indexes and also recalibrates the borders and sizes of the geographic areas from which these SEIFA measurements are derived. Further, over time, the composition of geographic areas may change, due to urban renewal…

  15. Pediatric reference intervals for general clinical chemistry components - merging of studies from Denmark and Sweden.

    PubMed

    Ridefelt, Peter; Hilsted, Linda; Juul, Anders; Hellberg, Dan; Rustad, Pål

    2018-05-28

    Reference intervals are crucial tools aiding clinicians when making medical decisions. However, for children such values are often lacking or incomplete. The present study combines data from separate pediatric reference interval studies from Denmark and Sweden in order to increase the sample size and to also include pre-school children, who were lacking in the Danish study. Results from two separate studies including 1988 healthy children and adolescents aged 6 months to 18 years were merged and recalculated. Eighteen general clinical chemistry components were measured on Abbott and Roche platforms. To facilitate commutability, the NFKK Reference Serum X was used. Age- and gender-specific pediatric reference intervals were defined by calculating the 2.5 and 97.5 percentiles. The data generated are primarily applicable to a Nordic population, but could be used by any laboratory if validated for the local patient population.

  16. Quantification of errors in ordinal outcome scales using Shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysis over a range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
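
    An illustrative sketch of the two ingredients described above: the Shannon entropy of an ordinal outcome distribution and the error percentage obtained by combining that distribution with an inter-rater noise (confusion) matrix. The mRS distribution and confusion matrix below are invented placeholders, not data from the paper.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of an outcome distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def error_percentage(true_dist, confusion):
    """Probability (%) that the assessed category differs from the true one,
    given a row-stochastic confusion matrix (rows: true, cols: assessed)."""
    true_dist = np.asarray(true_dist, dtype=float)
    confusion = np.asarray(confusion, dtype=float)
    return 100.0 * (1.0 - float((true_dist * np.diag(confusion)).sum()))

p_mrs = [0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10]    # assumed 7-level mRS split
conf = 0.8 * np.eye(7) + (0.2 / 7) * np.ones((7, 7))  # assumed rater agreement
print(shannon_entropy(p_mrs), error_percentage(p_mrs, conf))
```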

  17. SU-E-T-792: Validation of a Secondary TPS for IROC-H Recalculation of Anthropomorphic Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Howell, R; Followill, D

    2015-06-15

    Purpose: To validate a secondary treatment planning system (sTPS) for use by the Imaging & Radiation Oncology Core-Houston (IROC-H). The TPS will recalculate phantom irradiations submitted by institutions to IROC-H and compare plan results of the institution to the sTPS. Methods: In-field dosimetric data was collected by IROC-H for numerous linacs at 6, 10, 15, and 18 MV. The data was aggregated and used to define reference linac classes; each class was then modeled in the sTPS (Mobius3D) by matching the in-field characteristics. Fields used to collect IROC-H data were recreated and recalculated using Mobius3D. The same dosimetric points were measured in the recalculation and compared to the initial collection data. Additionally, a 6MV Monte Carlo beam configuration was used to compare penumbrae in the Mobius3D models. Finally, a handful of IROC-H head and neck phantoms were recalculated using Mobius3D. Results: Recalculation and quantification of differences between reference data and Mobius3D values resulted in a relative matching score of 12.45 (0 is a perfect match) for the default 6MV Mobius3D beam configuration. By adjusting beam configuration options, iterations resulted in scores of 8.45, 6.32, and 3.52, showing that customization could have a dramatic effect on beam configuration. After in-field optimization, penumbra was compared between Monte Carlo and Mobius3D for the reference fields. For open jaw fields, FWHM field widths and penumbra widths were different by <0.6 and <1mm respectively; for MLC open fields the penumbra widths were up to 1.5mm different. Phantom recalculations showed good agreement, having an average of 0.6% error per beam. Conclusion: A secondary TPS has been validated for simple irradiation geometries using reference data collected by IROC-H. The beam was customized to the reference data iteratively and resulted in a good match. This system can provide independent recalculation of phantom plans based on independent reference data.

  18. 40 CFR 61.207 - Radium-226 sampling and measurement procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... B, Method 114. (3) Calculate the mean, x 1, and the standard deviation, s 1, of the n 1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n 2 radium-226...

  19. TU-F-17A-08: The Relative Accuracy of 4D Dose Accumulation for Lung Radiotherapy Using Rigid Dose Projection Versus Dose Recalculation On Every Breathing Phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamb, J; Lee, C; Tee, S

    2014-06-15

    Purpose: To investigate the accuracy of 4D dose accumulation using projection of dose calculated on the end-exhalation, mid-ventilation, or average intensity breathing phase CT scan, versus dose accumulation performed using full Monte Carlo dose recalculation on every breathing phase. Methods: Radiotherapy plans were analyzed for 10 patients with stage I-II lung cancer planned using 4D-CT. SBRT plans were optimized using the dose calculated by a commercially-available Monte Carlo algorithm on the end-exhalation 4D-CT phase. 4D dose accumulations using deformable registration were performed with a commercially available tool that projected the planned dose onto every breathing phase without recalculation, as well as with a Monte Carlo recalculation of the dose on all breathing phases. The 3D planned dose (3D-EX), the 3D dose calculated on the average intensity image (3D-AVE), and the 4D accumulations of the dose calculated on the end-exhalation phase CT (4D-PR-EX), the mid-ventilation phase CT (4D-PR-MID), and the average intensity image (4D-PR-AVE), respectively, were compared against the accumulation of the Monte Carlo dose recalculated on every phase. Plan evaluation metrics relating to target volumes and critical structures relevant for lung SBRT were analyzed. Results: Plan evaluation metrics tabulated using 4D-PR-EX, 4D-PR-MID, and 4D-PR-AVE differed from those tabulated using Monte Carlo recalculation on every phase by an average of 0.14±0.70 Gy, -0.11±0.51 Gy, and 0.00±0.62 Gy, respectively. Deviations of between 8 and 13 Gy were observed between the 4D-MC calculations and both 3D methods for the proximal bronchial trees of 3 patients. Conclusions: 4D dose accumulation using projection without re-calculation may be sufficiently accurate compared to 4D dose accumulated from Monte Carlo recalculation on every phase, depending on institutional protocols. Use of 4D dose accumulation should be considered when evaluating normal tissue complication probabilities as well as in clinical situations where target volumes are directly inferior to mobile critical structures.

  20. Prospective Optimization with Limited Resources

    PubMed Central

    Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei

    2015-01-01

    The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their “depth of computation”) and how often they attempted to incorporate new information about the future rewards (their “recalculation period”). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation. PMID:26367309

  2. Measurement of effective air diffusion coefficients for trichloroethene in undisturbed soil cores.

    PubMed

    Bartelt-Hunt, Shannon L; Smith, James A

    2002-06-01

    In this study, we measure effective diffusion coefficients for trichloroethene in undisturbed soil samples taken from Picatinny Arsenal, New Jersey. The measured effective diffusion coefficients ranged from 0.0053 to 0.0609 cm2/s over a range of air-filled porosity of 0.23-0.49. The experimental data were compared to several previously published relations that predict diffusion coefficients as a function of air-filled porosity and porosity. A multiple linear regression analysis was developed to determine if a modification of the exponents in Millington's [Science 130 (1959) 100] relation would better fit the experimental data. The literature relations appeared to generally underpredict the effective diffusion coefficient for the soil cores studied in this work. Inclusion of a particle-size distribution parameter, d10, did not significantly improve the fit of the linear regression equation. The effective diffusion coefficient and porosity data were used to recalculate estimates of diffusive flux through the subsurface made in a previous study performed at the field site. It was determined that the method of calculation used in the previous study resulted in an underprediction of diffusive flux from the subsurface. We conclude that although Millington's [Science 130 (1959) 100] relation works well to predict effective diffusion coefficients in homogeneous soils with relatively uniform particle-size distributions, it may be inaccurate for many natural soils with heterogeneous structure and/or non-uniform particle-size distributions.

  3. [Harmonization of TSH Measurements].

    PubMed

    Takeoka, Keiko; Hidaka, Yoh; Hishinuma, Akira; Ikeda, Katsuyoshi; Okubo, Shigeo; Tsuchiya, Tatsuyuki; Hashiguchi, Teruto; Furuta, Koh; Hotta, Taeko; Matsushita, Kazuyuki; Matsumoto, Hiroyuki; Murakami, Masami; Maekawa, Masato

    2016-05-01

    The measured concentration of thyroid stimulating hormone (TSH) differs depending on the reagents used. Harmonization of TSH is crucial because the decision limits are described in current clinical practice guidelines as absolute values, e.g. 2.5 mIU/L in early pregnancy. In this study, we tried to harmonize the reported concentrations of TSH using the all-procedure trimmed mean. TSH was measured in 146 serum samples, with values ranging from 0.01 to 18.8 mIU/L, using 4 immunoassays. The concentration of TSH was highest with E test TOSOH and lowest with LUMIPULSE. The concentrations with each reagent were recalculated with the following formulas: E test TOSOH 0.855x - 0.014; ECLusys 0.993x + 0.079; ARCHITECT 1.041x - 0.010; and LUMIPULSE 1.096x - 0.015. Recalculation eliminated the between-assay discrepancy. These formulas may be used until harmonization of TSH is achieved by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC).
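
    A minimal sketch applying the recalculation formulas quoted in the record (y = a·x + b, in mIU/L); the specimen values in the example are invented for illustration.

```python
# slope/intercept pairs quoted in the record above, mapping each assay onto
# the all-procedure trimmed mean scale
RECALIBRATION = {
    "E test TOSOH": (0.855, -0.014),
    "ECLusys":      (0.993,  0.079),
    "ARCHITECT":    (1.041, -0.010),
    "LUMIPULSE":    (1.096, -0.015),
}

def harmonize_tsh(assay, measured_miu_l):
    """Recalculate an assay-specific TSH result onto the harmonized scale."""
    slope, intercept = RECALIBRATION[assay]
    return slope * measured_miu_l + intercept

# illustrative values: results for the same specimen converge after recalculation
print(harmonize_tsh("E test TOSOH", 2.60))
print(harmonize_tsh("LUMIPULSE", 2.02))
```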

  4. Anisotropy of diamagnetic susceptibility in Thassos marble: A comparison between measured and modeled data

    NASA Astrophysics Data System (ADS)

    de Wall, Helga; Bestmann, Michel; Ullemeyer, Klaus

    2000-11-01

    A study of shear zones within the calcite marble complex of the island of Thassos (Greece) shows that the low-field anisotropy of magnetic susceptibility (AMS) technique can be successfully applied to diamagnetic rocks for characterizing rock fabrics. The strain path involves both an early pure shear stage and a simple shear overprint that is documented by a transition from triaxial (neutral) to uniaxial (prolate) shapes of AMS ellipsoids. The maximum susceptibility is oriented perpendicular to the rock foliation, reflecting the preferred orientation of calcite c-axes in the protolith as well as in the mylonites. For three samples that represent different types of calcite fabrics, the AMS was recalculated from neutron and electron backscatter diffraction textural data. A comparison of the measured and modeled data shows good agreement for the orientation of the principal AMS axes and for the recalculated anisotropy data. Both measured and modeled data sets reflect the change from neutral to distinct prolate ellipsoids during progressive deformation.

  5. Marine sediment sample preparation for analysis for low concentrations of fine detrital gold

    USGS Publications Warehouse

    Clifton, H. Edward; Hubert, Arthur; Phillips, R. Lawrence

    1967-01-01

    Analyses by atomic absorption for detrital gold in more than 2,000 beach, offshore, marine-terrace, and alluvial sands from southern Oregon have shown that the values determined from raw or unconcentrated sediment containing small amounts of gold are neither reproducible nor representative of the initial sample. This difficulty results from a 'particle sparsity effect', whereby the analysis for gold in a given sample depends more upon the occurrence of random flakes of gold in the analyzed portion than upon the actual gold content of the sample. The particle sparsity effect can largely be eliminated by preparing a gold concentrate prior to analysis. A combination of sieve, gravimetric, and magnetic separation produces a satisfactory concentrate that yields accurate and reproducible analyses. In concentrates of nearly every marine and beach sand studied, the gold occurs in the nonmagnetic fraction smaller than 0.124 mm and with a specific gravity greater than 3.3. The grain size of gold in stream sediments is somewhat more variable. Analysis of concentrates provides a means of greatly increasing the sensitivity of the analytical technique in relation to the initial sample. Gold rarely exceeds 1 part per million in even the richest black sand analyzed; to establish the distribution of gold (and platinum) in marine sediments and its relationship to source and environmental factors, one commonly needs to know their content to the part per billion range. Analysis of a concentrate and recalculation to the value in the initial sample permits this degree of sensitivity.
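
    The final step described above, recalculating a concentrate assay back to the value in the initial sample, is a simple mass-balance ratio; a sketch, assuming essentially complete recovery of gold into the concentrate and using invented masses and grade.

```python
def gold_in_original_sample(concentrate_ppb, concentrate_mass_g, original_mass_g):
    """Recalculate a concentrate assay to the whole-sediment basis, assuming
    essentially all gold in the original sample reports to the concentrate."""
    return concentrate_ppb * concentrate_mass_g / original_mass_g

# e.g. a 5 g concentrate prepared from a 500 g sand sample assaying 800 ppb Au
# corresponds to 8 ppb Au in the original sample, a 100-fold gain in sensitivity
print(gold_in_original_sample(concentrate_ppb=800.0,
                              concentrate_mass_g=5.0,
                              original_mass_g=500.0))
```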

  6. 34 CFR 686.35 - Recalculation of TEACH Grant award amounts.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 4 2014-07-01 2014-07-01 false Recalculation of TEACH Grant award amounts. 686.35 Section 686.35 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION (CONTINUED) TEACHER EDUCATION ASSISTANCE FOR COLLEGE AND HIGHER...

  7. 34 CFR 686.35 - Recalculation of TEACH Grant award amounts.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 4 2012-07-01 2012-07-01 false Recalculation of TEACH Grant award amounts. 686.35 Section 686.35 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION (CONTINUED) TEACHER EDUCATION ASSISTANCE FOR COLLEGE AND HIGHER...

  8. 34 CFR 686.35 - Recalculation of TEACH Grant award amounts.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 4 2013-07-01 2013-07-01 false Recalculation of TEACH Grant award amounts. 686.35 Section 686.35 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION (CONTINUED) TEACHER EDUCATION ASSISTANCE FOR COLLEGE AND HIGHER...

  9. 34 CFR 686.35 - Recalculation of TEACH Grant award amounts.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 4 2011-07-01 2011-07-01 false Recalculation of TEACH Grant award amounts. 686.35 Section 686.35 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION (CONTINUED) TEACHER EDUCATION ASSISTANCE FOR COLLEGE AND HIGHER...

  10. 34 CFR 686.35 - Recalculation of TEACH Grant award amounts.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Recalculation of TEACH Grant award amounts. 686.35 Section 686.35 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION TEACHER EDUCATION ASSISTANCE FOR COLLEGE AND HIGHER EDUCATION...

  11. Size distribution of ions in atmospheric aerosols

    NASA Astrophysics Data System (ADS)

    Krivácsy, Z.; Molnár, Á.

    The aim of this paper is to present data about the concentration and size distribution of ions in atmospheric aerosol under slightly polluted urban conditions in Hungary. Concentrations of inorganic cations (ammonium, sodium, potassium, calcium, magnesium), inorganic anions (sulfate, nitrate, chloride, carbonate) and organic acids (oxalic, malonic, succinic, formic and acetic acid) were determined for 8 particle size ranges between 0.0625 and 16 μm. As was the case for ammonium, sulfate and nitrate, the organic acids were mostly found in the fine particle size range. Potassium and chloride were rather uniformly distributed between fine and coarse particles. Sodium, calcium, magnesium and carbonate were observed practically only in the coarse mode. The results obtained for the summer and the winter half-year were also compared. The mass concentrations were recalculated in equivalents, and the ion balance was found to be reasonable in most cases. Measurement of the pH of the aerosol extracts indicates that the aerosol is acidic in the fine mode, but alkaline in the coarse particle size range.
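
    A minimal sketch of the recalculation to equivalents mentioned above (equivalents = mass / equivalent weight, with equivalent weight = molar mass / |charge|); the concentrations in the example are invented, and only a subset of the measured ions is included.

```python
# equivalent weights in g per equivalent (molar mass divided by charge)
EQUIV_WEIGHT = {
    "NH4+":   18.04 / 1,
    "Na+":    22.99 / 1,
    "Ca2+":   40.08 / 2,
    "SO4^2-": 96.06 / 2,
    "NO3-":   62.00 / 1,
    "Cl-":    35.45 / 1,
}

def to_equivalents(ion, mass_ug_m3):
    """Convert a mass concentration (ug/m3) to nanoequivalents per m3."""
    return mass_ug_m3 / EQUIV_WEIGHT[ion] * 1e3   # ueq -> neq

# assumed concentrations; for a reasonable ion balance the sums should be close
cations = (to_equivalents("NH4+", 2.0) + to_equivalents("Na+", 0.5)
           + to_equivalents("Ca2+", 0.8))
anions = (to_equivalents("SO4^2-", 5.0) + to_equivalents("NO3-", 2.5)
          + to_equivalents("Cl-", 0.4))
print(cations, anions)
```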

  12. MO-FG-202-05: Identifying Treatment Planning System Errors in IROC-H Phantom Irradiations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Followill, D; Howell, R

    Purpose: Treatment Planning System (TPS) errors can affect large numbers of cancer patients receiving radiation therapy. Using an independent recalculation system, the Imaging and Radiation Oncology Core-Houston (IROC-H) can identify institutions that have not sufficiently modelled their linear accelerators in their TPS model. Methods: Linear accelerator point measurement data from IROC-H’s site visits was aggregated and analyzed from over 30 linear accelerator models. Dosimetrically similar models were combined to create “classes”. The class data was used to construct customized beam models in an independent treatment dose verification system (TVS). Approximately 200 head and neck phantom plans from 2012 to 2015 were recalculated using this TVS. Comparison of plan accuracy was evaluated by comparing the measured dose to the institution’s TPS dose as well as the TVS dose. In cases where the TVS was more accurate than the institution by an average of >2%, the institution was identified as having a non-negligible TPS error. Results: Of the ∼200 recalculated plans, the average improvement using the TVS was ∼0.1%; i.e. the recalculation, on average, slightly outperformed the institution’s TPS. Of all the recalculated phantoms, 20% were identified as having a non-negligible TPS error. Fourteen plans failed current IROC-H criteria; the average TVS improvement of the failing plans was ∼3% and 57% were found to have non-negligible TPS errors. Conclusion: IROC-H has developed an independent recalculation system to identify institutions that have considerable TPS errors. A large number of institutions were found to have non-negligible TPS errors. Even institutions that passed IROC-H criteria could be identified as having a TPS error. Resolution of such errors would improve dose delivery for a large number of IROC-H phantoms and ultimately, patients.

  13. 34 CFR 690.80 - Recalculation of a Federal Pell Grant award.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... such recalculations must take into account any changes in the cost of attendance. If such a policy is... Grant award for the new payment period taking into account any changes in the cost of attendance. (2)(i... begun attendance in all of his or her classes for that payment period, the institution may (but is not...

  14. An Analysis of the Published Mineral Resource Estimates of the Haji-Gak Iron Deposit, Afghanistan

    USGS Publications Warehouse

    Sutphin, D.M.; Renaud, K.M.; Drew, L.J.

    2011-01-01

    The Haji-Gak iron deposit of eastern Bamyan Province, eastern Afghanistan, was studied extensively and resource calculations were made in the 1960s by Afghan and Russian geologists. Recalculation of the resource estimates verifies the original estimates for categories A (in-place resources known in detail), B (in-place resources known in moderate detail), and C₁ (in-place resources estimated on sparse data), totaling 110.8 Mt, or about 6% of the resources, as being supportable for the methods used in the 1960s. C₂ (based on a loose exploration grid with little data) resources are based on one ore grade from one drill hole, and P₂ (prognosis) resources are based on field observations, field measurements, and an ore grade derived from averaging grades from three better sampled ore bodies. C₂ and P₂ resources are 1,659.1 Mt or about 94% of the total resources in the deposit. The vast P₂ resources have not been drilled or sampled to confirm their extent or quality. The purpose of this article is to independently evaluate the resources of the Haji-Gak iron deposit by using the available geologic and mineral resource information including geologic maps and cross sections, sampling data, and the analog-estimating techniques of the 1960s to determine the size and tenor of the deposit. © 2011 International Association for Mathematical Geology (outside the USA).

  15. Regional distribution patterns of chemical parameters in surface sediments of the south-western Baltic Sea and their possible causes

    NASA Astrophysics Data System (ADS)

    Leipe, T.; Naumann, M.; Tauber, F.; Radtke, H.; Friedland, R.; Hiller, A.; Arz, H. W.

    2017-12-01

    This study presents selected results of a sediment geochemical mapping program of German territorial waters in the south-western Baltic Sea. The field work was conducted mainly during the early 2000s. Due to the strong variability of sediment types in the study area, it was decided to separate and analyse the fine fraction (<63 μm, mud) from more than 600 surficial samples, combined with recalculations for the bulk sediment. For the contents of total organic carbon (TOC) and selected elements (P, Hg), the regional distribution maps show strong differences between the analysed fine fraction and the recalculated total sediment. Seeing that mud contents vary strongly between 0 and 100%, this can be explained by the well-known grain-size effect. To avoid (or at least minimise) this effect, further interpretations were based on the data for the fine fraction alone. Lateral transport from the large Oder River estuary combined with high abundances and activities of benthic fauna on the shallow-water Oder Bank (well sorted fine sand) could be some main causes for hotspots identified in the fine-fraction element distribution. The regional pattern of primary production as the main driver of nutrient element fixation (C, N, P, Si) was found to be only weakly correlated with, for example, the TOC distribution in the fine fraction. This implies that, besides surface sediment dynamics, local conditions (e.g. benthic secondary production) also have strong impacts. To the best of the authors' knowledge, there is no comparable study with geochemical analyses of the fine fraction of marine sediments to this extent (13,600 km2) and coverage (between 600 and 800 data points) in the Baltic Sea. This aspect proved pivotal in confidently pinpointing geochemical "anomalies" in surface sediments of the south-western Baltic Sea.

  16. Decompressive Surgery for the Treatment of Malignant Infarction of the Middle Cerebral Artery (DESTINY): a randomized, controlled trial.

    PubMed

    Jüttler, Eric; Schwab, Stefan; Schmiedek, Peter; Unterberg, Andreas; Hennerici, Michael; Woitzik, Johannes; Witte, Steffen; Jenetzky, Ekkehart; Hacke, Werner

    2007-09-01

    Decompressive surgery (hemicraniectomy) for life-threatening massive cerebral infarction represents a controversial issue in neurocritical care medicine. We report here the 30-day mortality and 6- and 12-month functional outcomes from the DESTINY trial. DESTINY (ISRCTN01258591) is a prospective, multicenter, randomized, controlled, clinical trial based on a sequential design that used mortality after 30 days as the first end point. When this end point was reached, patient enrollment was interrupted as per protocol until recalculation of the projected sample size was performed on the basis of the 6-month outcome (primary end point=modified Rankin Scale score, dichotomized to 0 to 3 versus 4 to 6). All analyses were based on intention to treat. A statistically significant reduction in mortality was reached after 32 patients had been included: 15 of 17 (88%) patients randomized to hemicraniectomy versus 7 of 15 (47%) patients randomized to conservative therapy survived after 30 days (P=0.02). After 6 and 12 months, 47% of patients in the surgical arm versus 27% of patients in the conservative treatment arm had a modified Rankin Scale score of 0 to 3 (P=0.23). DESTINY showed that hemicraniectomy reduces mortality in large hemispheric stroke. With 32 patients included, the primary end point failed to demonstrate statistical superiority of hemicraniectomy, and the projected sample size was calculated to 188 patients. Despite this failure to meet the primary end point, the steering committee decided to terminate the trial in light of the results of the joint analysis of the 3 European hemicraniectomy trials.

  17. The use of megavoltage CT (MVCT) images for dose recomputations

    NASA Astrophysics Data System (ADS)

    Langen, K. M.; Meeks, S. L.; Poole, D. O.; Wagner, T. H.; Willoughby, T. R.; Kupelian, P. A.; Ruchala, K. J.; Haimerl, J.; Olivera, G. H.

    2005-09-01

    Megavoltage CT (MVCT) images of patients are acquired daily on a helical tomotherapy unit (TomoTherapy, Inc., Madison, WI). While these images are used primarily for patient alignment, they can also be used to recalculate the treatment plan for the patient anatomy of the day. The use of MVCT images for dose computations requires a reliable CT number to electron density calibration curve. In this work, we tested the stability of the MVCT numbers by determining the variation of this calibration with spatial arrangement of the phantom, time and MVCT acquisition parameters. The two calibration curves that represent the largest variations were applied to six clinical MVCT images for recalculations to test for dosimetric uncertainties. Among the six cases tested, the largest difference in any of the dosimetric endpoints was 3.1% but more typically the dosimetric endpoints varied by less than 2%. Using an average CT to electron density calibration and a thorax phantom, a series of end-to-end tests were run. Using a rigid phantom, recalculated dose volume histograms (DVHs) were compared with plan DVHs. Using a deformed phantom, recalculated point dose variations were compared with measurements. The MVCT field of view is limited and the image space outside this field of view can be filled in with information from the planning kVCT. This merging technique was tested for a rigid phantom. Finally, the influence of the MVCT slice thickness on the dose recalculation was investigated. The dosimetric differences observed in all phantom tests were within the range of dosimetric uncertainties observed due to variations in the calibration curve. The use of MVCT images allows the assessment of daily dose distributions with an accuracy that is similar to that of the initial kVCT dose calculation.
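
    The dose recalculation described above hinges on a CT number to electron density calibration curve; a minimal sketch of applying such a curve by piecewise-linear interpolation. The calibration points and image values below are invented placeholders, not the phantom data from the study.

```python
import numpy as np

def ct_to_electron_density(ct_numbers, calibration_hu, calibration_ed):
    """Map CT numbers to relative electron density with a piecewise-linear
    calibration curve, as used when recalculating dose on MVCT images."""
    return np.interp(ct_numbers, calibration_hu, calibration_ed)

# assumed calibration points (HU -> electron density relative to water)
cal_hu = [-1000, -500, 0, 500, 1200]
cal_ed = [0.0, 0.5, 1.0, 1.35, 1.8]

image_values = np.array([-950.0, -120.0, 40.0, 300.0])   # invented voxel values
print(ct_to_electron_density(image_values, cal_hu, cal_ed))
```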

  18. Budget Update: 2009-10 Operating Grant Estimates--What Changed between March Estimates and the Autumn Recalculation? BCTF Research Report. Section V. 2010-EF-01

    ERIC Educational Resources Information Center

    White, Margaret

    2010-01-01

    In March of each year, the ministry publishes the Operating Grants Manual showing estimated funding allocations for school districts for the upcoming school year. These estimates are based on enrolment projections. On September 30 of the new school year, enrolment is counted and the grants are recalculated based on actual enrolment. The ministry…

  19. Simpson's paradox visualized: The example of the Rosiglitazone meta-analysis

    PubMed Central

    Rücker, Gerta; Schumacher, Martin

    2008-01-01

    Background Simpson's paradox is sometimes referred to in the areas of epidemiology and clinical research. It can also be found in meta-analysis of randomized clinical trials. However, though readers are able to recalculate examples from hypothetical as well as real data, they may have problems easily figuring out where it emerges from. Method First, two kinds of plots are proposed to illustrate the phenomenon graphically, a scatter plot and a line graph. Subsequently, these can be overlaid, resulting in an overlay plot. The plots are applied to the recent large meta-analysis of adverse effects of rosiglitazone on myocardial infarction and to an example from the literature. A large set of meta-analyses is screened for further examples. Results As noted earlier by others, occurrence of Simpson's paradox in the meta-analytic setting, if present, is associated with imbalance of treatment arm size. This is well illustrated by the proposed plots. The rosiglitazone meta-analysis shows an effect reversion if all trials are pooled. In a sample of 157 meta-analyses, nine showed an effect reversion after pooling, though non-significant in all cases. Conclusion The plots give insight into how the imbalance of trial arm size works as a confounder, thus producing Simpson's paradox. Readers can see why meta-analytic methods must be used and what is wrong with simple pooling. PMID:18513392
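
    A small numeric sketch of the phenomenon: with strongly imbalanced arm sizes, both invented trials favour treatment within-trial, yet naive pooling of patients across trials reverses the direction of the odds ratio, which is exactly why stratified meta-analytic pooling is needed.

```python
def odds_ratio(events_t, n_t, events_c, n_c):
    """Plain odds ratio of treatment vs control (no continuity correction)."""
    return (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))

# two invented trials with imbalanced arms: (events_treat, n_treat, events_ctrl, n_ctrl)
trials = [
    (1, 40, 12, 400),     # low-risk trial, small treatment arm: OR ~ 0.83
    (140, 400, 16, 40),   # high-risk trial, large treatment arm: OR ~ 0.81
]
for ev_t, n_t, ev_c, n_c in trials:
    print("within-trial OR:", round(odds_ratio(ev_t, n_t, ev_c, n_c), 2))

# naive pooling of patients ignores the trial factor and flips the direction
ev_t = sum(t[0] for t in trials); n_t = sum(t[1] for t in trials)
ev_c = sum(t[2] for t in trials); n_c = sum(t[3] for t in trials)
print("naively pooled OR:", round(odds_ratio(ev_t, n_t, ev_c, n_c), 2))   # ~ 6.9
```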

  20. The International Standard for Oxytetracycline

    PubMed Central

    Humphrey, J. H.; Lightbown, J. W.; Mussett, M. V.; Perry, W. L. M.

    1955-01-01

    The first attempt to set up an international standard for oxytetracycline, using oxytetracycline hydrochloride, failed because of difficulties in obtaining a preparation whose moisture content was uniform after distribution into ampoules. A preparation of dihydrate of oxytetracycline base was obtained instead, and was compared in an international collaborative assay with a sample of oxytetracycline hydrochloride, which was the current working standard of Chas. Pfizer & Co., Inc., USA. The results of the collaborative assay showed that the potency of the dihydrate was uniform, and that it was a suitable preparation for use as the International Standard. Evidence was obtained, however, that the reference preparation at the time of examination was less potent than had been originally supposed, and that it was hydrated. The potency of the proposed international standard was recalculated after allowance for water in the reference preparation, and the resulting biological potency agreed well with that to be expected on the basis of the physicochemical properties of the preparation. It was agreed, therefore, that the recalculated values should be used, and the preparation of oxytetracycline base dihydrate used in the collaborative assay is established as the International Standard for Oxytetracycline with a potency of 900 International Units per mg. PMID:13284563

  1. Assessing the Effect of Stellar Companions to Kepler Objects of Interest

    NASA Astrophysics Data System (ADS)

    Hirsch, Lea; Ciardi, David R.; Howard, Andrew

    2017-01-01

    Unknown stellar companions to Kepler planet host stars dilute the transit signal, causing the planetary radii to be underestimated. We report on the analysis of 165 stellar companions detected with high-resolution imaging to be within 2" of 159 KOI host stars. The majority of the planets and planet candidates in these systems have nominal radii smaller than 6 R_Earth. Using multi-filter photometry on each companion, we assess the likelihood that the companion is bound and estimate its stellar properties, including stellar radius and flux. We then recalculate the planet radii in these systems, determining how much each planet's size is underestimated if it is assumed to 1) orbit the primary star, 2) orbit the companion star, or 3) be equally likely to orbit either star in the system. We demonstrate the overall effect of unknown stellar companions on our understanding of Kepler planet sizes.
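
    A sketch of the radius recalculation described above, assuming the standard transit-dilution correction: the observed depth is the true depth scaled by the host star's share of the blended flux, and if the planet orbits the companion the depth must also be referred to the companion's radius. The fluxes and radii in the example are invented.

```python
import math

def corrected_radius(rp_obs, f_primary, f_companion, r_primary, r_companion,
                     host="primary"):
    """Correct an observed planet radius for flux dilution by an unresolved companion.

    rp_obs was measured assuming all light comes from the primary. If the planet
    orbits the primary, only the dilution factor applies; if it orbits the
    companion, the transit depth must also be rescaled to the companion's radius.
    """
    total = f_primary + f_companion
    if host == "primary":
        return rp_obs * math.sqrt(total / f_primary)
    return rp_obs * (r_companion / r_primary) * math.sqrt(total / f_companion)

# assumed system: companion contributing 20% of the blended flux
print(corrected_radius(2.0, f_primary=1.0, f_companion=0.25,
                       r_primary=1.0, r_companion=0.6, host="primary"))
print(corrected_radius(2.0, f_primary=1.0, f_companion=0.25,
                       r_primary=1.0, r_companion=0.6, host="companion"))
```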

  2. Comparison of Acuros (AXB) and Anisotropic Analytical Algorithm (AAA) for dose calculation in treatment of oesophageal cancer: effects on modelling tumour control probability.

    PubMed

    Padmanaban, Sriram; Warren, Samantha; Walsh, Anthony; Partridge, Mike; Hawkins, Maria A

    2014-12-23

    To investigate systematic changes in dose arising when treatment plans optimised using the Anisotropic Analytical Algorithm (AAA) are recalculated using Acuros XB (AXB) in patients treated with definitive chemoradiotherapy (dCRT) for locally advanced oesophageal cancers. We have compared treatment plans created using AAA with those recalculated using AXB. Although the Anisotropic Analytical Algorithm (AAA) is currently more widely used in clinical routine, Acuros XB (AXB) has been shown to more accurately calculate the dose distribution, particularly in heterogeneous regions. Studies to predict clinical outcome should be based on modelling the dose delivered to the patient as accurately as possible. CT datasets from ten patients were selected for this retrospective study. VMAT (Volumetric modulated arc therapy) plans with 2 arcs, collimator rotation ± 5-10° and dose prescription 50 Gy / 25 fractions were created using Varian Eclipse (v10.0). The initial dose calculation was performed with AAA, and AXB plans were created by re-calculating the dose distribution using the same number of monitor units (MU) and multileaf collimator (MLC) files as the original plan. The difference in calculated dose to organs at risk (OAR) was compared using dose-volume histogram (DVH) statistics and p values were calculated using the Wilcoxon signed rank test. The potential clinical effect of dosimetric differences in the gross tumour volume (GTV) was evaluated using three different TCP models from the literature. PTV Median dose was apparently 0.9 Gy lower (range: 0.5 Gy - 1.3 Gy; p < 0.05) for VMAT AAA plans re-calculated with AXB and GTV mean dose was reduced by on average 1.0 Gy (0.3 Gy -1.5 Gy; p < 0.05). An apparent difference in TCP of between 1.2% and 3.1% was found depending on the choice of TCP model. OAR mean dose was lower in the AXB recalculated plan than the AAA plan (on average, dose reduction: lung 1.7%, heart 2.4%). Similar trends were seen for CRT plans. Differences in dose distribution are observed with VMAT and CRT plans recalculated with AXB particularly within soft tissue at the tumour/lung interface, where AXB has been shown to more accurately represent the true dose distribution. AAA apparently overestimates dose, particularly the PTV median dose and GTV mean dose, which could result in a difference in TCP model parameters that reaches clinical significance.

  3. Stability Analysis of Active Landslide Region in Gerze (Sinop), NW Turkey

    NASA Astrophysics Data System (ADS)

    Çellek, S.; Bulut, F.

    2009-04-01

    Landslides occurring in Turkey cause loss of life and property, as in many other countries of the world. In Turkey, and especially in the Black Sea region, landslides are investigated at the scale of villages, provinces and cities. In this study, the town of Gerze in Sinop, located on the western Black Sea coast, was chosen as the study area. The study area has regions sensitive to landslides because of its geology, geomorphology and climate conditions. Landslides occur due to heavy rains and snowmelt in springtime. Recent landslides occurring in coastal areas also influenced the choice of the Gerze area. In order to investigate the landslides occurring in Gerze, field and laboratory studies were carried out. Thirty sample locations were chosen, at a density of 5 samples per km². After the landslide areas were delineated, disturbed and undisturbed samples were taken for laboratory experiments and clay content determination. Based on the field studies and laboratory experiments, five landslides were identified, called Deniz Feneri, Zenginler Sitesi, Bedre, Mezbahane and Uçuk. These landslides are still active and their slopes are unstable, so tension cracks are still seen behind the landslide main scarps. Of them, the Uçuk landslide has two different secondary slip surfaces and is a reactivated landslide. Springs are observed on both slip surfaces of the Uçuk landslide. The Mezbahane landslide has a circular slip plane and a tongue shape. The Deniz Feneri and Zenginler Sitesi landslides show different types of activity and water content, so they may be classified as complex. The Deniz Feneri landslide has tension cracks between 60 cm and 80 cm in depth. The Bedre landslide has a half-moon shape. Safety factors for the Deniz Feneri and Uçuk landslides were calculated from GPS measurements with the Stable5 program according to the Janbu and Bishop methods, giving values between 0.489-0.418 and 0.635-0.608, respectively. Since these landslides were affected negatively by the Samsun-Sinop highway, loading and a brook, these negative effects were eliminated and the safety factors recalculated; the recalculated safety factors are between 0.855-0.889 and 0.976-0.905, respectively. It was determined that water content and loads have affected these landslides negatively. The study area consists of sedimentary rocks, and most of the landslides occurred in weathered soil. Geotechnical properties of the soil samples collected are: specific unit weight between 2.60 and 2.80 g/cm³, water content between 15% and 33%, natural unit weight between 2.610 and 2.09 g/cm³, dry unit weight between 1.28 and 1.86 g/cm³, and porosity between 29% and 61%. The soil samples contain 27.49% clay, 29.92% silt, 11.08% sand, and 11.33% gravel based on grain size distribution. Soil samples have liquid limit values between 36% and 75% and plasticity index values between 13% and 45%; the soils of the study area show high to very high plasticity and a solid to very solid consistency. According to the USCS, most of the soil samples were classified as fat clay. The clays can be classified as normal and non-active, and their swelling potential as medium to high. Cohesion of the soil samples is between 0.027 and 0.579 kg/cm², internal friction angles are between 29.5° and 7.53°, and free compressive strength is between 1.89 and 5.5 kg/cm².
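
    Not the Bishop/Janbu slice computation used in the study, but a much simpler infinite-slope sketch of a factor-of-safety calculation that illustrates the reported effect of water on stability; the soil parameters and geometry below are invented, not the values measured at Gerze.

```python
import math

def infinite_slope_fs(c_kpa, phi_deg, gamma_kn_m3, depth_m, beta_deg,
                      m_water=0.0, gamma_w=9.81):
    """Factor of safety for an infinite slope (planar failure surface).

    FS = (c' + (gamma*z*cos^2(beta) - u) * tan(phi')) / (gamma*z*sin(beta)*cos(beta)),
    with pore pressure u = m * gamma_w * z * cos^2(beta) for seepage parallel to
    the slope, m being the saturated fraction of the failure depth.
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    u = m_water * gamma_w * depth_m * math.cos(beta) ** 2
    resisting = c_kpa + (gamma_kn_m3 * depth_m * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma_kn_m3 * depth_m * math.sin(beta) * math.cos(beta)
    return resisting / driving

# assumed slope: weak clayey soil, 15 degree slope, 4 m deep failure surface
print(infinite_slope_fs(c_kpa=5.0, phi_deg=12.0, gamma_kn_m3=20.0,
                        depth_m=4.0, beta_deg=15.0, m_water=0.0))   # dry
print(infinite_slope_fs(c_kpa=5.0, phi_deg=12.0, gamma_kn_m3=20.0,
                        depth_m=4.0, beta_deg=15.0, m_water=1.0))   # saturated
```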

  4. A comparison of gas geochemistry of fumaroles in the 1912 ash-flow sheet and on active stratovolcanoes, Katmai National Park, Alaska

    USGS Publications Warehouse

    Sheppard, D.S.; Janik, C.J.; Keith, T.E.C.

    1992-01-01

    Fumarolic gas samples collected in 1978 and 1979 from the stratovolcanoes Mount Griggs, Mount Mageik, and the 1953-68 SW Trident cone in Katmai National Park, Alaska, have been analysed and the results presented here. Comparison with recalculated analyses of samples collected from the Valley of Ten Thousand Smokes (VTTS) in 1917 and 1919 demonstrates differences between gases from the short-lived VTTS fumaroles, which were not directly magma related, and the fumaroles on the volcanic peaks. Fumarolic gases of Mount Griggs have an elevated total He content, suggesting a more direct deep crustal or mantle source for these gases than those from the other volcanoes. © 1992.

  5. Robustness of the Voluntary Breath-Hold Approach for the Treatment of Peripheral Lung Tumors Using Hypofractionated Pencil Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dueck, Jenny, E-mail: jenny.dueck@psi.ch; Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI; Niels Bohr Institute, University of Copenhagen, Copenhagen

    Purpose: The safe clinical implementation of pencil beam scanning (PBS) proton therapy for lung tumors is complicated by the delivery uncertainties caused by breathing motion. The purpose of this feasibility study was to investigate whether a voluntary breath-hold technique could limit the delivery uncertainties resulting from interfractional motion. Methods and Materials: Data from 15 patients with peripheral lung tumors previously treated with stereotactic radiation therapy were included in this study. The patients had 1 computed tomographic (CT) scan in voluntary breath-hold acquired before treatment and 3 scans during the treatment course. PBS proton treatment plans with 2 fields (2F) and 3 fields (3F), respectively, were calculated based on the planning CT scan and subsequently recalculated on the 3 repeated CT scans. Recalculated plans were considered robust if the V95% (volume receiving ≥95% of the prescribed dose) of the gross target volume (GTV) was within 5% of what was expected from the planning CT data throughout the simulated treatment. Results: A total of 14/15 simulated treatments for both 2F and 3F met the robustness criteria. Reduced V95% was associated with baseline shifts (2F, P=.056; 3F, P=.008) and tumor size (2F, P=.025; 3F, P=.025). Smaller tumors with large baseline shifts were also at risk for reduced V95% (interaction term baseline/size: 2F, P=.005; 3F, P=.002). Conclusions: The breath-hold approach is a realistic clinical option for treating lung tumors with PBS proton therapy. Potential risk factors for reduced V95% are small targets in combination with large baseline shifts. On the basis of these results, the baseline shift of the tumor should be monitored (eg, through image guided therapy), and appropriate measures should be taken accordingly. The intrafractional motion needs to be investigated to confirm that the breath-hold approach is robust.
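
    A minimal sketch of the robustness criterion described above (the GTV V95% of a recalculated plan staying within 5%, interpreted here as percentage points, of the value expected from the planning CT); the voxel dose arrays are synthetic, not patient data.

        # Hedged sketch of the robustness check; the voxel dose arrays are synthetic.
        import numpy as np

        def v95(gtv_dose, prescribed_dose):
            """Percentage of GTV voxels receiving at least 95% of the prescribed dose."""
            return float(np.mean(gtv_dose >= 0.95 * prescribed_dose)) * 100.0

        rng = np.random.default_rng(0)
        planning_gtv_dose = rng.normal(60.0, 0.5, size=10_000)  # Gy, hypothetical voxel doses
        repeat_gtv_dose = rng.normal(59.5, 1.0, size=10_000)    # Gy, hypothetical recalculation

        v95_plan = v95(planning_gtv_dose, prescribed_dose=57.0)
        v95_repeat = v95(repeat_gtv_dose, prescribed_dose=57.0)
        robust = (v95_plan - v95_repeat) <= 5.0  # reduction of no more than 5 percentage points
        print(f"V95 planning = {v95_plan:.1f}%, repeated CT = {v95_repeat:.1f}%, robust: {robust}")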

  6. Evaluating data worth for ground-water management under uncertainty

    USGS Publications Warehouse

    Wagner, B.J.

    1999-01-01

    A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
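
    A minimal sketch of step 4 of the framework, the net-benefit comparison across data collection budgets; all costs are hypothetical placeholders rather than output of the coupled optimization models.

        # Hedged sketch of step 4 with hypothetical costs (not optimization model output).
        baseline_cost = 1_000_000.0  # management cost under present uncertainty, $

        # data collection budget ($) -> projected management cost after sampling ($)
        projected_cost = {0: 1_000_000.0, 50_000: 920_000.0, 100_000: 870_000.0, 200_000: 850_000.0}

        alternatives = []
        for budget, cost_after in projected_cost.items():
            value_of_information = baseline_cost - cost_after    # projected reduction in cost
            net_benefit = value_of_information - budget          # worth of the monitoring strategy
            alternatives.append((net_benefit, budget))

        best_net_benefit, best_budget = max(alternatives)
        print(f"best budget: ${best_budget:,} (net benefit ${best_net_benefit:,.0f})")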

  7. Recalculation of dose for each fraction of treatment on TomoTherapy.

    PubMed

    Thomas, Simon J; Romanchikova, Marina; Harrison, Karl; Parker, Michael A; Bates, Amy M; Scaife, Jessica E; Sutcliffe, Michael P F; Burnet, Neil G

    2016-01-01

    The VoxTox study, linking delivered dose to toxicity, requires recalculation of typically 20-37 fractions per patient for nearly 2000 patients. This requires a non-interactive interface permitting batch calculation with multiple computers. Data are extracted from the TomoTherapy® archive and processed using the computational task-management system GANGA. Doses are calculated for each fraction of radiotherapy using the daily megavoltage (MV) CT images. The calculated dose cube is saved as a Digital Imaging and Communications in Medicine (DICOM) RTDOSE object, which can then be read by utilities that calculate dose-volume histograms or dose surface maps. The rectum is delineated on the daily MV images using an implementation of the Chan-Vese algorithm. On a cluster of up to 117 central processing units, dose cubes for all fractions of 151 patients took 12 days to calculate. Outlining the rectum on all slices and fractions for 151 patients took 7 h. We also present results of the Hounsfield unit (HU) calibration of TomoTherapy MV images, measured over an 8-year period, showing that the HU calibration has become less variable over time, with no large changes observed after 2011. We have developed a system for automatic recalculation of TomoTherapy dose distributions. This does not tie up the clinically needed planning system but can be run on a cluster of independent machines, enabling recalculation of delivered dose without user intervention. The use of a task-management system for automation of dose calculation and outlining enables the work to be scaled up to the level required for large studies.
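
    The study uses its own implementation of the Chan-Vese algorithm for rectum delineation; as an off-the-shelf stand-in, the sketch below runs the morphological Chan-Vese active contour from scikit-image on a synthetic test image rather than a daily MV CT slice.

        # Hedged sketch: off-the-shelf morphological Chan-Vese contour from scikit-image,
        # run on a synthetic test image rather than a daily MV CT slice.
        import numpy as np
        from skimage.segmentation import morphological_chan_vese

        # Synthetic "slice": a bright disk (stand-in for a structure) on a noisy background.
        yy, xx = np.mgrid[0:128, 0:128]
        image = 1.0 * ((yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2)
        image += np.random.default_rng(0).normal(0.0, 0.1, image.shape)

        # Evolve the level set for 100 iterations from a checkerboard initialization.
        segmentation = morphological_chan_vese(image, 100, init_level_set="checkerboard", smoothing=2)
        print("segmented pixels:", int(segmentation.sum()))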

  8. Differences between recalculated and original Dobson total ozone data from Hradec Kralove, Czechoslovakia, 1962-1990

    NASA Technical Reports Server (NTRS)

    Vanicek, Karel

    1994-01-01

    Backward reevaluation of long-term total ozone measurements from the Solar and Ozone Observatory of the Czech Hydrometeorological Institute at Hradec Kralove, Czechoslovakia, was performed for the period 1962-1990. The homogenization was carried out with respect to the calibration level of the World Primary Standard Spectrophotometer No. 83 (WPSS) by means of day-by-day recalculation of more than 25,000 individual measurements, using R-N tables reconstructed after international comparisons and regular standard lamp tests of Dobson spectrophotometer No. 74. The results showed significant differences between the recalculated data and the original data published in the bulletins Ozone Data for the World. In the period 1962-1979 they reached 10-19 D.U. (3.0-5.5%) for annual averages and up to 26 D.U. (7.0%) for monthly averages of total ozone. Such differences exceed the measurement accuracy several times over and can significantly influence the character of total ozone trends in Central Europe. The results from Hradec Kralove therefore support the calls for reevaluation of all historical Dobson total ozone data sets at individual stations of the Global Ozone Observing System.

  9. Modeling of convection, temperature distribution and dendritic growth in glass-fluxed nickel melts

    NASA Astrophysics Data System (ADS)

    Gao, Jianrong; Kao, Andrew; Bojarevics, Valdis; Pericleous, Koulis; Galenko, Peter K.; Alexandrov, Dmitri V.

    2017-08-01

    Melt flow is often quoted as the reason for a discrepancy between experiment and theory on dendritic growth kinetics at low undercoolings, but this explanation is not justified for glass-fluxed melts, where the flow field is weaker. In the present work, we modeled the thermal history, flow pattern and dendritic structure of a glass-fluxed nickel sample by magnetohydrodynamics calculations. First, the temperature distribution and flow structure in the molten and undercooled melt were simulated by reproducing the observed thermal history of the sample prior to solidification. Then the dendritic structure and surface temperature of the recalescing sample were simulated. These simulations revealed a large thermal gradient across the sample, which led to an underestimation of the real undercooling for dendritic growth in the bulk volume of the sample. By accounting for this underestimation, we recalculated the dendritic tip velocities in the glass-fluxed nickel melt using a theory of three-dimensional dendritic growth with convection and found improved agreement between experiment and theory.

  10. Application of process analytical technology for monitoring freeze-drying of an amorphous protein formulation: use of complementary tools for real-time product temperature measurements and endpoint detection.

    PubMed

    Schneid, Stefan C; Johnson, Robert E; Lewis, Lavinia M; Stärtzel, Peter; Gieseler, Henning

    2015-05-01

    Process analytical technology (PAT) and quality by design have gained importance in all areas of pharmaceutical development and manufacturing. One important method for monitoring critical product attributes and process optimization in laboratory-scale freeze-drying is manometric temperature measurement (MTM). A drawback of this innovative technology is that problems are encountered when processing highly concentrated amorphous materials, particularly protein formulations. In this study, a model solution of bovine serum albumin and sucrose was lyophilized at both conservative and aggressive primary drying conditions. Different temperature sensors were employed to monitor product temperatures. The residual moisture content at the primary drying endpoints indicated by temperature sensors and batch PAT methods was quantified from extracted sample vials. The data from temperature probes were then used to recalculate critical product parameters, and the results were compared with MTM data. The drying endpoints indicated by the temperature sensors were not suitable for endpoint indication, in contrast to the endpoints from the batch methods. The accuracy of the MTM Pice data was found to be influenced by water reabsorption. Recalculation of Rp and Pice values based on data from temperature sensors and weighed vials was possible. Overall, extensive information about critical product parameters could be obtained using data from complementary PAT tools. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  11. Stereotactic radiotherapy of intrapulmonary lesions: comparison of different dose calculation algorithms for Oncentra MasterPlan®.

    PubMed

    Troeller, Almut; Garny, Sylvia; Pachmann, Sophia; Kantz, Steffi; Gerum, Sabine; Manapov, Farkhad; Ganswindt, Ute; Belka, Claus; Söhn, Matthias

    2015-02-22

    High-accuracy dose calculation algorithms, such as Monte Carlo (MC) and collapsed cone (CC), determine dose in inhomogeneous tissue more accurately than pencil beam (PB) algorithms. However, prescription protocols based on clinical experience with PB are often used for treatment plans calculated with CC. This may lead to treatment plans with changes in field size (FS) and changes in dose to organs at risk (OAR), especially for small tumor volumes in lung tissue treated with SABR. We re-evaluated 17 3D-conformal treatment plans for small intrapulmonary lesions with a prescription of 60 Gy in fractions of 7.5 Gy to the 80% isodose. All treatment plans were initially calculated in Oncentra MasterPlan® using a PB algorithm and recalculated with CC (CCre-calc). Furthermore, a CC-based plan with coverage similar to the PB plan (CCcov) and a CC plan with relaxed coverage criteria (CCclin) were created. The plans were analyzed in terms of Dmean, Dmin, Dmax and coverage for GTV, PTV and ITV. Changes in mean lung dose (MLD), V10Gy and V20Gy were evaluated for the lungs. The re-planned CC plans were compared to the original PB plans regarding changes in total monitor units (MU) and average FS. When PB plans were recalculated with CC, the average V60Gy of GTV, ITV and PTV decreased by 13.2%, 19.9% and 41.4%, respectively. Average Dmean decreased by 9% (GTV), 11.6% (ITV) and 14.2% (PTV). Dmin decreased by 18.5% (GTV), 21.3% (ITV) and 17.5% (PTV). Dmax declined by 7.5%. PTV coverage correlated with PTV volume (p < 0.001). MLD, V10Gy, and V20Gy were significantly reduced in the CC plans. Both CCcov and CCclin had significantly increased MU and FS compared to PB. Recalculation of PB plans for small lung lesions with CC showed a strong decline in dose and coverage for GTV, ITV and PTV, and reduced dose to the lung. Thus, switching from a PB algorithm to CC while aiming to obtain similar target coverage can be associated with the application of more MU and extension of the radiotherapy fields, causing greater OAR exposure.

  12. Fred: a GPU-accelerated fast-Monte Carlo code for rapid treatment plan recalculation in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Schiavi, A.; Senzacqua, M.; Pioli, S.; Mairani, A.; Magro, G.; Molinelli, S.; Ciocca, M.; Battistoni, G.; Patera, V.

    2017-09-01

    Ion beam therapy is a rapidly growing technique for tumor radiation therapy. Ions allow for a high dose deposition in the tumor region, while sparing the surrounding healthy tissue. For this reason, the highest possible accuracy in the calculation of dose and its spatial distribution is required in treatment planning. On one hand, commonly used treatment planning software solutions adopt a simplified beam-body interaction model by remapping pre-calculated dose distributions into a 3D water-equivalent representation of the patient morphology. On the other hand, Monte Carlo (MC) simulations, which explicitly take into account all the details of the interaction of particles with human tissues, are considered to be the most reliable tool to address the complexity of mixed field irradiation in a heterogeneous environment. However, full MC calculations are not routinely used in clinical practice because they typically demand substantial computational resources. Therefore MC simulations are usually only used to check treatment plans for a restricted number of difficult cases. The advent of general-purpose GPU programming prompted the development of trimmed-down MC-based dose engines which can significantly reduce the time needed to recalculate a treatment plan with respect to standard MC codes running on CPU hardware. In this work, we report on the development of fred, a new MC simulation platform for treatment planning in ion beam therapy. The code can transport particles through a 3D voxel grid using a class II MC algorithm. Both primary and secondary particles are tracked and their energy deposition is scored along the trajectory. Effective models for particle-medium interaction have been implemented, balancing accuracy in dose deposition with computational cost. Currently, the most refined module is the transport of proton beams in water: single pencil beam dose-depth distributions obtained with fred agree with those produced by standard MC codes within 1-2% of the Bragg peak in the therapeutic energy range. A comparison with measurements taken at the CNAO treatment center shows that the lateral dose tails are reproduced within 2% in the field size factor test up to 20 cm. The tracing kernel can run on GPU hardware, achieving 10 million primaries per second on a single card. This performance allows one to recalculate a proton treatment plan with 1% of the total particles in just a few minutes.
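
    As a heavily simplified, toy-level illustration of the scoring idea only (stepping particles through a voxel grid and accumulating deposited energy), and not of the physics models implemented in fred, consider the following sketch with made-up beam parameters.

        # Toy sketch of the scoring idea only: step "particles" through a voxel grid and
        # accumulate deposited energy along the trajectory. All parameters are made up and
        # the physics is drastically simplified compared with fred.
        import numpy as np

        rng = np.random.default_rng(1)
        dose = np.zeros((50, 50, 100))        # voxel grid, arbitrary dose units
        voxel_mm, step_mm = 2.0, 1.0

        for _ in range(1000):                 # primaries
            pos = np.array([50.0, 50.0, 0.0])      # start position (mm)
            direction = np.array([0.0, 0.0, 1.0])
            energy = 100.0                         # MeV, hypothetical
            while energy > 0.0 and pos[2] < 200.0:
                de = min(energy, max(0.05, rng.normal(0.6, 0.05)))  # crude energy loss per step
                idx = tuple((pos / voxel_mm).astype(int))
                if all(0 <= i < n for i, n in zip(idx, dose.shape)):
                    dose[idx] += de            # score the deposition in the current voxel
                energy -= de
                direction[:2] += rng.normal(0.0, 0.01, size=2)      # mimic small-angle scattering
                direction /= np.linalg.norm(direction)
                pos += step_mm * direction

        print("total deposited energy:", dose.sum())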

  13. Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array.

    PubMed

    Wang, Qi; Wang, Yingmin; Zhu, Guolei

    2016-12-30

    The receiver hydrophone array is the signal front end and plays an important role in matched field processing; it usually covers the whole water column from the sea surface to the bottom, and such a large aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small aperture hydrophone array is proposed, which first decomposes the received acoustic fields into a depth function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the minimum-norm sense, and the estimated amplitudes are used to recalculate the received acoustic fields of the small aperture array, so that the recalculated fields contain more environmental information. Finally, numerous numerical experiments with three small aperture arrays are carried out in a classical shallow-water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small aperture array, so the proposed algorithm is shown to be effective.
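
    A minimal sketch of the least-squares step described above: mode amplitudes are estimated from a small-aperture array in the minimum-norm sense and then used to recalculate the field over a larger aperture. The idealized sine depth functions and array geometry are illustrative assumptions, not the paper's propagation model.

        # Hedged sketch with idealized sine depth functions (not a real propagation model):
        # estimate mode amplitudes from a small-aperture array via minimum-norm least squares,
        # then recalculate the field over a larger aperture.
        import numpy as np

        def mode_matrix(depths, n_modes, water_depth=100.0):
            # Idealized waveguide depth functions sin(m*pi*z/D), for illustration only.
            m = np.arange(1, n_modes + 1)
            return np.sin(np.outer(depths, m) * np.pi / water_depth)

        rng = np.random.default_rng(0)
        true_amps = rng.normal(size=8) + 1j * rng.normal(size=8)   # hypothetical mode amplitudes

        small_depths = np.linspace(20.0, 50.0, 6)   # small-aperture hydrophone depths (m)
        full_depths = np.linspace(5.0, 95.0, 40)    # depths spanning the water column (m)

        p_small = mode_matrix(small_depths, 8) @ true_amps   # "received" field at the small array

        # Fewer hydrophones than modes: lstsq returns the minimum-norm least-squares solution.
        est_amps, *_ = np.linalg.lstsq(mode_matrix(small_depths, 8), p_small, rcond=None)

        # The recalculated field over the full aperture carries the estimated mode content.
        p_recalc = mode_matrix(full_depths, 8) @ est_amps
        print("amplitude estimation error:", np.linalg.norm(est_amps - true_amps))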

  14. Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array

    PubMed Central

    Wang, Qi; Wang, Yingmin; Zhu, Guolei

    2016-01-01

    The receiver hydrophone array is the signal front end and plays an important role in matched field processing; it usually covers the whole water column from the sea surface to the bottom, and such a large aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small aperture hydrophone array is proposed, which first decomposes the received acoustic fields into a depth function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the minimum-norm sense, and the estimated amplitudes are used to recalculate the received acoustic fields of the small aperture array, so that the recalculated fields contain more environmental information. Finally, numerous numerical experiments with three small aperture arrays are carried out in a classical shallow-water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small aperture array, so the proposed algorithm is shown to be effective. PMID:28042828

  15. Clustered lot quality assurance sampling to assess immunisation coverage: increasing rapidity and maintaining precision.

    PubMed

    Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier

    2010-05-01

    Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans for simply and rapidly assessing programmes with high coverage targets. We calculated the sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with ≤ d unvaccinated individuals when the coverage was set at the UT (pUT) to calculate beta (1 - pUT), and the proportion of simulations with > d unvaccinated individuals when the coverage was LT% (pLT) to calculate alpha (1 - pLT). We divided N into clusters (between 5 and 10) and recalculated the errors, hypothesising that the coverage would vary among the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling these criteria: alpha ≤ 5% and beta ≤ 20% in the unclustered design; alpha ≤ 10% and beta ≤ 25% when the lots were divided into five clusters. When the interval between UT and LT was larger than 10% (e.g. 15%), we were able to select precise LQAS plans dividing the lot into five clusters with N = 50 (5 x 10) and d = 4 to evaluate programmes with a 95% coverage target and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and rapidity of conducting LQAS in the field.
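
    A minimal sketch (not the authors' code) of the simulation logic for an unclustered plan: the misclassification errors of an LQAS design (N, d) are estimated by simulating lots at the upper and lower coverage thresholds.

        # Hedged sketch, not the authors' code: estimate the misclassification errors of an
        # unclustered LQAS plan (N, d) by simulating lots at the two coverage thresholds.
        import numpy as np

        def lqas_errors(n, d, upper, lower, n_sim=10_000, seed=0):
            rng = np.random.default_rng(seed)
            unvacc_at_upper = rng.binomial(n, 1.0 - upper, size=n_sim)
            unvacc_at_lower = rng.binomial(n, 1.0 - lower, size=n_sim)
            beta = 1.0 - np.mean(unvacc_at_upper <= d)   # reject a lot that truly meets the target
            alpha = 1.0 - np.mean(unvacc_at_lower > d)   # accept a lot whose coverage is only LT
            return alpha, beta

        # Example: N = 50, d = 4, 95% coverage target, lower threshold 80% (a 15% interval).
        alpha, beta = lqas_errors(n=50, d=4, upper=0.95, lower=0.80)
        print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")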

  16. Re-Evaluation of the 1921 Peak Discharge at Skagit River near Concrete, Washington

    USGS Publications Warehouse

    Mastin, M.C.

    2007-01-01

    The peak discharge record at the U.S. Geological Survey (USGS) gaging station at Skagit River near Concrete, Washington, is a key record that has come under intense scrutiny by the scientific and lay communities in the last 4 years. A peak discharge of 240,000 cubic feet per second for the flood on December 13, 1921, was determined in 1923 by USGS hydrologist James Stewart by means of a slope-area measurement. The USGS then determined the peak discharges of three other large floods on the Skagit River (1897, 1909, and 1917) by extending the stage-discharge rating through the 1921 flood measurement. The 1921 estimate of peak discharge was recalculated by Flynn and Benson of the USGS after a channel roughness verification was completed based on the 1949 flood on the Skagit River. The 1949 recalculation indicated that the peak discharge probably was 6.2 percent lower than Stewart's original estimate, but the USGS did not officially change the peak discharge from Stewart's estimate because the change was not more than 10 percent (the USGS guideline for revising peak flows) and the estimate already had error bands of 15 percent. All these flood peaks are now being used by the U.S. Army Corps of Engineers to determine the 100-year flood discharge for the Skagit River Flood Study, so any method to confirm or improve the 1921 peak discharge estimate is warranted. During the last 4 years, two floods have occurred on the Skagit River (2003, 2006) that have enabled the USGS to collect additional data, do further analysis, and yet again re-evaluate the 1921 peak discharge estimate. Since 1949, an island/bar in the study reach has reforested itself. This has complicated the flow hydraulics and made the most recent recalculation of the 1921 flood, based on a channel roughness verification using the 2003 and 2006 flood data, less reliable. However, this recent recalculation did indicate that the original peak-discharge calculation by Stewart may be high, and it added to a body of evidence indicating that a revision of the 1921 peak discharge estimate is appropriate. The USGS has determined that a lower peak-discharge estimate (5.0 percent lower), similar to the 1949 estimate, is most appropriate based on (1) a recalculation of the 1921 flood using a channel roughness verification from the 1949 flood data, (2) a recalculation of the 1921 flood using a channel roughness verification from the 2003 and 2006 flood data, and (3) straight-line extension of the stage-discharge relation at the gage based on current-meter discharge measurements. Given the significance of the 1921 flood peak, revising the estimate is appropriate even though the change is less than the 10-percent guideline established by the USGS for revision. Revising the peak is warranted because all work subsequent to 1921 points to the 1921 peak being lower than originally published.

  17. Charge Exchange in Slow Collisions of O+ with He

    NASA Astrophysics Data System (ADS)

    Zhao, L. B.; Joseph, D. C.; Saha, B. C.; Lebermann, H. P.; Funke, P.; Buenker, R. J.

    2009-03-01

    A comparative study is reported for charge transfer in collisions of O^+ with He using the fully quantal and semiclassical molecular-orbital close-coupling (MOCC) approaches in the adiabatic representation. The electron capture processes O^+(^4S^o, ^2D^o, ^2P^o) + He -> O(^3P) + He^+ are recalculated. The semiclassical MOCC approach was examined by a detailed comparison of cross sections and transition probabilities from both the fully quantal and semiclassical MOCC approaches. The discrepancies reported previously between the semiclassical and the quantal MOCC cross sections may be attributed to insufficient step-size resolution in the semiclassical calculations. Our results are also compared with the experimental cross sections, and good agreement is found. This work is supported by NSF, CREST program (Grant#0630370).

  18. Platinum-gold nanoclusters as catalyst for direct methanol fuel cells.

    PubMed

    Giorgi, L; Giorgi, R; Gagliardi, S; Serra, E; Alvisi, M; Signore, M A; Piscopiello, E

    2011-10-01

    Nanosized platinum-gold alloy clusters have been deposited on gas diffusion electrodes by sputter deposition. The deposits were characterized by FE-SEM, TEM and XPS in order to verify the formation of alloy nanoparticles and to study the influence of the deposition technique on the nanomorphology. The sputtering process allowed a uniform distribution of metal particles on the porous surface of the carbon supports. A typical island growth mode was observed, with the formation of dispersed metal nanoclusters (mean size about 5 nm). Cyclic voltammetry was used to determine the electrochemically active surface and the electrocatalytic performance of the PtAu electrocatalysts for the methanol oxidation reaction. The data were recalculated in the form of mass specific activity (MSA). The sputter-catalyzed electrodes showed higher performance and stability compared to commercial catalysts.

  19. A fast - Monte Carlo toolkit on GPU for treatment plan dose recalculation in proton therapy

    NASA Astrophysics Data System (ADS)

    Senzacqua, M.; Schiavi, A.; Patera, V.; Pioli, S.; Battistoni, G.; Ciocca, M.; Mairani, A.; Magro, G.; Molinelli, S.

    2017-10-01

    In the context of particle therapy a crucial role is played by Treatment Planning Systems (TPSs), tools used to compute and optimize the treatment plan. Nowadays one of the major issues related to TPSs in particle therapy is the large CPU time needed. We developed a software toolkit (FRED) for reducing dose recalculation time by exploiting Graphics Processing Unit (GPU) hardware. Thanks to their high parallelization capability, GPUs significantly reduce the computation time, by up to a factor of 100 with respect to standard software running on a CPU. The transport of proton beams in the patient is accurately described through Monte Carlo methods. The physical processes reproduced are multiple Coulomb scattering, energy straggling and nuclear interactions of protons with the main nuclei composing biological tissues. The FRED toolkit does not rely on the water-equivalent translation of tissues, but exploits the computed tomography anatomical information by reconstructing and simulating the atomic composition of each crossed tissue. FRED can be used as an efficient tool for dose recalculation on the day of the treatment. In fact it can provide, in about one minute on standard hardware, the dose map obtained by combining the treatment plan, earlier computed by the TPS, and the current patient anatomic arrangement.

  20. Rheo-SAXS investigation of shear-thinning behaviour of very anisometric repulsive disc-like clay suspensions.

    PubMed

    Philippe, A M; Baravian, C; Imperor-Clerc, M; De Silva, J; Paineau, E; Bihannic, I; Davidson, P; Meneau, F; Levitz, P; Michot, L J

    2011-05-18

    Aqueous suspensions of swelling clay minerals exhibit a rich and complex rheological behaviour. In particular, these repulsive systems display strong shear-thinning at very low volume fractions in both the isotropic and gel states. In this paper, we investigate the evolution with shear of the orientational distribution of aqueous clay suspensions by synchrotron-based rheo-SAXS experiments using a Couette device. Measurements in radial and tangential configurations were carried out for two swelling clay minerals of similar morphology and size, Wyoming montmorillonite and Idaho beidellite. The shear evolution of the small angle x-ray scattering (SAXS) patterns displays significantly different features for these two minerals. The detailed analysis of the angular dependence of the SAXS patterns in both directions provides the average Euler angles of the statistical effective particle in the shear plane. We show that for both samples, the average orientation is fully controlled by the local shear stress around the particle. We then apply an effective approach to take into account multiple hydrodynamic interactions in the system. Using such an approach, it is possible to calculate the evolution of viscosity as a function of shear rate from the knowledge of the average orientation of the particles. The viscosity thus recalculated almost perfectly matches the measured values as long as collective effects are not too important in the system.

  1. Calibration sets and the accuracy of vibrational scaling factors: A case study with the X3LYP hybrid functional

    NASA Astrophysics Data System (ADS)

    Teixeira, Filipe; Melo, André; Cordeiro, M. Natália D. S.

    2010-09-01

    A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
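
    For reference, the standard least-squares scaling factor that minimizes the residuals between scaled calculated frequencies and experimental fundamentals is lambda = sum(w_calc * w_expt) / sum(w_calc^2); the sketch below uses hypothetical frequencies, not the calibration set of the paper.

        # Hedged sketch with hypothetical frequencies (not the paper's calibration set):
        # least-squares scaling factor lambda = sum(w_calc * w_expt) / sum(w_calc ** 2).
        import numpy as np

        calc = np.array([3822.0, 1713.0, 2442.0, 1014.0])  # calculated harmonic frequencies (cm^-1)
        expt = np.array([3657.0, 1595.0, 2359.0, 969.0])   # experimental fundamentals (cm^-1)

        scale = np.sum(calc * expt) / np.sum(calc ** 2)
        rms = np.sqrt(np.mean((scale * calc - expt) ** 2))
        print(f"scaling factor = {scale:.4f}, rms residual = {rms:.1f} cm^-1")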

  2. Calibration sets and the accuracy of vibrational scaling factors: a case study with the X3LYP hybrid functional.

    PubMed

    Teixeira, Filipe; Melo, André; Cordeiro, M Natália D S

    2010-09-21

    A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.

  3. Integrated Assessment and Improvement of the Quality Assurance System for the Cosworth Casting Process

    NASA Astrophysics Data System (ADS)

    Yousif, Dilon

    The purpose of this study was to improve the Quality Assurance (QA) system at the Nemak Windsor Aluminum Plant (WAP). The project used the Six Sigma method based on Define, Measure, Analyze, Improve, and Control (DMAIC). Analysis of in-process melt at WAP was based on chemical, thermal, and mechanical testing. The control limits for the W319 Al alloy were statistically recalculated using the composition measured under stable conditions. The "Chemistry Viewer" software was developed for statistical analysis of alloy composition. This software features the Silicon Equivalency (SiBQ) developed by the IRC. The Melt Sampling Device (MSD) was designed and evaluated at WAP to overcome traditional sampling limitations. The Thermal Analysis "Filters" software was developed for cooling curve analysis of the 3XX Al alloy(s) using IRC techniques. The impact of low melting point impurities on the start of melting was evaluated using the Universal Metallurgical Simulator and Analyzer (UMSA).
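
    A minimal sketch of recalculating statistical control limits from measurements taken under stable conditions, using the common Shewhart-style mean plus or minus three standard deviations convention; the composition values are hypothetical, and this is not necessarily the exact procedure used in the study.

        # Hedged sketch with hypothetical composition data: Shewhart-style control limits
        # recalculated as mean +/- 3 standard deviations from stable-condition measurements.
        import statistics

        si_wt_pct = [7.42, 7.38, 7.45, 7.40, 7.36, 7.44, 7.41, 7.39, 7.43, 7.37]  # Si, wt.%

        mean = statistics.mean(si_wt_pct)
        sd = statistics.stdev(si_wt_pct)
        lcl, ucl = mean - 3 * sd, mean + 3 * sd
        print(f"Si control limits: LCL = {lcl:.3f}, mean = {mean:.3f}, UCL = {ucl:.3f} wt.%")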

  4. A model to relate wind tunnel measurements to open field odorant emissions from liquid area sources

    NASA Astrophysics Data System (ADS)

    Lucernoni, F.; Capelli, L.; Busini, V.; Sironi, S.

    2017-05-01

    Waste water treatment plants are known to have significant emissions of several pollutants and odorants causing nuisance to the nearby population. One of the purposes of the present work is to study a suitable model to evaluate odour emissions from liquid passive area sources. First, the models describing volatilization under a forced convection regime inside a wind tunnel device, which is the sampling device typically used for sampling on liquid area sources, were investigated. In order to relate the fluid dynamic conditions inside the hood to those in the open field, a thorough study of the models capable of describing the volatilization of odorous compounds from liquid pools was performed, and several different models were evaluated for the open field emission. By means of experimental tests involving pure liquid acetone and pure liquid butanone, it was verified that the model most suitable to describe volatilization inside the sampling hood is the model for emission from a single flat plate in forced convection and laminar regime, with a fully developed fluid dynamic boundary layer and a mass transfer boundary layer that is not fully developed. The proportionality coefficient of the model was re-evaluated in order to account for the specific characteristics of the adopted wind tunnel device, and the model was then related to the selected open-field model, thereby computing the wind speed at 10 m that would cause the same emission as estimated from the wind tunnel measurement. Furthermore, the field of application of the proposed model was clearly defined for the considered models, discussing the two different kinds of compounds commonly found in emissive liquid pools or liquid spills, i.e. gas-phase-controlled and liquid-phase-controlled compounds. Lastly, a discussion is presented comparing the proposed approach for recalculating emission rates in the field with other possible approaches, i.e. those relying on recalculation of the wind speed at the emission level instead of the wind speed that would cause in the open field the same emission measured with the hood.
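
    For orientation, the textbook flat-plate correlation for laminar forced convection, Sh = 0.664 Re^(1/2) Sc^(1/3), shows how an emission rate follows from a mass transfer coefficient; the paper re-evaluates the proportionality coefficient for its specific wind tunnel, and all numbers below are hypothetical.

        # Hedged sketch using the textbook laminar flat-plate correlation Sh = 0.664 Re^0.5 Sc^(1/3);
        # the paper re-fits the proportionality coefficient for its hood, and all values are hypothetical.
        L = 0.5        # plate (hood) length in the flow direction, m
        u = 0.05       # sweep air velocity inside the hood, m/s
        nu = 1.5e-5    # kinematic viscosity of air, m^2/s
        D = 1.0e-5     # diffusivity of the odorant in air, m^2/s
        c_sat = 0.1    # gas-phase concentration at the liquid surface, kg/m^3 (hypothetical)
        area = 0.1     # emitting surface area, m^2

        Re = u * L / nu
        Sc = nu / D
        Sh = 0.664 * Re ** 0.5 * Sc ** (1.0 / 3.0)
        k_c = Sh * D / L                     # mass transfer coefficient, m/s
        emission_rate = k_c * area * c_sat   # kg/s, assuming negligible bulk concentration
        print(f"Re = {Re:.0f}, Sh = {Sh:.1f}, k_c = {k_c:.2e} m/s, E = {emission_rate:.2e} kg/s")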

  5. Contribution of Lattice Distortion to Solid Solution Strengthening in a Series of Refractory High Entropy Alloys

    NASA Astrophysics Data System (ADS)

    Chen, H.; Kauffmann, A.; Laube, S.; Choi, I.-C.; Schwaiger, R.; Huang, Y.; Lichtenberg, K.; Müller, F.; Gorr, B.; Christ, H.-J.; Heilmaier, M.

    2018-03-01

    We present an experimental approach for revealing the impact of lattice distortion on solid solution strengthening in a series of body-centered cubic (bcc), Al-containing, refractory high entropy alloys (HEAs) from the Nb-Mo-Cr-Ti-Al system. By systematically varying the Nb and Cr content, a wide range of atomic size difference, a common measure for the lattice distortion, was obtained. Single-phase bcc solid solutions were achieved by arc melting and homogenization and verified by means of scanning electron microscopy and X-ray diffraction. The atomic radii of the alloying elements used for determination of the atomic size difference were recalculated on the basis of the mean atomic radii in, and the chemical compositions of, the solid solutions. Microhardness (μH) at room temperature correlates well with the deduced atomic size difference. Nevertheless, the mechanisms of microscopic slip lead to a pronounced temperature dependence of mechanical strength. In order to account for this particular feature, we present a combined approach using μH, nanoindentation, and compression tests. The athermal contribution to the yield stress of the investigated equimolar alloys is revealed. These parameters support the universality of the aforementioned correlation. Hence, the pertinence of lattice distortion for solid solution strengthening in bcc HEAs is proven.
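
    A minimal sketch of the common atomic size difference measure for lattice distortion, delta = sqrt(sum_i c_i (1 - r_i/r_mean)^2) with r_mean = sum_i c_i r_i; the radii used here are generic literature-style metallic radii, not the recalculated radii derived in the paper.

        # Hedged sketch of the atomic size difference measure; radii are generic literature-style
        # metallic radii (pm), not the recalculated radii derived in the paper.
        import math

        radii_pm = {"Nb": 146, "Mo": 139, "Cr": 128, "Ti": 147, "Al": 143}
        composition = {"Nb": 0.2, "Mo": 0.2, "Cr": 0.2, "Ti": 0.2, "Al": 0.2}  # equimolar example

        r_mean = sum(composition[e] * radii_pm[e] for e in composition)
        delta = math.sqrt(sum(composition[e] * (1.0 - radii_pm[e] / r_mean) ** 2 for e in composition))
        print(f"mean radius = {r_mean:.1f} pm, atomic size difference = {100 * delta:.2f}%")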

  6. Large Eddy Simulation including population dynamics model for polydisperse droplet evolution

    NASA Astrophysics Data System (ADS)

    Aiyer, Aditya; Yang, Di; Chamecki, Marcelo; Meneveau, Charles

    2017-11-01

    Previous studies have shown that dispersion patterns of oil droplets in the ocean following a deep sea oil spill depend critically on droplet diameter. Hence predicting the evolution of the droplet size distribution is of critical importance for predicting macroscopic features of dispersion in the ocean. We adopt a population dynamics model of polydisperse droplet distributions for use in LES. We generalize a breakup model from Reynolds averaging approaches to LES in which the breakup is modeled as due to bombardment of droplets by turbulent eddies of various sizes. The breakage rate is expressed as an integral of a collision frequency times a breakage efficiency over all eddy sizes. An empirical fit to the integral is proposed in order to avoid having to recalculate the integral at every LES grid point and time step. The fit is tested by comparison with various stirred tank experiments. As a flow application for LES we consider a jet of bubbles and large droplets injected at the bottom of the tank. The advected velocity and concentration fields of the drops are described using an Eulerian approach. We study the change of the oil droplet distribution due to breakup caused by interaction of turbulence with the oil droplets. This research was made possible by a Grant from the Gulf of Mexico Research Initiative.

  7. Recalculation of the infrared continuum spectrum of the lowest energy triplet transitions in K2

    NASA Astrophysics Data System (ADS)

    Ligare, Martin; Edmonds, J. Brent

    1991-09-01

    The observation and identification of the spectra arising from transitions between the lowest energy triplet electronic states of diatomic potassium molecules were made by Huennekens et al. [J. Chem. Phys. 80, 4794 (1984)]. In this letter we recalculate theoretical spectra for these transitions using quasistatic line broadening theory and the recently published ab initio potential energy curves of Jeung and Ross [J. Phys. B 21, 1473 (1988)]. The calculated satellite of the 3Σ+g-3Σ+u transition occurs at 1.105 μm while the satellite is experimentally observed at 1.096 μm. This improved agreement both solidifies the original identification of Huennekens et al. and indicates the accuracy of the recent potential energy curves of Jeung and Ross for the low energy triplet states.

  8. Metamorphic reactions in mesosiderites - Origin of abundant phosphate and silica

    NASA Technical Reports Server (NTRS)

    Harlow, G. E.; Delaney, J. S.; Prinz, M.; Nehru, C. E.

    1982-01-01

    In light of a study of the Emery mesosiderite, it is determined that the high modal abundances of merrillite and tridymite in most mesosiderites are attributable to redox reactions between silicates and P-bearing Fe-Ni metal within a limited T-fO2 range at low pressure. The recalculated amounts of dissolved P and S in the metallic portion of Emery reduce the metal liquidus temperature to less than 1350 C, and the solidus to less than 800 C, so that the mixing of liquid metal with cold silicates would have resulted in silicate metamorphism rather than melting. This redox reaction and redistribution of components between metal and silicates illuminates the complexities of mesosiderite processing, with a view to the recalculation of their original components.

  9. Verification of 1921 peak discharge at Skagit River near Concrete, Washington, using 2003 peak-discharge data

    USGS Publications Warehouse

    Mastin, M.C.; Kresch, D.L.

    2005-01-01

    The 1921 peak discharge at Skagit River near Concrete, Washington (U.S. Geological Survey streamflow-gaging station 12194000), was verified using peak-discharge data from the flood of October 21, 2003, the largest flood since 1921. This peak discharge is critical to determining other high discharges at the gaging station and to reliably estimating the 100-year flood, the primary design flood being used in a current flood study of the Skagit River basin. The four largest annual peak discharges of record (1897, 1909, 1917, and 1921) were used to determine the 100-year flood discharge at Skagit River near Concrete. The peak discharge on December 13, 1921, was determined by James E. Stewart of the U.S. Geological Survey using a slope-area measurement and a contracted-opening measurement. An extended stage-discharge rating curve based on the 1921 peak discharge was used to determine the peak discharges of the three other large floods. Any inaccuracy in the 1921 peak discharge also would affect the accuracies of the three other largest peak discharges. The peak discharge of the 1921 flood was recalculated using the cross sections and high-water marks surveyed after the 1921 flood in conjunction with a new estimate of the channel roughness coefficient (n value) based on an n-verification analysis of the peak discharge of the October 21, 2003, flood. The n value used by Stewart for his slope-area measurement of the 1921 flood was 0.033, and the corresponding calculated peak discharge was 240,000 cubic feet per second (ft3/s). Determination of a single definitive water-surface profile for use in the n-verification analysis was precluded because of considerable variation in elevations of surveyed high-water marks from the flood on October 21, 2003. Therefore, n values were determined for two separate water-surface profiles thought to bracket a plausible range of water-surface slopes defined by high-water marks. The n value determined using the flattest plausible slope was 0.024 and the corresponding recalculated discharge of the 1921 slope-area measurement was 266,000 ft3/s. The n value determined using the steepest plausible slope was 0.032 and the corresponding recalculated discharge of the 1921 slope-area measurement was 215,000 ft3/s. The two recalculated discharges were 10.8 percent greater than (flattest slope) and 10.4 percent less than (steepest slope) the 1921 peak discharge of 240,000 ft3/s. The 1921 peak discharge was not revised because the average of the two recalculated discharges (240,500 ft3/s) is only 0.2 percent greater than the 1921 peak discharge.
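
    A minimal sketch of the Manning-equation calculation that underlies a slope-area discharge estimate, Q = (1.486/n) A R^(2/3) S^(1/2) in US customary units; the channel geometry and slope below are hypothetical, and only the range of n values echoes the report.

        # Hedged sketch of the Manning equation behind a slope-area estimate (US customary units);
        # the channel geometry and slope are hypothetical, only the n range echoes the report.
        def manning_discharge(n, area_ft2, hydraulic_radius_ft, slope):
            return (1.486 / n) * area_ft2 * hydraulic_radius_ft ** (2.0 / 3.0) * slope ** 0.5

        area = 20000.0       # cross-sectional flow area, ft^2 (hypothetical)
        hyd_radius = 25.0    # hydraulic radius, ft (hypothetical)
        slope = 0.0005       # water-surface slope (hypothetical)

        for n in (0.024, 0.032, 0.033):
            print(f"n = {n:.3f}: Q = {manning_discharge(n, area, hyd_radius, slope):,.0f} ft^3/s")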

  10. Problem in application carrying capacity approach for land allocation assessment in Indonesian municipal spatial planning: A case of Kutai Kartanegara Regency

    NASA Astrophysics Data System (ADS)

    Wijaya, I. N. S.; Rahadi, B.; Lusiana, N.; Maulidina, I.

    2017-06-01

    Urbanization in many countries, such as Indonesia, commonly appears as population growth in developed areas. It is accompanied by a reduction in rural land uses as land is converted to urban uses such as housing, industry and infrastructure in response to population growth. What may not be sufficiently considered by urban planners and decision makers is that urbanization also means an escalation of natural resource consumption that should be supported by the natural capacity of the area. In this situation, a balancing approach such as carrying capacity calculation is needed in spatial planning for sustainability. The Indonesian Spatial Planning Law 26/2007 already expresses this balancing approach in the planning system. Moreover, it strictly regulates the assessment and permission system for controlling land development, especially land conversion. However, reductions in rural land uses, especially agriculture, continue to occur. Concerning the planning approach, this paper aims to disclose common insufficiencies in carrying capacity considerations in Indonesian spatial planning practice. The paper describes common calculation weaknesses in projecting areas for urban development by recalculating the actual gap between the supply of and demand for agricultural land. Here, the municipal spatial plan of Kutai Kartanegara Regency is used as a single sample case for discussion. As a result, the recalculation shows that: 1) there is a serious deficit of agricultural land for fulfilling the agricultural production demanded by the existing population, and 2) some calculations of agricultural production may be misinterpreted because of insufficient explanation of the productivity of each agricultural commodity.

  11. VizieR Online Data Catalog: California-Kepler Survey (CKS). III. Planet radii (Fulton+, 2017)

    NASA Astrophysics Data System (ADS)

    Fulton, B. J.; Petigura, E. A.; Howard, A. W.; Isaacson, H.; Marcy, G. W.; Cargile, P. A.; Hebb, L.; Weiss, L. M.; Johnson, J. A.; Morton, T. D.; Sinukoff, E.; Crossfield, I. J. M.; Hirsch, L. A.

    2017-11-01

    We adopt the stellar sample and the measured stellar parameters from the California-Kepler Survey (CKS) program (Petigura et al. 2017, Cat. J/AJ/154/107; Paper I). The measured values of Teff, logg, and [Fe/H] are based on a detailed spectroscopic characterization of Kepler Object of Interest (KOI) host stars using observations from Keck/HIRES. In Johnson et al. 2017 (Cat J/AJ/154/108; Paper II), we associated those stellar parameters from Paper I to Dartmouth isochrones (Dotter et al. 2008ApJS..178...89D) to derive improved stellar radii and masses, allowing us to recalculate planetary radii using the light-curve parameters from Mullally et al. 2015 (Cat. J/ApJS/217/31). (1 data file).
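
    A minimal sketch of how planet radii are recalculated once improved stellar radii are available, R_p = (R_p/R_star) x R_star; the numbers are hypothetical and do not correspond to any CKS target.

        # Hedged sketch with hypothetical numbers: a planet radius is recalculated from the
        # light-curve radius ratio and an improved stellar radius.
        R_SUN_IN_R_EARTH = 109.2           # approximate conversion factor

        rp_over_rstar = 0.02               # radius ratio from a light-curve fit (hypothetical)
        rstar_old, rstar_new = 0.95, 1.05  # stellar radius in solar units, before/after (hypothetical)

        rp_old = rp_over_rstar * rstar_old * R_SUN_IN_R_EARTH
        rp_new = rp_over_rstar * rstar_new * R_SUN_IN_R_EARTH
        print(f"planet radius: {rp_old:.2f} -> {rp_new:.2f} Earth radii "
              f"({100 * (rp_new / rp_old - 1):.1f}% change)")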

  12. 20 CFR 404.290 - Recalculations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... subpart N of this part) and for individuals interned during World War II (see subpart K of this part), not... account— (1) Earnings (including compensation for railroad service) incorrectly included or excluded in...

  13. 20 CFR 404.290 - Recalculations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... subpart N of this part) and for individuals interned during World War II (see subpart K of this part), not... account— (1) Earnings (including compensation for railroad service) incorrectly included or excluded in...

  14. 20 CFR 404.290 - Recalculations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... subpart N of this part) and for individuals interned during World War II (see subpart K of this part), not... account— (1) Earnings (including compensation for railroad service) incorrectly included or excluded in...

  15. 20 CFR 404.290 - Recalculations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... subpart N of this part) and for individuals interned during World War II (see subpart K of this part), not... account— (1) Earnings (including compensation for railroad service) incorrectly included or excluded in...

  16. 40 CFR 60.615 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Air Oxidation Unit Processes § 60.615 Reporting and recordkeeping requirements. (a) Each owner or... of recovery equipment or air oxidation reactors; (2) Any recalculation of the TRE index value...

  17. 40 CFR 60.615 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Air Oxidation Unit Processes § 60.615 Reporting and recordkeeping requirements. (a) Each owner or... of recovery equipment or air oxidation reactors; (2) Any recalculation of the TRE index value...

  18. 40 CFR 60.615 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Air Oxidation Unit Processes § 60.615 Reporting and recordkeeping requirements. (a) Each owner or... of recovery equipment or air oxidation reactors; (2) Any recalculation of the TRE index value...

  19. 40 CFR 60.615 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Air Oxidation Unit Processes § 60.615 Reporting and recordkeeping requirements. (a) Each owner or... of recovery equipment or air oxidation reactors; (2) Any recalculation of the TRE index value...

  20. SU-F-BRD-16: Under Dose Regions Recalculated by Monte Carlo Cannot Predict the Local Failure for NSCLC Patients Treated with SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Cherian, S; Stephans, K

    2014-06-15

    Purpose: To investigate whether Monte Carlo (MC) recalculated dose distributions can predict the geometric location of recurrence for non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods: Thirty NSCLC patients with local recurrence were retrospectively selected for this study. The recurred gross target volumes (rGTV) were delineated on the follow-up CT/PET images and then rigidly transferred via image fusion to the original planning CTs. The failure pattern was defined according to the overlap between the rGTV and the planning GTV (pGTV) as: (a) in-field failure (≥80%), (b) marginal failure (20%–80%), and (c) out-of-field failure (≤20%). All clinical plans were initially calculated with pencil beam (PB) with or without heterogeneity correction, depending on the protocol. These plans were recalculated with MC with heterogeneity correction. Because of non-uniform dose distributions in the rGTVs, the rGTVs were further divided into four regions: inside the pGTV (GTVin), inside the PTV (PTVin), outside the pGTV (GTVout), and outside the PTV (PTVout). The mean doses to these regions were reported and analyzed separately. Results: Among the 30 patients, 10 had in-field recurrences, 15 marginal and 5 out-of-field failures. With MC calculations, D95 and D99 of the PTV were reduced by (10.6 ± 7.4)% and (11.7 ± 7.9)%. The average MC calculated mean doses of GTVin, GTVout, PTVin and PTVout were 48.2 ± 5.3 Gy, 48.2 ± 5.5 Gy, 46.3 ± 6.2 Gy and 46.6 ± 5.6 Gy, respectively. No significant dose differences between GTVin and GTVout (p=0.65) or between PTVin and PTVout (p=0.19) were observed using the paired Student's t-test. Conclusion: Although the PB calculations underestimated the tumor target doses, the geometric location of the recurrence did not correlate with the mean doses of the subsections of the recurrent GTV. Underdosed regions recalculated by MC cannot predict local failure for NSCLC patients treated with SBRT.

  1. SU-F-T-377: Monte Carlo Re-Evaluation of Volumetric-Modulated Arc Plans of Advanced Stage Nasopharygeal Cancers Optimized with Convolution-Superposition Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, K; Leung, R; Law, G

    Background: The commercial treatment planning system Pinnacle3 (Philips, Fitchburg, WI, USA) employs a convolution-superposition (CS) algorithm for volumetric-modulated arc radiotherapy (VMAT) optimization and dose calculation. Study of Monte Carlo (MC) dose recalculation of VMAT plans for advanced-stage nasopharyngeal cancers (NPC) is currently limited. Methods: Twenty-nine VMAT plans prescribed 70 Gy, 60 Gy, and 54 Gy to the planning target volumes (PTVs) were included. These clinical plans, achieved with a CS dose engine on Pinnacle3 v9.0, were recalculated by the Monaco TPS v5.0 (Elekta, Maryland Heights, MO, USA) with an XVMC-based MC dose engine. The MC virtual source model was built using the same measurement beam dataset as for the Pinnacle beam model. All MC recalculations were based on absorbed dose to medium in medium (Dm,m). Differences in dose constraint parameters per our institution protocol (Supplementary Table 1) were analyzed. Results: Only differences in maximum dose to the left brachial plexus, left temporal lobe and PTV54Gy were found to be statistically insignificant (p > 0.05). Dosimetric differences for other tumor targets and normal organs are given in Supplementary Table 1. Generally, doses outside the PTV in the normal organs are lower with MC than with CS. This is also true for the PTV54-70Gy doses, but a higher dose in the nasal cavity near the bone interfaces is consistently predicted by MC, possibly due to increased backscattering of short-range scattered photons and secondary electrons that is not properly modeled by the CS algorithm. The straight shoulders of the PTV dose-volume histograms (DVH) initially resulting from the CS optimization are merely preserved after MC recalculation. Conclusion: Significant dosimetric differences in VMAT NPC plans were observed between CS and MC calculations. Adjustments to the planning dose constraints to incorporate the physics differences from the conventional CS algorithm should be made when VMAT optimization is carried out directly with an MC dose engine.

  2. 20 CFR 404.290 - Recalculations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... primary amount, we refigure it under the same method we used in the first computation by taking into... available at the time of the first computation; (3) Correction of clerical or mathematical errors; or (4...

  3. Method for determining formation quality factor from well log data and its application to seismic reservoir characterization

    DOEpatents

    Walls, Joel; Taner, M. Turhan; Dvorkin, Jack

    2006-08-08

    A method for seismic characterization of subsurface Earth formations includes determining at least one of compressional velocity and shear velocity, and determining reservoir parameters of subsurface Earth formations, at least including density, from data obtained from a wellbore penetrating the formations. A quality factor for the subsurface formations is calculated from the velocity, the density and the water saturation. A synthetic seismogram is calculated from the calculated quality factor and from the velocity and density. The synthetic seismogram is compared to a seismic survey made in the vicinity of the wellbore. At least one parameter is adjusted. The synthetic seismogram is recalculated using the adjusted parameter, and the adjusting, recalculating and comparing are repeated until a difference between the synthetic seismogram and the seismic survey falls below a selected threshold.
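
    A minimal sketch of the iterative loop in the claim (recalculate a synthetic response for a trial parameter, compare it with the observed data, adjust until the misfit falls below a threshold), using a toy attenuation model rather than the patented synthetic seismogram calculation.

        # Hedged toy sketch of the iterative adjust-recalculate-compare loop; the "forward model"
        # is a simple attenuated wavelet, not the patented synthetic seismogram calculation.
        import numpy as np

        t = np.linspace(0.0, 1.0, 500)
        f = 30.0                                  # dominant frequency, Hz

        def synthetic(q):
            # Toy attenuated wavelet: amplitude decays as exp(-pi * f * t / Q).
            return np.exp(-np.pi * f * t / q) * np.sin(2.0 * np.pi * f * t)

        observed = synthetic(80.0)                # pretend field data generated with Q = 80

        threshold = 1e-3
        for q_trial in np.arange(20.0, 200.0, 1.0):
            misfit = np.sqrt(np.mean((synthetic(q_trial) - observed) ** 2))
            if misfit < threshold:                # stop once the synthetic matches the "survey"
                break
        print(f"estimated Q = {q_trial:.0f}, rms misfit = {misfit:.2e}")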

  4. Influence of CT contrast agent on dose calculation of intensity modulated radiation therapy plan for nasopharyngeal carcinoma.

    PubMed

    Lee, F K-H; Chan, C C-L; Law, C-K

    2009-02-01

    Contrast enhanced computed tomography (CECT) has been used for delineation of the treatment target in radiotherapy. The different Hounsfield units due to the injected contrast agent may affect radiation dose calculation. We investigated this effect on intensity modulated radiotherapy (IMRT) of nasopharyngeal carcinoma (NPC). Dose distributions of 15 IMRT plans were recalculated on CECT. Dose statistics for organs at risk (OAR) and treatment targets were recorded for the plain CT-calculated and CECT-calculated plans. Statistical significance of the differences was evaluated. Correlations were also tested among the magnitude of the calculated dose difference, tumor size, and level of contrast enhancement. Differences in nodal mean/median dose were statistically significant, but small (approximately 0.15 Gy for a 66 Gy prescription). In the vicinity of the carotid arteries, the difference in calculated dose was also statistically significant, but only with a mean of approximately 0.2 Gy. We did not observe any significant correlation between the difference in the calculated dose and the tumor size or level of enhancement. The results implied that the calculated dose difference was clinically insignificant and may be acceptable for IMRT planning.

  5. [An attempt for standardization of serum CA19-9 levels, in order to dissolve the gap between three different methods].

    PubMed

    Hayashi, Kuniki; Hoshino, Tadashi; Yanai, Mitsuru; Tsuchiya, Tatsuyuki; Kumasaka, Kazunari; Kawano, Kinya

    2004-06-01

    It is well known that serious method-related differences exist in results of serum CA19-9, and the necessity of standardization has been pointed out. In this study, differences in serum tumor marker CA19-9 levels obtained by various immunoassay kits (CLEIA, FEIA, LPIA and RIA) were evaluated in sixty-seven clinical samples and five calibrators, and the possibility of reducing the inter-method differences was examined not only for clinical samples but also for calibrators. We defined an assumed standard material based on one of the calibrators and calculated the serum CA19-9 levels obtained when this assumed standard material is applied to the three different measurement methods. It is suggested that CA19-9 values recalculated with the assumed standard material would be able to correct between-method and between-laboratory discrepancies, in particular systematic errors.
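
    A minimal sketch of the recalculation idea (the function and any numbers are illustrative, not the kit manufacturers' or the authors' procedure): each method's results are rescaled so that the assumed standard material reads the same assigned value on every method.

    ```python
    # Illustrative only: rescale a method's CA19-9 result to an assumed standard.
    def rescale_to_assumed_standard(measured_value, method_reading_of_standard,
                                    assigned_standard_value):
        # Scale factor that maps this method's reading of the standard material
        # onto the value assigned to the assumed standard.
        factor = assigned_standard_value / method_reading_of_standard
        return measured_value * factor
    ```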

  6. Genome-scale estimate of the metabolic turnover of E. Coli from the energy balance analysis

    NASA Astrophysics Data System (ADS)

    De Martino, D.

    2016-02-01

    In this article the notion of metabolic turnover is revisited in the light of recent results of out-of-equilibrium thermodynamics. By means of Monte Carlo methods we perform an exact sampling of the enzymatic fluxes in a genome-scale metabolic network of E. coli in stationary growth conditions, from which we infer the metabolite turnover times. However, the latter are inferred from net fluxes, and we argue that this approximation is not valid for enzymes working near thermodynamic equilibrium. We recalculate turnover times from total fluxes by performing an energy balance analysis of the network and resorting to the fluctuation theorem. We find in many cases values one order of magnitude lower, implying a faster picture of intermediate metabolism.
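
    As a hedged illustration of the quantity involved (not the authors' code), a metabolite's turnover time is simply its pool size divided by the flux passing through that pool; using total (forward plus backward) fluxes rather than net fluxes shortens the estimate for near-equilibrium reactions.

    ```python
    # Illustrative arithmetic only; units are whatever the pool and flux share.
    def turnover_time(pool_size, throughput_flux):
        """Turnover time = pool size / flux through the pool."""
        return pool_size / throughput_flux

    # e.g. a 1.0 mmol/gDW pool drained at a net flux of 0.5 mmol/gDW/h turns over
    # in 2 h; if the total (forward + backward) flux is 5.0, the estimate drops to 0.2 h.
    ```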

  7. 50 CFR 80.38 - May the Service recalculate an apportionment if an agency submits revised data?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) FINANCIAL ASSISTANCE-WILDLIFE AND SPORT... DINGELL-JOHNSON SPORT FISH RESTORATION ACTS Certification of License Holders § 80.38 May the Service...

  8. 40 CFR 65.67 - Reporting provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., organic HAP or TOC concentration, and/or TRE index value required under § 65.63(f) and recorded under § 65... 2B process vent, the organic HAP or TOC concentration of the vent stream is recalculated according to...

  9. Analysis of temperature rise for piezoelectric transformer using finite-element method.

    PubMed

    Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo

    2006-08-01

    Analysis of the heat problem and temperature field of a piezoelectric transformer operated at steady-state conditions is described. The resonance frequency of the transformer is calculated from impedance and electrical gain analysis using a finite-element method. The mechanical displacement and electric potential of the transformer at the calculated resonance frequency are used to calculate the loss distribution of the transformer. The temperature distribution is calculated from the obtained losses of the transformer using a discretized heat transfer equation. Properties of the piezoelectric material, which depend on the temperature field, are measured to recalculate the losses, temperature distribution, and new resonance characteristics of the transformer. An iterative method is adopted to recalculate the losses and resonance frequency due to the changes of the material constants with increasing temperature. The computed temperature distributions and new resonance characteristics of the transformer at steady-state temperature are verified by comparison with experimental results.

  10. Temperature measurement in a gas turbine engine combustor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSilva, Upul

    A method and system for determining a temperature of a working gas passing through a passage to a turbine section of a gas turbine engine. The method includes identifying an acoustic frequency at a first location in the engine upstream from the turbine section, and using the acoustic frequency for determining a first temperature value at the first location that is directly proportional to the acoustic frequency and a calculated constant value. A second temperature of the working gas is determined at a second location in the engine and, using the second temperature, a back calculation is performed to determine a temperature value for the working gas at the first location. The first temperature value is compared to the back calculated temperature value to change the calculated constant value to a recalculated constant value. Subsequent first temperature values at the first location may be determined based on the recalculated constant value.
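
    A minimal sketch of the constant-recalculation step, following the abstract's wording that the first-location temperature is directly proportional to the acoustic frequency via a calculated constant; the function names are illustrative, not from the patent.

    ```python
    # Hypothetical helper names; only the proportionality stated in the abstract is assumed.
    def first_location_temp(acoustic_freq_hz, constant):
        return constant * acoustic_freq_hz                  # T directly proportional to f

    def recalculate_constant(acoustic_freq_hz, back_calculated_temp_k):
        # Compare the acoustic estimate with the back-calculated temperature and
        # refresh the constant so subsequent estimates use the corrected value.
        return back_calculated_temp_k / acoustic_freq_hz
    ```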

  11. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses

    PubMed Central

    Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy

    2015-01-01

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579

  12. Minimisation of Signal Intensity Differences in Distortion Correction Approaches of Brain Magnetic Resonance Diffusion Tensor Imaging.

    PubMed

    Lee, Dong-Hoon; Lee, Do-Wan; Henry, David; Park, Hae-Jin; Han, Bong-Soo; Woo, Dong-Cheol

    2018-04-12

    To evaluate the effects of signal intensity differences between the b0 image and diffusion tensor imaging (DTI) data in the image registration process. To correct signal intensity differences between the b0 image and the DTI data, a simple image intensity compensation (SIMIC) method, which is a b0 image re-calculation process from the DTI data, was applied before image registration. The re-calculated b0 image (b0_ext) from each diffusion direction was registered to the b0 image acquired through MR scanning (b0_nd) with two types of cost functions, and their transformation matrices were acquired. These transformation matrices were then used to register the DTI data. For quantification, the Dice similarity coefficient (DSC) values, the diffusion scalar matrix, and the quantified fibre numbers and lengths were calculated. The combined SIMIC method with the two cost functions showed the highest DSC value (0.802 ± 0.007). Regarding diffusion scalar values and the numbers and lengths of fibres from the corpus callosum, superior longitudinal fasciculus, and cortico-spinal tract, only normalised cross correlation (NCC) showed a specific tendency toward lower values in these brain regions. Image-based distortion correction with SIMIC for DTI data would help image analysis by accounting for signal intensity differences, as one additional option for DTI analysis. • We evaluated the effects of signal intensity differences on DTI registration. • A non-diffusion-weighted image re-calculation process from DTI data was applied. • SIMIC can minimise the signal intensity differences at DTI registration.
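
    For reference, the Dice similarity coefficient quoted above can be computed from two binary masks as in this minimal sketch (not the authors' code).

    ```python
    import numpy as np

    # DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks of the same shape.
    def dice(mask_a, mask_b):
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        intersection = np.logical_and(a, b).sum()
        return 2.0 * intersection / (a.sum() + b.sum())
    ```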

  13. United States Postal Service Pension Obligation Recalculation and Restoration Act of 2011

    THOMAS, 112th Congress

    Rep. Lynch, Stephen F. [D-MA-9

    2011-04-04

    House - 04/08/2011 Referred to the Subcommittee on Federal Workforce, U.S. Postal Service, and Labor Policy. (All Actions) Tracker: This bill has the status Introduced.

  14. United States Postal Service Pension Obligation Recalculation and Restoration Act of 2011

    THOMAS, 112th Congress

    Rep. Thompson, Bennie G. [D-MS-2

    2011-10-12

    House - 11/02/2011 Referred to the Subcommittee on Federal Workforce, U.S. Postal Service, and Labor Policy. (All Actions) Tracker: This bill has the status Introduced.

  15. 31 CFR 150.6 - Notice and payment of assessments.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... through www.pay.gov or successor Web site. No later than the later of 30 days prior to the payment date... Department in calculating that company's total assessable assets, the Department may at any time re-calculate...

  16. 31 CFR 150.6 - Notice and payment of assessments.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... through www.pay.gov or successor Web site. No later than the later of 30 days prior to the payment date... Department in calculating that company's total assessable assets, the Department may at any time re-calculate...

  17. 31 CFR 150.6 - Notice and payment of assessments.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... through www.pay.gov or successor Web site. No later than the later of 30 days prior to the payment date... Department in calculating that company's total assessable assets, the Department may at any time re-calculate...

  18. 20 CFR 1001.150 - Method of calculating State basic grant awards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... will be retained separately from the funds retained for TAP workload and other exigencies, as... funding for TAP workload and other exigencies, a compelling reason to recalculate would exist. In that... is available for TAP workload and other exigencies. ...

  19. A challenging hysteresis operator for the simulation of Goss-textured magnetic materials

    NASA Astrophysics Data System (ADS)

    Cardelli, Ermanno; Faba, Antonio; Laudani, Antonino; Pompei, Michele; Quondam Antonio, Simone; Fulginei, Francesco Riganti; Salvini, Alessandro

    2017-06-01

    A new hysteresis operator for the simulation of Goss-textured ferromagnets is here defined. The operator is derived from the classic Stoner-Wohlfarth model, where the anisotropy energy is assumed to be cubic instead of uniaxial, in order to reproduce the magnetic behavior of Goss textured ferromagnetic materials, such as grain-oriented Fe-Si alloys, Ni-Fe alloys, and Ni-Co alloys. A vector hysteresis model based on a single hysteresis operator is then implemented and used for the prediction of the rotational magnetizations that have been measured in a sample of grain-oriented electrical steel. This is especially promising for FEM based calculations, where the magnetization state in each point must be recalculated at each time step. Finally, the computed loops, as well as the magnetic losses, are compared to the measured data.
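
    A hedged sketch of the energy density commonly used for cubic magnetocrystalline anisotropy in Stoner-Wohlfarth-type operators is given below for reference; the paper's exact formulation, constants, and normalization are not stated in the abstract.

    ```latex
    % Single-particle free energy with cubic anisotropy and a Zeeman term;
    % m = (m_1, m_2, m_3) is the unit magnetization and H the applied field.
    E(\mathbf{m}) = K_1\,(m_1^2 m_2^2 + m_2^2 m_3^2 + m_3^2 m_1^2)
                  + K_2\, m_1^2 m_2^2 m_3^2
                  - \mu_0 M_s\, \mathbf{H}\cdot\mathbf{m}
    ```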

  20. Severe inbreeding depression in a wild wolf (Canis lupus) population.

    PubMed

    Liberg, Olof; Andrén, Henrik; Pedersen, Hans-Christian; Sand, Håkan; Sejberg, Douglas; Wabakken, Petter; Kesson, Mikael; Bensch, Staffan

    2005-03-22

    The difficulty of obtaining pedigrees for wild populations has hampered the possibility of demonstrating inbreeding depression in nature. In a small, naturally restored, wild population of grey wolves in Scandinavia, founded in 1983, we constructed a pedigree for 24 of the 28 breeding pairs established in the period 1983-2002. Ancestry for the breeding animals was determined through a combination of field data (snow tracking and radio telemetry) and DNA microsatellite analysis. The population was founded by only three individuals. The inbreeding coefficient F varied between 0.00 and 0.41 for wolves born during the study period. The number of surviving pups per litter during their first winter after birth was strongly correlated with inbreeding coefficients of pups (R2=0.39, p<0.001). This inbreeding depression was recalculated to match standard estimates of lethal equivalents (2B), corresponding to 6.04 (2.58-9.48, 95% CI) litter-size-reducing equivalents in this wolf population.

  1. Eye aberration analysis with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.

    1998-06-01

    New horizons for accurate photorefractive sight correction, afforded by novel flying spot technologies, require adequate measurements of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, requiring these data to be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The optimization criterion is the closest proximity of the resulting continuous surface to the values calculated at the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of the transverse aberrations; in particular, the values of the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients are recalculated, and the RMSD is computed. The optimization finishes at the minimal value of the RMSD. Formulas are given for computing ametropia and the size of the light spot on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.
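
    A hedged sketch of the fitting loop described above, assuming a least-squares fit of low-order Zernike terms to wavefront samples and selection of the expansion with minimal RMSD; the exact polynomials, ordering, and aberration metric used by the authors may differ.

    ```python
    import numpy as np

    def zernike_terms(rho, theta, n_terms):
        """First few (unnormalized) Zernike polynomials on the unit aperture."""
        z = [np.ones_like(rho),                        # piston
             rho * np.cos(theta),                      # tilt x
             rho * np.sin(theta),                      # tilt y
             2 * rho**2 - 1,                           # defocus
             rho**2 * np.cos(2 * theta),               # astigmatism 0/90
             rho**2 * np.sin(2 * theta),               # astigmatism 45
             (3 * rho**3 - 2 * rho) * np.cos(theta),   # coma x
             (3 * rho**3 - 2 * rho) * np.sin(theta),   # coma y
             6 * rho**4 - 6 * rho**2 + 1]              # spherical aberration
        return np.column_stack(z[:n_terms])

    def best_zernike_fit(rho, theta, w_measured, max_terms=9):
        """Try growing expansions and keep the one with minimal RMSD."""
        best = None
        for n in range(3, max_terms + 1):
            A = zernike_terms(rho, theta, n)
            coeffs, *_ = np.linalg.lstsq(A, w_measured, rcond=None)
            rmsd = np.sqrt(np.mean((A @ coeffs - w_measured) ** 2))
            if best is None or rmsd < best[0]:
                best = (rmsd, n, coeffs)
        return best   # (minimal RMSD, number of terms, coefficients)
    ```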

  2. The truly remarkable universality of half a standard deviation: confirmation through another look.

    PubMed

    Norman, Geoffrey R; Sloan, Jeff A; Wyrwich, Kathleen W

    2004-10-01

    In this issue of Expert Review of Pharmacoeconomics and Outcomes Research, Farivar, Liu, and Hays present their findings in 'Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores (hereafter referred to as 'Another look') . These researchers have re-examined the May 2003 Medical Care article 'Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation' (hereafter referred to as 'Remarkable') in the hope of supporting their hypothesis that the minimally important difference in health-related quality of life measures is undoubtedly closer to 0.3 standard deviations than 0.5. Nonetheless, despite their extensive wranglings with the exclusion of many articles that we included in our review; the inclusion of articles that we did not include in our review; and the recalculation of effect sizes using the absolute value of the mean differences, in our opinion, the results of the 'Another look' article confirm the same findings in the 'Remarkable' paper.

  3. Dose calculation with respiration-averaged CT processed from cine CT without a respiratory surrogate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riegel, Adam C.; Ahmad, Moiz; Sun Xiaojun

    2008-12-15

    Dose calculation for thoracic radiotherapy is commonly performed on a free-breathing helical CT despite artifacts caused by respiratory motion. Four-dimensional computed tomography (4D-CT) is one method to incorporate motion information into the treatment planning process. Some centers now use the respiration-averaged CT (RACT), the pixel-by-pixel average of the ten phases of 4D-CT, for dose calculation. This method, while sparing the tedious task of 4D dose calculation, still requires 4D-CT technology. The authors have recently developed a means to reconstruct RACT directly from unsorted cine CT data from which 4D-CT is formed, bypassing the need for a respiratory surrogate. Using RACT from cine CT for dose calculation may be a means to incorporate motion information into dose calculation without performing 4D-CT. The purpose of this study was to determine if RACT from cine CT can be substituted for RACT from 4D-CT for the purposes of dose calculation, and if increasing the cine duration can decrease differences between the dose distributions. Cine CT data and corresponding 4D-CT simulations for 23 patients with at least two breathing cycles per cine duration were retrieved. RACT was generated four ways: first, from ten phases of 4D-CT; second, from 1 breathing cycle of images; third, from 1.5 breathing cycles of images; and fourth, from 2 breathing cycles of images. The clinical treatment plan was transferred to each RACT and dose was recalculated. Dose planes were exported at orthogonal planes through the isocenter (coronal, sagittal, and transverse orientations). The resulting dose distributions were compared using the gamma (γ) index within the planning target volume (PTV). Failure criteria were set to 2%/1 mm. A follow-up study with 50 additional lung cancer patients was performed to increase sample size. The same dose recalculation and analysis was performed. In the primary patient group, 22 of 23 patients had 100% of points within the PTV pass γ criteria. The average maximum and mean γ indices were very low (well below 1), indicating good agreement between dose distributions. Increasing the cine duration generally increased the dose agreement. In the follow-up study, 49 of 50 patients had 100% of points within the PTV pass the γ criteria. The average maximum and mean γ indices were again well below 1, indicating good agreement. Dose calculation on RACT from cine CT is negligibly different from dose calculation on RACT from 4D-CT. Differences can be decreased further by increasing the cine duration of the cine CT scan.
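
    As an illustration of the comparison metric mentioned above (not the authors' implementation), a brute-force global gamma index with 2%/1 mm criteria can be sketched as follows.

    ```python
    import numpy as np

    # Brute-force 2D global gamma: for every reference pixel, search all evaluated
    # pixels for the minimum combined dose/distance deviation. O(N^2), sketch only.
    def gamma_index(dose_ref, dose_eval, pixel_mm, dose_crit=0.02, dist_crit_mm=1.0):
        ny, nx = dose_ref.shape
        yy, xx = np.mgrid[0:ny, 0:nx].astype(float) * pixel_mm
        dmax = dose_ref.max()                       # global normalization
        gamma = np.full_like(dose_ref, np.inf, dtype=float)
        for iy in range(ny):
            for ix in range(nx):
                dist2 = (yy - iy * pixel_mm) ** 2 + (xx - ix * pixel_mm) ** 2
                ddose2 = (dose_eval - dose_ref[iy, ix]) ** 2
                g2 = dist2 / dist_crit_mm**2 + ddose2 / (dose_crit * dmax) ** 2
                gamma[iy, ix] = np.sqrt(g2.min())
        return gamma                                # pass rate: np.mean(gamma <= 1.0)
    ```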

  4. SU-E-J-101: Retroactive Calculation of TLD and Film Dose in Anthropomorphic Phantom as Assessment of Updated TPS Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alkhatib, H; Oves, S

    Purpose: To demonstrate a quick and comprehensive method for verifying the accuracy of an updated dose model by recalculating the dose distribution in an anthropomorphic phantom with a new version of the TPS and comparing the results to measured values. Methods: CT images and the IMRT plan of an RPC anthropomorphic head phantom, previously calculated with Pinnacle 9.0, were re-computed using Pinnacle 9.2 and 9.6. The dosimeters within the phantom include four TLD capsules representing a primary PTV, two TLD capsules representing a secondary PTV, and two TLD capsules representing an organ at risk. Also included were three sheets of Gafchromic film. Performance of the updated TPS version was assessed by recalculating point doses and dose profiles corresponding to the TLD and film positions, respectively, and then comparing the results to values reported by the RPC. Results: Comparing calculated doses to the measured doses reported by the RPC yielded an average disagreement of 1.48%, 2.04% and 2.10% for versions 9.0, 9.2 and 9.6, respectively. Computed dose points all meet the RPC's passing criteria, with the exception of the point representing the superior organ at risk in version 9.6. However, qualitative analysis of the recalculated dose profiles showed improved agreement with those of the RPC, especially in the penumbra region. Conclusion: This work compared the calculation results of Pinnacle 9.2 and 9.6 against version 9.0. Additionally, this study illustrates a method by which the user can gain confidence when upgrading to a newer version of the treatment planning system.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stathakis, S; Defoor, D; Saenz, D

    Purpose: Stereotactic radiosurgery (SRS) outcomes are related to the delivered dose to the target and to surrounding tissue. We have commissioned a Monte Carlo based dose calculation algorithm to recalculate the delivered dose of plans originally computed with a pencil beam dose engine. Methods: Twenty consecutive previously treated patients were selected for this study. All plans were generated using the iPlan treatment planning system (TPS) and calculated using the pencil beam algorithm. Each patient plan consisted of 1 to 3 targets and was treated using dynamically conformal arcs or intensity modulated beams. Multi-target treatments were delivered using multiple isocenters, one for each target. These plans were recalculated for the purpose of this study using a single isocenter. The CT image sets, along with the plans, doses and structures, were DICOM exported to the Monaco TPS and the dose was recalculated using the same voxel resolution and monitor units. Benchmark data were also generated prior to patient calculations to assess the accuracy of the two TPSs against measurements made with a micro ionization chamber in solid water. Results: Good agreement, within −0.4% for Monaco and +2.2% for iPlan, was observed for measurements in the water phantom. Doses in patient geometry revealed differences of up to 9.6% for single-target plans and 9.3% for multiple-target-multiple-isocenter plans. The average dose differences for multi-target-single-isocenter plans were approximately 1.4%. Similar differences were observed for the OARs and integral dose. Conclusion: Accuracy of the beam model is crucial for dose calculation, especially in the case of small fields such as those used in SRS treatments. A superior dose calculation algorithm such as Monte Carlo, with properly commissioned beam models, which is unaffected by the lack of electronic equilibrium, should be preferred for the calculation of small fields to improve accuracy.

  6. Enantioseparation of omeprazole--effect of different packing particle size on productivity.

    PubMed

    Enmark, Martin; Samuelsson, Jörgen; Forssén, Patrik; Fornstedt, Torgny

    2012-06-01

    Enantiomeric separation of omeprazole has been extensively studied regarding both product analysis and preparation using several different chiral stationary phases. In this study, the preparative chiral separation of omeprazole is optimized for productivity using three different columns packed with amylose tris(3,5-dimethylphenylcarbamate)-coated macroporous silica (5, 10 and 25 μm) with a maximum allowed pressure drop ranging from 50 to 400 bar. This pressure range covers low-pressure process systems (50-100 bar) and also investigates the potential of allowing higher pressure limits in future preparative applications. The process optimization clearly shows that the larger 25 μm packing material gives higher productivity at low pressure drops, whereas with increasing pressure drops the smaller packing materials have substantially higher productivity. Interestingly, at all pressure drops, the smaller packing materials result in lower solvent consumption (L solvent/kg product); the higher the accepted pressure drop, the larger the gain in reduced solvent consumption. The experimental adsorption isotherms were not identical for the different packing material sizes; therefore all calculations were repeated and re-evaluated assuming identical adsorption isotherms (with the 10 μm isotherm as reference), which confirmed the trends regarding productivity and solvent consumption. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. 78 FR 70917 - Certain New Pneumatic Off-the-Road Tires From the People's Republic of China: Notice of Decision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ...), the Department is notifying the public that the final CIT judgment in this case is not in harmony with... methodological and calculation issues from the Final Determination.\\5\\ On remand, the Department recalculated the...

  8. 78 FR 76033 - Truth in Lending (Regulation Z)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-16

    ... BUREAU OF CONSUMER FINANCIAL PROTECTION 12 CFR Part 1026 Truth in Lending (Regulation Z) AGENCY... Card Accountability Responsibility and Disclosure Act of 2009 (CARD Act) and the Home Ownership and... re-calculated annually using the Consumer Price Index for Urban Wage Earners and Clerical Workers...

  9. Air Pollution and Human Health

    ERIC Educational Resources Information Center

    Lave, Lester B.; Seskin, Eugene P.

    1970-01-01

    Reviews studies statistically relating air pollution to mortality and morbidity rates for respiratory, and cardiovascular diseases, cancer and infant mortality. Some data recalculated. Estimates 50 percent air pollution reduction will save 4.5 percent (2080 million dollars per year) of all economic loss (hospitalization, income loss) associated…

  10. 34 CFR 685.301 - Origination of a loan by a Direct Loan Program school.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... originating a loan to cover the cost of attendance in a study abroad program and has a cohort default rate... in a program of study with less than a full academic year remaining, the school need not recalculate...

  11. 34 CFR 685.301 - Origination of a loan by a Direct Loan Program school.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... originating a loan to cover the cost of attendance in a study abroad program and has a cohort default rate... in a program of study with less than a full academic year remaining, the school need not recalculate...

  12. 34 CFR 685.301 - Origination of a loan by a Direct Loan Program school.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... originating a loan to cover the cost of attendance in a study abroad program and has a cohort default rate... in a program of study with less than a full academic year remaining, the school need not recalculate...

  13. 34 CFR 685.301 - Origination of a loan by a Direct Loan Program school.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... originating a loan to cover the cost of attendance in a study abroad program and has a cohort default rate... in a program of study with less than a full academic year remaining, the school need not recalculate...

  14. A new method for motion capture of the scapula using an optoelectronic tracking device: a feasibility study.

    PubMed

    Šenk, Miroslav; Chèze, Laurence

    2010-06-01

    Optoelectronic tracking systems are rarely used in 3D studies examining shoulder movements that include the scapula. Among the reasons is the considerable slippage of skin markers with respect to the scapula. Methods using electromagnetic tracking devices are validated and frequently applied. Thus, the aim of this study was to develop a new method for in vivo optoelectronic scapular capture that deals with the accepted accuracy issues of the validated methods. Eleven arm positions in three anatomical planes were examined using five subjects in static mode. The method was based on local optimisation, and recalculation procedures were performed using a set of five scapular surface markers. The scapular rotations derived from the recalculation-based method yielded RMS errors comparable with those of the frequently used electromagnetic scapular methods (RMS up to 12.6° for 150° arm elevation). The results indicate that the present method can be used, with careful consideration, in 3D kinematic studies examining different shoulder movements.

  15. Diagnostic Accuracy of Fall Risk Assessment Tools in People With Diabetic Peripheral Neuropathy

    PubMed Central

    Pohl, Patricia S.; Mahnken, Jonathan D.; Kluding, Patricia M.

    2012-01-01

    Background Diabetic peripheral neuropathy affects nearly half of individuals with diabetes and leads to increased fall risk. Evidence addressing fall risk assessment for these individuals is lacking. Objective The purpose of this study was to identify which of 4 functional mobility fall risk assessment tools best discriminates, in people with diabetic peripheral neuropathy, between recurrent “fallers” and those who are not recurrent fallers. Design A cross-sectional study was conducted. Setting The study was conducted in a medical research university setting. Participants The participants were a convenience sample of 36 individuals between 40 and 65 years of age with diabetic peripheral neuropathy. Measurements Fall history was assessed retrospectively and was the criterion standard. Fall risk was assessed using the Functional Reach Test, the Timed “Up & Go” Test, the Berg Balance Scale, and the Dynamic Gait Index. Sensitivity, specificity, positive and negative likelihood ratios, and overall diagnostic accuracy were calculated for each fall risk assessment tool. Receiver operating characteristic curves were used to estimate modified cutoff scores for each fall risk assessment tool; indexes then were recalculated. Results Ten of the 36 participants were classified as recurrent fallers. When traditional cutoff scores were used, the Dynamic Gait Index and Functional Reach Test demonstrated the highest sensitivity at only 30%; the Dynamic Gait Index also demonstrated the highest overall diagnostic accuracy. When modified cutoff scores were used, all tools demonstrated improved sensitivity (80% or 90%). Overall diagnostic accuracy improved for all tests except the Functional Reach Test; the Timed “Up & Go” Test demonstrated the highest diagnostic accuracy at 88.9%. Limitations The small sample size and retrospective fall history assessment were limitations of the study. Conclusions Modified cutoff scores improved diagnostic accuracy for 3 of 4 fall risk assessment tools when testing people with diabetic peripheral neuropathy. PMID:22836004
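
    A minimal sketch (not the study's code) of how the diagnostic indices at a given cutoff score can be computed; the cutoff direction and names are illustrative.

    ```python
    # Recurrent-faller status is the criterion standard; a tool is "positive" when
    # its score falls on the at-risk side of the cutoff.
    def diagnostic_indices(scores, is_recurrent_faller, cutoff, positive_if_below=True):
        tp = fp = tn = fn = 0
        for s, faller in zip(scores, is_recurrent_faller):
            test_pos = (s < cutoff) if positive_if_below else (s >= cutoff)
            if test_pos and faller:          tp += 1
            elif test_pos and not faller:    fp += 1
            elif not test_pos and faller:    fn += 1
            else:                            tn += 1
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        return {"sensitivity": sens, "specificity": spec,
                "LR+": sens / (1 - spec) if spec < 1 else float("inf"),
                "LR-": (1 - sens) / spec if spec > 0 else float("inf"),
                "accuracy": (tp + tn) / (tp + fp + tn + fn)}
    ```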

  16. Rootstock and vineyard floor management influence on 'Cabernet Sauvignon' grape yeast assimilable nitrogen (YAN).

    PubMed

    Lee, Jungmin; Steenwerth, Kerri L

    2011-08-01

    This is a study on the influence that two rootstocks (110R, high vigour; 420A, low vigour) and three vineyard floor management regimes (tilled resident vegetation - usual practise in California, and barley cover crops that were either mowed or tilled) had upon grape nitrogen-containing compounds (mainly ammonia and free amino acids recalculated as YAN), sugars, and organic acids in 'Cabernet Sauvignon' clone 8. A significant difference was observed for some of the free amino acids between rootstocks. In both sample preparation methods (juiced or chemically extracted), 110R rootstock grapes were significantly higher in SER, GLN, THR, ARG, VAL, ILE, LEU, and YAN than were 420A rootstock grapes. Differences in individual free amino acid profiles and concentrations were observed between the two sample preparations, which indicate that care should be taken when comparing values from dissimilar methods. No significant differences among vineyard floor treatments were detected, which suggests that mowing offers vineyard managers a sustainable practise, alternative to tilling, without negatively affecting grape nitrogen compounds, sugars, or organic acids. Published by Elsevier Ltd.

  17. 40 CFR 66.72 - Additional payment or reimbursement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... calculation as provided in the Technical Support Document and the Manual, together with data necessary for... PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Compliance and Final... calculated; (2) The revised penalty is incorrect and has been recalculated based on the data provided by the...

  18. 40 CFR 60.665 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Distillation Operations § 60.665 Reporting and recordkeeping requirements. (a) Each owner or operator subject... of recovery equipment or a distillation unit; (2) Any recalculation of the TRE index value performed... distillation process unit containing the affected facility. These must be reported as soon as possible after...

  19. 40 CFR 60.665 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Distillation Operations § 60.665 Reporting and recordkeeping requirements. (a) Each owner or operator subject... of recovery equipment or a distillation unit; (2) Any recalculation of the TRE index value performed... distillation process unit containing the affected facility. These must be reported as soon as possible after...

  20. 40 CFR 98.468 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... design capacity, the calculation must include a site-specific density. If the design capacity is within... process that can reasonably be expected to change the site-specific waste density, the site-specific waste density must be redetermined and the design capacity must be recalculated based on the new waste density...

  1. 40 CFR 98.468 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... design capacity, the calculation must include a site-specific density. If the design capacity is within... process that can reasonably be expected to change the site-specific waste density, the site-specific waste density must be redetermined and the design capacity must be recalculated based on the new waste density...

  2. Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine.

    PubMed

    Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois

    2013-01-01

    Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, −2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3mm criteria. The mean and standard deviation of pixels passing gamma tolerance for XiO-generated IMRT plans were 96.1 ± 1.3, 96.6 ± 1.2, and 96.0 ± 1.5 in axial, coronal, and sagittal planes respectively. Corresponding results for Pinnacle-generated IMRT plans were 97.1 ± 1.5, 96.4 ± 1.2, and 96.5 ± 1.3 in axial, coronal, and sagittal planes respectively. © 2013 American Association of Medical Dosimetrists.

  3. 77 FR 30411 - Connect America Fund; High-Cost Universal Service Support

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-23

    ... ``benchmarks'' for high cost loop support (HCLS). The methodology the Bureau adopts, builds on the analysis... to support continued broadband investment. The methodology the Bureau adopts today is described in... methodology, HCLS will be recalculated to account for the additional support available under the overall cap...

  4. 78 FR 61171 - Airworthiness Directives; Rolls-Royce plc Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-03

    ... Airworthiness Directives; Rolls-Royce plc Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT... (RR) RB211-535E4-B-37 series turbofan engines. This AD requires removal of affected parts using a...-B-37 series turbofan engines. (d) Unsafe Condition This AD was prompted by recalculating the lives...

  5. 24 CFR 206.26 - Change in payment option.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... to make the advance or if no line of credit exists, future monthly payments shall be recalculated for... (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT OF HOUSING AND... using all of the funds set aside for repairs, the mortgagee shall transfer the remaining amount to a...

  6. 24 CFR 206.26 - Change in payment option.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... to make the advance or if no line of credit exists, future monthly payments shall be recalculated for... (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT OF HOUSING AND... using all of the funds set aside for repairs, the mortgagee shall transfer the remaining amount to a...

  7. Calibration of hyperspectral data aviation mode according with accompanying ground-based measurements of standard surfaces of observed scenes

    NASA Astrophysics Data System (ADS)

    Ostrikov, V. N.; Plakhotnikov, O. V.

    2014-12-01

    Using considerable experimental material, we examine whether it is possible to recalculate the initial data of a hyperspectral aircraft survey into spectral radiance factors (SRF). The errors of external calibration for various observation conditions and different data-acquisition instruments are estimated.

  8. 76 FR 26805 - Medicare Program; Hospice Wage Index for Fiscal Year 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-09

    ... returned to Medicare by the hospice. CMS' contractors calculate each hospice's aggregate cap every year... Medicare contractor recalculate the hospice's aggregate cap using longer timeframes. Option 2: In this... individual hospices to request the Medicare contractor to apply a patient-by-patient proportional methodology...

  9. Late Pleistocene glacial chronology of the Retezat Mts, Southern Carpathians, using 10Be exposure ages

    NASA Astrophysics Data System (ADS)

    Ruszkiczay-Rüdiger, Zsófia; Kern, Zoltán; Urdea, Petru; Braucher, Régis; Madarász, Balázs; Schimmelpfennig, Irene

    2015-04-01

    Our knowledge of the timing of glacial advances in the Southern Carpathians is limited. Recently, some attempts have been made to develop an improved temporal framework for the glaciations of the region using cosmogenic 10Be exposure dating. However, the glacial chronology of the Romanian Carpathians remains contradictory; e.g., the timing of the maximum ice advance appears to be asynchronous within the area and also with other dated glacial events in Europe. The main objective of our study is to use cosmogenic in situ produced 10Be dating to disentangle the contradictions of the Southern Carpathian Late Pleistocene glacial chronology. Firstly, previously published 10Be data are recalculated in accordance with the new half-life, standardization and production rate of 10Be. The recalculated 10Be exposure ages of the second largest (M2) moraines in the Retezat Mts. appear to be ca. 19-24% older than the exposure ages calculated by Reuther et al. (2007, Quat. Int. 164-165, 151-169). This contradicts the earlier conclusion suggesting a post-LGM age of the M2 glacial advance and suggests that the M2 moraines can be connected to the end of the LGM, with final stabilization possibly at the beginning of the Late Glacial. We emphasize that it is ambiguous to correlate exposure-dated glacier chronologies directly with millennial-scale climate changes, due to uncertainties in sample collection and in the computation of exposure ages from measured nuclide concentrations. New 10Be samples were collected in order to determine the 10Be exposure age of moraines outside the most prominent generation (M2), including the largest and oldest moraine (M1) and the landforms connected to the smallest ice advances (M4), which had remained undated so far. The new exposure ages of the M2 moraines are well in harmony with the recalculated ages of Reuther et al. (2007). The 10Be exposure ages of boulders on the smallest moraine suggest that the last glaciers disappeared from the area during the Late Glacial, indicating no glaciation during the Younger Dryas and Holocene. Previous work, based on geomorphologic analogies and pedological properties, suggested that the M1 ice advance was older than the LGM and possibly occurred during MIS4. Our 10Be exposure dating provided LGM ages for boulders on the M1 side moraine. It is a question for further research whether these ages show the time when the glacier abandoned the moraine or only indicate an LGM erosional event affecting an older moraine. If we accept the LGM age of the maximum ice extent (M1), our 10Be exposure age data enable the calculation of a mean glacier retreat rate of 1.3 m/a for the period between M1 and M4 (21.4 to 13.6 ka). Alternatively, considering only the oldest 10Be exposure age of the M2 moraine, the M2 to M4 (20.2-13.6 ka) glacier retreat rate was slightly lower: 1.1 m/a. Our research was supported by the OTKA PD83610, by the MTA-CNRS cooperation (NKM-96/2014), by the Bolyai Scholarship, and by the 'Lendület' program of the HAS (LP2012-27/2012). The 10Be measurements were performed at the ASTER AMS national facility (CEREGE, Aix en Provence, France).
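
    As a hedged arithmetic note on the quoted retreat rates: the rate is the along-valley distance between the dated moraines divided by their age difference. The distance is not given in this abstract and is left symbolic below; plugging the quoted 1.3 m/a back in over 21.4-13.6 ka would imply roughly 10 km, which is an inference for illustration only.

    ```python
    # Illustration only: mean glacier retreat rate from two moraine exposure ages.
    def mean_retreat_rate(distance_m, age_old_ka, age_young_ka):
        return distance_m / ((age_old_ka - age_young_ka) * 1000.0)  # metres per year

    # e.g. mean_retreat_rate(10_000, 21.4, 13.6) ≈ 1.3 m/a (distance assumed, not reported here)
    ```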

Recalibration of the M_BH–σ_⋆ Relation for AGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batiste, Merida; Bentz, Misty C.; Raimundo, Sandra I.

    2017-03-20

    We present a recalibration of the M_BH–σ_⋆ relation, based on a sample of 16 reverberation-mapped galaxies with newly determined bulge stellar velocity dispersions (σ_⋆) from integral-field spectroscopy (IFS), and a sample of 32 quiescent galaxies with publicly available IFS. For both samples, σ_⋆ is determined via two different methods that are popular in the literature, and we provide fits for each sample based on both sets of σ_⋆. We find the fit to the active galactic nucleus sample is shallower than the fit to the quiescent galaxy sample, and that the slopes for each sample are in agreement with previous investigations. However, the intercepts to the quiescent galaxy relations are notably higher than those found in previous studies, due to the systematically lower σ_⋆ measurements that we obtain from IFS. We find that this may be driven, in part, by poorly constrained measurements of bulge effective radius (r_e) for the quiescent galaxy sample, which may bias the σ_⋆ measurements low. We use these quiescent galaxy parameterizations, as well as one from the literature, to recalculate the virial scaling factor f. We assess the potential biases in each measurement, and suggest f = 4.82 ± 1.67 as the best currently available estimate. However, we caution that the details of how σ_⋆ is measured can significantly affect f, and there is still much room for improvement.

  11. SU-G-JeP2-05: Dose Effects of a 1.5T Magnetic Field On Air-Tissue and Lung-Tissue Interfaces in MRI-Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xinfeng; Prior, Phillip; Chen, Guangpei

    Purpose: The purpose of the study is to investigate the dose effects of the electron return effect (ERE) at air-tissue and lung-tissue interfaces under a 1.5T transverse magnetic field (TMF). Methods: IMRT and VMAT plans for representative pancreas, lung, breast and head & neck (H&N) cases were generated following clinical dose volume (DV) criteria. The air-cavity walls, as well as the lung wall, were delineated to examine the ERE. In each case, the original plan generated without TMF is compared with the reconstructed plan (generated by recalculating the original plan in the presence of TMF) and the optimized plan (generated by a full optimization with TMF), using a variety of DV parameters, including V100%, D95% and dose heterogeneity index for PTV, and Dmax and D1cc for OARs (organs at risk) and tissue interfaces. Results: The dose recalculation under TMF showed that the presence of the 1.5 T TMF can slightly reduce V100% and D95% for PTV, with the differences being less than 4% for all but the lung case studied. The TMF results in considerable increases in Dmax and D1cc on the skin in all cases, mostly between 10-35%. The changes in Dmax and D1cc on air cavity walls are dependent upon site, geometry, and size, with changes ranging up to 15%. In general, the VMAT plans lead to much smaller dose effects from ERE compared to fixed-beam IMRT. When the TMF is considered in the plan optimization, the dose effects of the TMF at tissue interfaces are significantly reduced in most cases. Conclusion: The doses on tissue interfaces can be significantly changed by the presence of a 1.5T TMF during MR-guided RT when the TMF is not included in plan optimization. These changes can be substantially reduced or even removed during VMAT/IMRT optimization that specifically considers the TMF, without deteriorating overall plan quality.

  12. Recalculating the Economic Cost of Suicide

    ERIC Educational Resources Information Center

    Yang, Bijou; Lester, David

    2007-01-01

    These authors argue that estimates of the net economic cost of suicide should go beyond accounting for direct medical costs and indirect costs from loss of earnings by those who commit suicide. There are potential savings from (a) not having to treat the depressive and other psychiatric disorders of those who kill themselves; (b) avoidance of…

  13. 78 FR 20509 - Airworthiness Directives; Rolls-Royce plc Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-05

    ... Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Notice of proposed rulemaking...) RB211-535E4-B-37 series turbofan engines. This proposed AD was prompted by recalculating the life of.... (c) Applicability This AD applies to Rolls-Royce plc (RR) RB211-535E4-B-37 series turbofan engines...

  14. 40 CFR 65.67 - Reporting provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... submit a report included as part of the next periodic report. The report shall include the following... operator shall include a statement in the next periodic report after the process change that a process... § 65.63(f), and the recalculated value is less than the applicable value in table 1 to this subpart; or...

  15. 40 CFR 65.67 - Reporting provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... submit a report included as part of the next periodic report. The report shall include the following... operator shall include a statement in the next periodic report after the process change that a process... § 65.63(f), and the recalculated value is less than the applicable value in table 1 to this subpart; or...

  16. 40 CFR 65.67 - Reporting provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... submit a report included as part of the next periodic report. The report shall include the following... operator shall include a statement in the next periodic report after the process change that a process... § 65.63(f), and the recalculated value is less than the applicable value in table 1 to this subpart; or...

  17. 40 CFR 65.67 - Reporting provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... submit a report included as part of the next periodic report. The report shall include the following... operator shall include a statement in the next periodic report after the process change that a process... § 65.63(f), and the recalculated value is less than the applicable value in table 1 to this subpart; or...

  18. 75 FR 7616 - Mitigation of Carrier Fines for Transporting Aliens Without Proper Documents; Modification of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-22

    ... Transporting Aliens Without Proper Documents; Modification of Memorandum of Understanding and Recalculation of... States an alien who does not have a valid passport and an unexpired visa, as required under applicable law, is subject to a fine for each alien transported lacking the required documentation. Pursuant to...

  19. Launching a Projectile into Deep Space

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2004-01-01

    As part of the discussion about Newton's work in a history of mathematics course, one of the presentations calculated the amount of energy necessary to send a projectile into deep space. Afterwards, the students asked for a recalculation with two changes: First the launch under study consisted of a single stage, but the students desired to…

  20. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  1. Unified Generic Geometric-Decompositions for Consensus or Flocking Systems of Cooperative Agents and Fast Recalculations of Decomposed Subsystems Under Topology-Adjustments.

    PubMed

    Li, Wei

    2016-06-01

    This paper considers a unified geometric projection approach for: 1) decomposing a general system of cooperative agents coupled via Laplacian matrices or stochastic matrices and 2) deriving a centroid-subsystem and many shape-subsystems, where each shape-subsystem has the distinct properties (e.g., preservation of formation and stability of the original system, sufficiently simple structures and explicit formation evolution of agents, and decoupling from the centroid-subsystem) which will facilitate subsequent analyses. Particularly, this paper provides an additional merit of the approach: considering adjustments of coupling topologies of agents which frequently occur in system design (e.g., to add or remove an edge, to move an edge to a new place, and to change the weight of an edge), the corresponding new shape-subsystems can be derived by a few simple computations merely from the old shape-subsystems and without referring to the original system, which will provide further convenience for analysis and flexibility of choice. Finally, such fast recalculations of new subsystems under topology adjustments are provided with examples.
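
    A hedged sketch of the kind of centroid/shape split described above, for the special case of a symmetric Laplacian; this illustrates the general idea of projecting onto the consensus direction and its orthogonal complement, not the paper's specific construction.

    ```python
    import numpy as np

    # Decompose a linear consensus system x_dot = -L x into a centroid coordinate
    # (along the all-ones direction) and "shape" coordinates (its orthogonal
    # complement). For a symmetric Laplacian the two blocks decouple exactly.
    def decompose(L):
        n = L.shape[0]
        ones = np.ones((n, 1)) / np.sqrt(n)
        # Orthonormal basis whose first column is the normalized all-ones vector.
        Q, _ = np.linalg.qr(np.hstack([ones, np.eye(n)[:, : n - 1]]))
        P = Q[:, 1:]                          # shape directions
        centroid_block = ones.T @ L @ ones    # ≈ 0 for a Laplacian (zero row sums)
        shape_block = P.T @ L @ P             # governs the formation/shape dynamics
        return centroid_block, shape_block
    ```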

  2. Recalculation with SEACAB of the activation by spent fuel neutrons and residual dose originated in the racks replaced at Cofrentes NPP

    NASA Astrophysics Data System (ADS)

    Ortego, Pedro; Rodriguez, Alain; Töre, Candan; Compadre, José Luis de Diego; Quesada, Baltasar Rodriguez; Moreno, Raul Orive

    2017-09-01

    In order to increase the storage capacity of the East Spent Fuel Pool at the Cofrentes NPP, located in the Valencia province, Spain, the existing stainless steel storage racks were replaced by a new design of compact borated stainless steel racks allowing a 65% increase in fuel storage capacity. Calculation of the activation of the used racks was successfully performed with the MCNP4B code. Additionally, the dose rate at contact with a row of racks in standing position and behind a wall of shielding material was calculated using the MCNP4B code as well. These results allowed a preliminary definition of the bunker required for the storage of the racks. Recently the activity in the racks has been recalculated with the SEACAB system, which combines the mesh tally of MCNP codes with the activation code ACAB, applying the rigorous two-step method (R2S) developed in-house, benchmarked against FNG irradiation experiments and usually applied in fusion calculations for the ITER project.

  3. Algorithms to qualify respiratory data collected during the transport of trauma patients.

    PubMed

    Chen, Liangyou; McKenna, Thomas; Reisner, Andrew; Reifman, Jaques

    2006-09-01

    We developed a quality indexing system to numerically qualify respiratory data collected by vital-sign monitors in order to support reliable post-hoc mining of respiratory data. Each monitor-provided (reference) respiratory rate (RR(R)) is evaluated, second-by-second, to quantify the reliability of the rate with a quality index (QI(R)). The quality index is calculated from: (1) a breath identification algorithm that identifies breaths of 'typical' sizes and recalculates the respiratory rate (RR(C)); (2) an evaluation of the respiratory waveform quality (QI(W)) by assessing waveform ambiguities as they impact the calculation of respiratory rates and (3) decision rules that assign a QI(R) based on RR(R), RR(C) and QI(W). RR(C), QI(W) and QI(R) were compared to rates and quality indices independently determined by human experts, with the human measures used as the 'gold standard', for 163 randomly chosen 15 s respiratory waveform samples from our database. The RR(C) more closely matches the rates determined by human evaluation of the waveforms than does the RR(R) (difference of 3.2 ± 4.6 breaths min⁻¹ versus 14.3 ± 19.3 breaths min⁻¹, mean ± STD, p < 0.05). Higher QI(W) is found to be associated with smaller differences between calculated and human-evaluated rates (average differences of 1.7 and 8.1 breaths min⁻¹ for the best and worst QI(W), respectively). Establishment of QI(W) and QI(R), which ranges from 0 for the worst-quality data to 3 for the best, provides a succinct quantitative measure that allows for automatic and systematic selection of respiratory waveforms and rates based on their data quality.
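
    A hedged sketch of the kind of decision rule described above; the thresholds and exact logic are illustrative, not the published ones.

    ```python
    # Combine the monitor rate RR_R, the rate RR_C recomputed from detected breaths,
    # and a waveform quality score QI_W (0 = worst, 3 = best) into an overall
    # quality index QI_R. Thresholds below are placeholders for illustration.
    def quality_index(rr_monitor, rr_recomputed, qi_waveform):
        disagreement = abs(rr_monitor - rr_recomputed)   # breaths per minute
        if qi_waveform == 0 or disagreement > 10:
            return 0          # unreliable reference rate
        if disagreement <= 2 and qi_waveform == 3:
            return 3          # best quality
        if disagreement <= 5 and qi_waveform >= 2:
            return 2
        return 1
    ```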

  4. Cenozoic planktonic marine diatom diversity and correlation to climate change

    USGS Publications Warehouse

    Lazarus, David; Barron, John; Renaudie, Johan; Diver, Patrick; Türke, Andreas

    2014-01-01

    Marine planktonic diatoms export carbon to the deep ocean, playing a key role in the global carbon cycle. Although commonly thought to have diversified over the Cenozoic as global oceans cooled, only two conflicting quantitative reconstructions exist, both from the Neptune deep-sea microfossil occurrences database. Total diversity shows Cenozoic increase but is sample size biased; conventional subsampling shows little net change. We calculate diversity from a separately compiled new diatom species range catalog, and recalculate Neptune subsampled-in-bin diversity using new methods to correct for increasing Cenozoic geographic endemism and decreasing Cenozoic evenness. We find coherent, substantial Cenozoic diversification in both datasets. Many living cold water species, including species important for export productivity, originate only in the latest Miocene or younger. We make a first quantitative comparison of diatom diversity to the global Cenozoic benthic δ18O (climate) and carbon cycle records (δ13C, and 20-0 Ma pCO2). Warmer climates are strongly correlated with lower diatom diversity (raw: rho = .92, p < .001), even when pCO2 was only moderately higher than today. Diversity is strongly correlated to both δ13C and pCO2 over the last 15 my (for both: r>.9, detrended r>.6, all p<.001), but only weakly over the earlier Cenozoic, suggesting increasingly strong linkage of diatom and climate evolution in the Neogene. Our results suggest that many living marine planktonic diatom species may be at risk of extinction in future warm oceans, with an unknown but potentially substantial negative impact on the ocean biologic pump and oceanic carbon sequestration. We cannot however extrapolate our my-scale correlations with generic climate proxies to anthropogenic time-scales of warming without additional species-specific information on proximate ecologic controls.
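
    The raw-versus-detrended rank-correlation comparison reported above can be sketched as follows, with synthetic series standing in for the diversity and δ18O records.

      import numpy as np
      from scipy.stats import spearmanr

      # Sketch of the raw vs. detrended correlation comparison described above,
      # using synthetic series in place of the diatom-diversity and delta-18O data.
      rng = np.random.default_rng(0)
      age_ma = np.linspace(66, 0, 67)                               # Cenozoic time steps, Ma
      climate = 0.05 * age_ma + rng.normal(0, 0.3, age_ma.size)     # stand-in d18O
      diversity = 120 - 1.2 * age_ma + rng.normal(0, 8, age_ma.size)

      rho_raw, p_raw = spearmanr(climate, diversity)

      def detrend(y, x):
          """Remove a linear trend in x from y (ordinary least squares)."""
          slope, intercept = np.polyfit(x, y, 1)
          return y - (slope * x + intercept)

      rho_dt, p_dt = spearmanr(detrend(climate, age_ma), detrend(diversity, age_ma))
      print(f"raw rho = {rho_raw:.2f} (p = {p_raw:.3g}); "
            f"detrended rho = {rho_dt:.2f} (p = {p_dt:.3g})")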

  5. Evaluation of a risk-based environmental hot spot delineation algorithm.

    PubMed

    Sinha, Parikhit; Lambert, Michael B; Schew, William A

    2007-10-22

    Following remedial investigations of hazardous waste sites, remedial strategies may be developed that target the removal of "hot spots," localized areas of elevated contamination. For a given exposure area, a hot spot may be defined as a sub-area that causes risks for the whole exposure area to be unacceptable. The converse of this statement may also apply: when a hot spot is removed from within an exposure area, risks for the exposure area may drop below unacceptable thresholds. The latter is the motivation for a risk-based approach to hot spot delineation, which was evaluated using Monte Carlo simulation. Random samples taken from a virtual site ("true site") were used to create an interpolated site. The latter was gridded and concentrations from the center of each grid box were used to calculate 95% upper confidence limits on the mean site contaminant concentration and corresponding hazard quotients for a potential receptor. Grid cells with the highest concentrations were removed and hazard quotients were recalculated until the site hazard quotient dropped below the threshold of 1. The grid cells removed in this way define the spatial extent of the hot spot. For each of the 100,000 Monte Carlo iterations, the delineated hot spot was compared to the hot spot in the "true site." On average, the algorithm was able to delineate hot spots that were collocated with and equal to or greater in size than the "true hot spot." When delineated hot spots were mapped onto the "true site," setting contaminant concentrations in the mapped area to zero, the hazard quotients for these "remediated true sites" were on average within 5% of the acceptable threshold of 1.
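
    A minimal sketch of the delineation loop described above: grid cells with the highest concentrations are removed one at a time until the exposure-area hazard quotient drops below 1. The 95% UCL used here is a simple normal-approximation UCL and the reference dose is invented; neither is necessarily the paper's choice.

      import numpy as np

      def delineate_hot_spot(grid_conc, reference_dose, intake_factor=1.0):
          """Return a boolean mask of grid cells removed to bring HQ below 1."""
          conc = grid_conc.astype(float)
          removed = np.zeros(conc.shape, dtype=bool)

          def hazard_quotient():
              active = conc[~removed]
              ucl95 = active.mean() + 1.645 * active.std(ddof=1) / np.sqrt(active.size)
              return intake_factor * ucl95 / reference_dose

          # Remove the highest-concentration cells one at a time until HQ < 1
          while (~removed).sum() > 1 and hazard_quotient() >= 1.0:
              masked = np.where(removed, -np.inf, conc)
              removed[np.unravel_index(np.argmax(masked), conc.shape)] = True
          return removed

      rng = np.random.default_rng(1)
      site = rng.lognormal(mean=0.0, sigma=0.5, size=(10, 10))
      site[2:4, 6:8] *= 20.0                    # embed a localized hot spot
      mask = delineate_hot_spot(site, reference_dose=2.0)
      print("cells flagged as hot spot:", int(mask.sum()))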

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casey, K; Wong, P; Tung, S

    Purpose: To quantify the dosimetric impact of interfractional shoulder motion on targets in the low neck for head and neck patients treated with volume modulated arc therapy (VMAT). Methods: Three patients with head and neck cancer were selected. All three required treatment to nodal regions in the low neck in addition to the primary tumor. The patients were immobilized during simulation and treatment with a custom thermoplastic mask covering the head and shoulders. One VMAT plan was created for each patient utilizing two full 360° arcs. A second plan was created consisting of two superior VMAT arcs matched to an inferior static AP supraclavicular field. A CT-on-rails alignment verification was performed weekly during each patient's treatment course. The weekly CT images were registered to the simulation CT and the target contours were deformed and applied to the weekly CT. The two VMAT plans were copied to the weekly CT datasets and recalculated to obtain the dose to the low neck contours. Results: The average observed shoulder position shift in any single dimension relative to simulation was 2.5 mm. The maximum shoulder shift observed in a single dimension was 25.7 mm. Low neck target mean doses, normalized to simulation and averaged across all weekly recalculations were 0.996, 0.991, and 1.033 (Full VMAT plan) and 0.986, 0.995, and 0.990 (Half-Beam VMAT plan) for the three patients, respectively. The maximum observed deviation in target mean dose for any individual weekly recalculation was 6.5%, occurring with the Full VMAT plan for Patient 3. Conclusion: Interfractional variation in dose to low neck nodal regions was quantified for three head and neck patients treated with VMAT. Mean dose was 3.3% higher than planned for one patient using a Full VMAT plan. A Half-Beam technique is likely a safer choice when treating the supraclavicular region with VMAT.

  7. SU-E-T-479: IMRT Plan Recalculation in Patient Based On Dynalog Data and the Effect of a Single Failing MLC Motor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcos, M; Mitrou, E

    2015-06-15

    Purpose: Using Linac dynamic logs (Dynalogs) we evaluate the impact of a single failing MLC motor on the deliverability of an IMRT plan by assessing the recalculated dose volume histograms (DVHs) taking the delivered MLC positions and beam hold-offs into consideration. Methods: This is a retrospective study based on a deteriorating MLC motor (leaf 36B) which was observed to be failing via Dynalog analysis. To investigate further, Eclipse-importable MLC files were generated from Dynalogs to recalculate the actual delivered dose and to assess the clinical impact through DVHs. All deliveries were performed on a Varian 21EX linear accelerator equipped with Millennium-120 MLC. The analysis of Dynalog files and subsequent conversion to Eclipse-importable MLC files were all performed by in-house programming in Python. Effects on plan DVH are presented in the following section for a particular brain-IMRT plan which was delivered with a failing MLC motor that was then replaced. Results: Global max dose increased by 13.5%, max dose to the brainstem PRV increased by 8.2%, max dose to the optic chiasm increased by 7.6%, max dose to the optic nerve increased by 8.8% and the mean dose to the PTV increased by 7.9% when comparing the original plan to the fraction with the failing MLC motor. The dose increased because the failure was on the B-bank, which is the lagging side in a sliding window delivery; any failure on this side causes over-irradiation as the B-bank leaves struggle to keep the window from growing. Conclusion: Our findings suggest that a single failing MLC motor may jeopardize the entire delivery. This may be due to the bad MLC motor drawing too much current, causing all MLCs on the same bank to underperform. This hypothesis will be investigated in a future study.

  8. Should early amputation impact initial fluid therapy algorithms in burns resuscitation? A retrospective analysis using 3D modelling.

    PubMed

    Staruch, Robert M T; Beverly, A; Lewis, D; Wilson, Y; Martin, N

    2017-02-01

    While the epidemiology of amputations in patients with burns has been investigated previously, the effect of an amputation on burn size and its impact on fluid management have not been considered in the literature. Fluid resuscitation volumes are based on the percentage of the total body surface area (%TBSA) burned calculated during the primary survey. There is currently no consensus as to whether the fluid volumes should be recalculated after an amputation to compensate for the new body surface area. The aim of this study was to model the impact of an amputation on burn size and predicted fluid requirement. A retrospective search was performed of the database at the Queen Elizabeth Hospital Birmingham Regional Burns Centre to identify all patients who had required an early amputation as a result of their burn injury. The search identified 10 patients over a 3-year period. Burn injuries were then mapped using 3D modelling software. BurnCase3D is a computer program that allows accurate plotting of burn injuries on a digital mannequin adjusted for height and weight. Theoretical fluid requirements were then calculated using the Parkland formula for the first 24 h, and Herndon formula for the second 24 h, taking into consideration the effects of the amputation on residual burn size. This study demonstrated that amputation can have an unpredictable effect on burn size that results in a significant deviation from predicted fluid resuscitation volumes. This discrepancy in fluid estimation may cause iatrogenic complications due to over-resuscitation in burn-injured casualties. Combining a more accurate estimation of postamputation burn size with goal-directed fluid therapy during the resuscitation phase should enable burn care teams to optimise patient outcomes.
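
    A worked illustration of why the question matters: the standard Parkland formula (4 mL x weight in kg x %TBSA over the first 24 h, half in the first 8 h) is recomputed after an amputation removes both burned and unburned skin. The surface-area numbers below are invented and the adjustment shown is a simple area ratio, not the paper's 3D-model method.

      # Illustrative recalculation of Parkland-formula fluid volumes when an
      # amputation changes both the burned area and the total body surface area.
      # The post-amputation areas below are made up, not the paper's results.

      def parkland_24h_ml(weight_kg, tbsa_percent):
          return 4.0 * weight_kg * tbsa_percent

      def tbsa_after_amputation(burned_area_m2, total_bsa_m2,
                                amputated_area_m2, amputated_burned_m2):
          """Recompute %TBSA when an amputated segment removes both burned and
          unburned skin from the calculation."""
          new_burned = burned_area_m2 - amputated_burned_m2
          new_total = total_bsa_m2 - amputated_area_m2
          return 100.0 * new_burned / new_total

      weight = 80.0                       # kg
      tbsa_initial = 100.0 * 0.72 / 1.9   # 0.72 m2 burned of 1.9 m2 BSA, ~38 %TBSA
      print(f"initial 24 h volume: {parkland_24h_ml(weight, tbsa_initial):.0f} mL")

      # Below-knee amputation removing 0.12 m2 of skin, 0.10 m2 of which was burned
      tbsa_post = tbsa_after_amputation(0.72, 1.9, 0.12, 0.10)
      print(f"post-amputation %TBSA: {tbsa_post:.1f}")
      print(f"recalculated 24 h volume: {parkland_24h_ml(weight, tbsa_post):.0f} mL")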

  9. 40 CFR 63.7540 - How do I demonstrate continuous compliance with the emission limitations, fuel specifications and...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... in lower fuel input of chlorine and mercury than the maximum values calculated during the last... chlorine concentration for any new fuel type in units of pounds per million Btu, based on supplier data or... content of chlorine. (iii) Recalculate the hydrogen chloride emission rate from your boiler or process...

  10. 75 FR 5043 - Interim Procedure for Patentees To Request a Recalculation of the Patent Term Adjustment To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-01

    ... under 37 CFR 1.705 in accordance with the Wyeth decision. This notice also provides information concerning the Patent Application Information Retrieval (PAIR) screen that displays the patent term... Wyeth is filed within 180 days of the day the patent was granted. FOR FURTHER INFORMATION CONTACT: The...

  11. 40 CFR 1045.730 - What ABT reports must I send to EPA?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... volumes for the model year with a point of retail sale in the United States, as described in § 1045.701(j...) Show that your net balance of emission credits from all your participating families in each averaging... errors mistakenly decreased your balance of emission credits, you may correct the errors and recalculate...

  12. 40 CFR 65.63 - Performance and group status change requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... material or TOC by at least 98 weight-percent or to a concentration of less than 20 parts per million by... recalculate the TRE index value, flow, or TOC or organic hazardous air pollutant (HAP) concentration according.... Engineering assessments shall meet the specifications in § 65.64(i). (2) Concentration. The TOC or organic HAP...

  13. 75 FR 31422 - Certain New Pneumatic Off-The-Road Tires from the People's Republic of China: Notice of Decision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-03

    ... materials certain inputs used by Xugong in the production of subject merchandise. In April 2009, the... the fifteen raw materials reported by Xugong as indirect materials. On August 4, 2009, the CIT... indirect material, to reopen the record as appropriate, and to recalculate the margin accordingly. See...

  14. 78 FR 34337 - Stainless Steel Bar From India: Final Results of Antidumping Duty Administrative Review; 2011-2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    ... Preliminary Results as non-affiliated and recalculated Ambica's net financial expense ratio, excluding the... Disclosure Pursuant to 19 CFR 351.224(b), we intend to disclose calculation memoranda used in our analysis to... description is dispositive. Analysis of Comments Received All issues raised in the case briefs are addressed...

  15. 20 CFR 404.1059 - Deemed wages for certain individuals interned during World War II.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the amount of, any lump-sum death payment in the case of a death after December 1972, and for... for a monthly benefit, a recalculation of benefits by reason of this section, or a lump-sum death...) The highest actual hourly rate of pay received for any employment before internment, multiplied by 40...

  16. 20 CFR 404.1059 - Deemed wages for certain individuals interned during World War II.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... the amount of, any lump-sum death payment in the case of a death after December 1972, and for... for a monthly benefit, a recalculation of benefits by reason of this section, or a lump-sum death...) The highest actual hourly rate of pay received for any employment before internment, multiplied by 40...

  17. 20 CFR 404.1059 - Deemed wages for certain individuals interned during World War II.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... the amount of, any lump-sum death payment in the case of a death after December 1972, and for... for a monthly benefit, a recalculation of benefits by reason of this section, or a lump-sum death...) The highest actual hourly rate of pay received for any employment before internment, multiplied by 40...

  18. 20 CFR 404.1059 - Deemed wages for certain individuals interned during World War II.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... the amount of, any lump-sum death payment in the case of a death after December 1972, and for... for a monthly benefit, a recalculation of benefits by reason of this section, or a lump-sum death...) The highest actual hourly rate of pay received for any employment before internment, multiplied by 40...

  19. 20 CFR 404.1059 - Deemed wages for certain individuals interned during World War II.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the amount of, any lump-sum death payment in the case of a death after December 1972, and for... for a monthly benefit, a recalculation of benefits by reason of this section, or a lump-sum death...) The highest actual hourly rate of pay received for any employment before internment, multiplied by 40...

  20. The Big Picture in Bilingual Education: A Meta-Analysis Corrected for Gersten's Coding Error

    ERIC Educational Resources Information Center

    Rolstad, Kellie; Mahoney, Kate; Glass, Gene V.

    2008-01-01

    In light of a recent revelation that Gersten (1985) included erroneous information on one of two programs for English Language Learners (ELLs), the authors re-calculate results of their earlier meta-analysis of program effectiveness studies for ELLs in which Gersten's studies had behaved as outliers (Rolstad, Mahoney & Glass, 2005). The correction…

  1. 75 FR 45097 - Certain Magnesia Carbon Bricks from Mexico: Notice of Final Determination of Sales at Less Than...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-02

    ... determination of sales at LTFV in the antidumping duty investigation of certain magnesia carbon bricks from... respondent in this investigation, RHI- Refmex S.A. de C.V. (Refmex) in which the Department applied a quarterly costing methodology to recalculate the cost of production (COP). See Memorandum entitled ``Cost of...

  2. STELLAR VELOCITY DISPERSION MEASUREMENTS IN HIGH-LUMINOSITY QUASAR HOSTS AND IMPLICATIONS FOR THE AGN BLACK HOLE MASS SCALE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grier, C. J.; Martini, P.; Peterson, B. M.

    We present new stellar velocity dispersion measurements for four luminous quasars with the Near-Infrared Integral Field Spectrometer instrument and the ALTAIR laser guide star adaptive optics system on the Gemini North 8 m telescope. Stellar velocity dispersion measurements and measurements of the supermassive black hole (BH) masses in luminous quasars are necessary to investigate the coevolution of BHs and galaxies, trace the details of accretion, and probe the nature of feedback. We find that higher-luminosity quasars with higher-mass BHs are not offset with respect to the M_BH-σ* relation exhibited by lower-luminosity active galactic nuclei (AGNs) with lower-mass BHs, nor do we see correlations with galaxy morphology. As part of this analysis, we have recalculated the virial products for the entire sample of reverberation-mapped AGNs and used these data to redetermine the mean virial factor ⟨f⟩ that places the reverberation data on the quiescent M_BH-σ* relation. With our updated measurements and new additions to the AGN sample, we obtain ⟨f⟩ = 4.31 ± 1.05, which is slightly lower than, but consistent with, most previous determinations.
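
    The virial bookkeeping behind this recalibration uses the standard reverberation-mapping relation M_BH = f x VP with VP = c·τ·σ²/G. The sketch below applies it with the mean factor quoted above to invented lag and line-width values, not the paper's measurements.

      import numpy as np

      # Sketch of the virial-product / virial-factor bookkeeping described above.
      # M_BH = f * VP with VP = c * tau * sigma_line^2 / G; sample values invented.

      G = 6.674e-11            # m^3 kg^-1 s^-2
      C = 2.998e8              # m s^-1
      M_SUN = 1.989e30         # kg
      DAY = 86400.0            # s

      def virial_product_msun(tau_days, sigma_line_km_s):
          """Virial product c*tau*sigma^2/G in solar masses."""
          return C * tau_days * DAY * (sigma_line_km_s * 1e3) ** 2 / G / M_SUN

      # Hypothetical reverberation lags (days) and line widths (km/s)
      lags = np.array([20.0, 6.5, 80.0])
      sigmas = np.array([1800.0, 2900.0, 1500.0])
      vp = virial_product_msun(lags, sigmas)

      f_mean = 4.31            # mean virial factor quoted in the abstract
      print("virial products [Msun]:", np.round(vp / 1e6, 2), "x 10^6")
      print("BH masses [Msun]      :", np.round(f_mean * vp / 1e6, 2), "x 10^6")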

  3. P-T Path and Nd-isotopes of Garnet Pyroxenite Xenoliths From Salt Lake Crater, Oahu

    NASA Astrophysics Data System (ADS)

    Ichitsubo, N.; Takahashi, E.; Clague, D. A.

    2001-12-01

    Abundant garnet pyroxenite and spinel lherzolite xenoliths are found in Salt Lake Crater (SLC) in Oahu, Hawaii [Jackson and Wright, 1970]. The SLC pyroxenite suite xenoliths (olivine-poor type) have complex exsolution textures that were probably formed during slow cooling. In this study, we used digital image software to obtain modal data of exsolved phases in the host pyroxene using backscattered electron images (BEIs). The abundances of the exsolved phases were multiplied by the phase compositions determined by electron probe micro-analyzer (EPMA) to reconstruct pyroxene compositions prior to exsolution. In order to evaluate the error in this calculation, we recalculated the reconstructed pyroxene compositions using different pyroxene pairs. Reconstructed clinopyroxenes in each sample show almost no variation (MgO, CaO +/-1 wt%, FeO +/-0.5 wt% and the other oxides ~+/-0.1 wt%). Reconstructed orthopyroxenes are more variable in MgO, CaO (+/-2 wt%) and FeO (+/-1 wt%) than reconstructed clinopyroxenes, but the other oxides show only limited variation (~+/-0.5 wt%). These compositions were used to calculate igneous stage (magmatic) P-T conditions based on the geothermometers and geobarometers of Wells [1977] and Brey and Kohler [1990]. The following assumptions are made: (1) the reconstructed pyroxene compositions are the final record of the primary igneous stage, and (2) cores of the largest garnet grains in each sample record the primary igneous stage composition. The recalculation using the different pairs of reconstructed pyroxenes shows the uncertainty to be +/-30 °C and 0.1 GPa. These appear to be small compared to the large intrinsic errors of the geothermometers and geobarometers (+/-20-35 °C and +/-0.3-0.5 GPa). Estimated P-T conditions for the garnet pyroxenites are 1.5-2.2 GPa, 1000-1100 °C in the final reequilibration stage and 2.2-2.6 GPa (at maximum), 1150-1300 °C (at minimum) in the igneous stage. All samples show ca. 200 °C of cooling and 0.5 GPa of decompression. This implies that the garnet pyroxenites cooled ca. 200 °C to develop the observed complex exsolution and may have risen from about 70-80 km to 50-65 km depth. Glass pockets and fine minerals (olivine, pyroxene, spinel) occur in the SLC garnet pyroxenite xenoliths. Amphibole and phlogopite, which may have crystallized by metasomatism, are common accessory minerals in them. In order to study the nature of the metasomatism revealed by the glass pockets and the fine aggregates of spinel and pyroxene, an Nd-isotope study of the SLC xenoliths is under way.
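
    The reconstruction step (weighting each exsolved phase's composition by its modal abundance) can be sketched as follows; the phase analyses and fractions below are invented, and the conversion from modal (volume) to weight proportions via phase densities is omitted for brevity.

      import numpy as np

      # Sketch of reconstructing a pre-exsolution pyroxene composition as the
      # modal-proportion-weighted average of the host and exsolved phases.
      # Modal fractions and oxide analyses below are invented; fractions are
      # assumed to already be weight proportions.

      oxides = ["SiO2", "MgO", "FeO", "CaO", "Al2O3"]
      phases = {                      # wt% oxide analyses of host + lamellae
          "host_cpx":        np.array([52.0, 16.5, 4.8, 21.0, 4.0]),
          "opx_lamellae":    np.array([55.0, 31.0, 7.5, 1.5, 3.0]),
          "garnet_lamellae": np.array([41.5, 18.0, 9.5, 5.5, 23.0]),
      }
      weight_fraction = {"host_cpx": 0.80, "opx_lamellae": 0.15,
                         "garnet_lamellae": 0.05}

      reconstructed = sum(weight_fraction[p] * phases[p] for p in phases)
      for name, value in zip(oxides, reconstructed):
          print(f"{name:6s} {value:5.1f} wt%")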

  4. US forest carbon calculation tool: forest-land carbon stocks and net annual stock change

    Treesearch

    James E. Smith; Linda S. Heath; Michael C. Nichols

    2007-01-01

    The Carbon Calculation Tool 4.0, CCTv40.exe, is a computer application that reads publicly available forest inventory data collected by the U.S. Forest Service's Forest Inventory and Analysis Program (FIA) and generates state-level annualized estimates of carbon stocks on forest land based on FORCARB2 estimators. Estimates can be recalculated as...

  5. 40 CFR Appendix R to Part 50 - Interpretation of the National Ambient Air Quality Standards for Lead

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... determine the design value. (B) The “below NAAQS level” test is as follows: Data substitution will be... the recalculated (“test”) result including the high values, shall be used to determine the design... (local standard time), that are used in NAAQS computations. Design value is the site-level metric (i.e...

  6. 40 CFR Appendix R to Part 50 - Interpretation of the National Ambient Air Quality Standards for Lead

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... determine the design value. (B) The “below NAAQS level” test is as follows: Data substitution will be... the recalculated (“test”) result including the high values, shall be used to determine the design... (local standard time), that are used in NAAQS computations. Design value is the site-level metric (i.e...

  7. 40 CFR Appendix R to Part 50 - Interpretation of the National Ambient Air Quality Standards for Lead

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... determine the design value. (B) The “below NAAQS level” test is as follows: Data substitution will be... the recalculated (“test”) result including the high values, shall be used to determine the design... (local standard time), that are used in NAAQS computations. Design value is the site-level metric (i.e...

  8. 40 CFR Appendix R to Part 50 - Interpretation of the National Ambient Air Quality Standards for Lead

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... determine the design value. (B) The “below NAAQS level” test is as follows: Data substitution will be... the recalculated (“test”) result including the high values, shall be used to determine the design... (local standard time), that are used in NAAQS computations. Design value is the site-level metric (i.e...

  9. 40 CFR Appendix R to Part 50 - Interpretation of the National Ambient Air Quality Standards for Lead

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determine the design value. (B) The “below NAAQS level” test is as follows: Data substitution will be... the recalculated (“test”) result including the high values, shall be used to determine the design... (local standard time), that are used in NAAQS computations. Design value is the site-level metric (i.e...

  10. 75 FR 15412 - Silicon Metal From the People's Republic of China: Notice of Amended Final Results of New Shipper...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ... reviews to apply the recalculated surrogate value for the by-product silica fume in the Department's... specifically to the by-product silica fume. See Remand Order at 14. \\1\\ Respondents referenced here are (1... Redetermination of the Silica Fume By-Product Valuation, Remand for Antidumping Duty New Shipper Review of Silicon...

  11. Dose estimates for the 1104 m APS storage ring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moe, H.J.

    1989-06-01

    The estimated dose equivalent rates outside the shielded storage ring, and the estimated annual dose equivalent to members of the public due to direct radiation and skyshine from the ring, have been recalculated. The previous estimates found in LS-84 (MOE 87) and cited in the 1987 Conceptual Design Report of the APS (ANL 87) required revision because of changes in the ring circumference and in the proposed location of the ring with respect to the nearest site boundary. The values assumed for the neutron quality factors were also overestimated (by a factor of 2) in the previous computation, and the correct values have been used for this estimate. The methodology used to compute dose and dose rate from the storage ring is the same as that used in LS-90 (MOE 87a). The calculations assumed 80 cm thick walls of ordinary concrete (or the shielding equivalent of this) and a roof thickness of 1 meter of ordinary concrete. The circumference of the ring was increased to 1,104 m, and the closest distance to the boundary was taken as 140 m. The recalculation of the skyshine component used the same methodology as that used in LS-84.

  12. The First Find of Mannardite in Russia

    NASA Astrophysics Data System (ADS)

    Reznitsky, L. Z.; Sklyarov, E. V.; Ushchapovskaya, Z. F.; Barash, I. G.

    2018-03-01

    Mannardite was found in a type of Cr-V-bearing metamorphic rock of the Slyudyanka complex (South Baikal region). The X-ray data of the mineral are recalculated for three scenarios taking into account possible variations of the mannardite structure. The mean chemical composition is as follows (14 analyses, wt %): 0.11 SiO2, 52.08 TiO2, 6.19 VO2, 13.51 V2O3, 5.50 Cr2O3, 0.24 Al2O3, 0.16 Fe2O3, 0.05 MgO, 20.09 BaO, 2.09 H2O (the H2O, VO2, and V2O3 contents are recalculated). The formula of the mean composition is (Ba1.06(H2O)0.94)(Ti5.27Si0.21V4+0.61V3+1.45Cr0.59Fe0.02Mg0.01)O16. Mannardite is characterized by the presence of V in different valence states. The mineral can be hydrous, with molecular H2O or hydroxyl ions in the structural tunnels, or anhydrous. Mannardite can be considered an indicator of the hydroxyl or oxygen regime of petrogenetic processes.
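
    The cation proportions quoted above follow from a standard recalculation of oxide wt% to atoms per 16 framework oxygens. The sketch below reproduces that arithmetic; the VO2/V2O3 split is taken as reported, tunnel H2O is kept outside the 16-oxygen normalization, and small differences from the printed formula may reflect rounding or transcription in this record.

      # Sketch of a standard cations-per-16-oxygens recalculation of the kind used
      # to derive the mannardite formula quoted above; illustrative, not the
      # authors' exact scheme.

      MOLAR_MASS = {"SiO2": 60.08, "TiO2": 79.87, "VO2": 82.94, "V2O3": 149.88,
                    "Cr2O3": 151.99, "Al2O3": 101.96, "Fe2O3": 159.69, "MgO": 40.30,
                    "BaO": 153.33, "H2O": 18.02}
      CAT_OX = {"SiO2": (1, 2), "TiO2": (1, 2), "VO2": (1, 2), "V2O3": (2, 3),
                "Cr2O3": (2, 3), "Al2O3": (2, 3), "Fe2O3": (2, 3), "MgO": (1, 1),
                "BaO": (1, 1)}             # (cations, oxygens) per oxide formula

      wt = {"SiO2": 0.11, "TiO2": 52.08, "VO2": 6.19, "V2O3": 13.51, "Cr2O3": 5.50,
            "Al2O3": 0.24, "Fe2O3": 0.16, "MgO": 0.05, "BaO": 20.09, "H2O": 2.09}

      moles = {ox: w / MOLAR_MASS[ox] for ox, w in wt.items()}
      framework_oxygen = sum(moles[ox] * CAT_OX[ox][1] for ox in CAT_OX)
      scale = 16.0 / framework_oxygen    # atoms-per-formula-unit (apfu) factor

      for ox in CAT_OX:
          print(f"{ox:6s} {moles[ox] * CAT_OX[ox][0] * scale:5.2f} apfu")
      print(f"H2O    {moles['H2O'] * scale:5.2f} pfu (tunnel)")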

  13. Semirational rogue waves for the three-coupled fourth-order nonlinear Schrödinger equations in an alpha helical protein

    NASA Astrophysics Data System (ADS)

    Du, Zhong; Tian, Bo; Qu, Qi-Xing; Chai, Han-Peng; Wu, Xiao-Yu

    2017-12-01

    Investigated in this paper are the three-coupled fourth-order nonlinear Schrödinger equations, which describe the dynamics of an alpha helical protein with interspine coupling at the higher order. We show that the representation of the Lax pair with Expressions (42)-(45) in Ref. [25] is not correct, because the three-coupled fourth-order nonlinear Schrödinger equations cannot be reproduced from the Lax pair with Expressions (42)-(45) in Ref. [25] through the compatibility condition. Therefore, we recalculate the Lax pair. Based on the recalculated Lax pair, we construct the generalized Darboux transformation, and derive the first- and second-order semirational solutions. Through such solutions, dark-bright-bright solitons, breather-breather-bright solitons, breather solitons and rogue waves are analyzed. It is found that the rogue waves in the three components are mutually proportional. Moreover, three types of semirational rogue waves consisting of rogue waves and solitons are presented: (1) consisting of the first-order rogue wave and one soliton; (2) consisting of the first-order rogue wave and two solitons; (3) consisting of the second-order rogue wave and two solitons.

  14. Recalculated Areas for Maximum Ice Extents of the Baltic Sea During Winters 1971-2008

    NASA Astrophysics Data System (ADS)

    Niskanen, T.; Vainio, J.; Eriksson, P.; Heiler, I.

    2009-04-01

    Operational ice charting of the Baltic Sea in Finland started in 1915. Until 1993 all ice charts were hand-drawn paper copies; in that year the ice charting software IceMap was introduced, and since then all ice charts have been produced digitally. Since 1996 IceMap has had an option that lets the user calculate the areas of individual ice polygons in the chart. Using this option the area of the maximum ice extent can be obtained fully automatically. Before this option was introduced (and in full operation) all maximum extent areas were calculated manually with a planimeter. During recent years it has become clear that some areas calculated before 1996 do not give the same result as IceMap. Differences can arise, for example, from inaccuracies in old coastlines, map projections, the calibration of the planimeter, or the interpretation of old ice-area symbols. Old ice charts since winter 1970-71 have now been scanned, rectified and re-drawn, and new maximum ice extent areas for the Baltic Sea have been recalculated. With these new tools it can be concluded that in some cases clear differences are found.

  15. SU-E-T-397: Evaluation of Planned Dose Distributions by Monte Carlo (0.5%) and Ray Tracing Algorithm for the Spinal Tumors with CyberKnife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, H; Brindle, J; Hepel, J

    2015-06-15

    Purpose: To analyze and evaluate dose distributions calculated with the Ray Tracing (RT) and Monte Carlo (MC, 0.5% uncertainty) algorithms for the spinal cord (a critical structure), the gross target volume and the planning target volume. Methods: Twenty-four spinal tumor patients were treated with stereotactic body radiotherapy (SBRT) by CyberKnife in 2013 and 2014. The MC algorithm with 0.5% uncertainty was used to recalculate the dose distribution for the treatment plan of each patient using the same beams, beam directions, and monitor units (MUs). Results: The prescription doses are uniformly larger for the MC plans than for the RT plans, except in one case. Dose differences of up to a factor of 1.19 for the 0.25 cc threshold volume and 1.14 for the 1.2 cc threshold volume are observed for the spinal cord. Conclusion: The MC recalculated dose distributions are larger than the original calculations for the spinal tumor cases. Based on the accuracy of the MC calculations, more radiation dose might be delivered to the tumor targets and spinal cords with the increased prescription dose.

  16. Activity measurements of the radionuclide 124Sb by the LNE-LNHB, France for the ongoing comparison BIPM.RI(II)-K1.Sb-124

    NASA Astrophysics Data System (ADS)

    Michotte, C.; Ratel, G.; Moune, M.; Bobin, C.

    2011-01-01

    In 2007, the Laboratoire national de métrologie et d'essais-Laboratoire national Henri Becquerel (LNE-LNHB), France submitted a sample of known activity of 124Sb to the International Reference System (SIR) for activity comparison at the Bureau International des Poids et Mesures (BIPM). The activity was about 5.3 MBq. The key comparison reference value (KCRV) has been recalculated to include this new value and the degrees of equivalence between each equivalent activity for the three participants measured in the SIR and the KCRV are presented in a table and graphically. The final report has been peer-reviewed and approved for publication by the CCRI Section II, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
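
    A generic sketch of recalculating a KCRV as an inverse-variance weighted mean and of the degrees of equivalence after a new result is added is given below; this is a textbook scheme with invented activities, not necessarily the exact SIR/CCRI procedure.

      import numpy as np

      # Sketch of a KCRV recalculation: inverse-variance weighted mean plus
      # degrees of equivalence d_i. Equivalent-activity values are invented.

      x = np.array([5.28e6, 5.33e6, 5.31e6])      # equivalent activities A_e (Bq)
      u = np.array([0.02e6, 0.03e6, 0.025e6])     # standard uncertainties (Bq)

      w = 1.0 / u**2
      kcrv = np.sum(w * x) / np.sum(w)
      u_kcrv = 1.0 / np.sqrt(np.sum(w))

      d = x - kcrv
      # For results contributing to the weighted-mean KCRV, a common choice for
      # the variance of the degree of equivalence is u(x_i)^2 - u(KCRV)^2.
      u_d = np.sqrt(u**2 - u_kcrv**2)

      for i, (di, udi) in enumerate(zip(d, u_d), start=1):
          print(f"lab {i}: d = {di/1e3:+6.1f} kBq, U(d) = {2*udi/1e3:5.1f} kBq (k=2)")
      print(f"KCRV = {kcrv/1e6:.4f} MBq, u(KCRV) = {u_kcrv/1e3:.1f} kBq")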

  17. Statistical sensitivity analysis of a simple nuclear waste repository model

    NASA Astrophysics Data System (ADS)

    Ronen, Y.; Lucius, J. L.; Blow, E. M.

    1980-06-01

    This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near field temperature distribution in a single level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The parameters with the most important sensitivity coefficients for the sample problem were the thermal loading, the distance between the spent fuel canisters and their radius. Other important parameters were those related to salt properties at a point of interest in the repository.

  18. In-vitro terahertz spectroscopy of rat skin under the action of dehydrating agents

    NASA Astrophysics Data System (ADS)

    Kolesnikov, Aleksandr S.; Kolesnikova, Ekaterina A.; Tuchina, Daria K.; Terentyuk, Artem G.; Nazarov, Maxim; Skaptsov, Alexander A.; Shkurinov, Alexander P.; Tuchin, Valery V.

    2014-01-01

    In this paper we present the results of a study of rat skin and rat subcutaneous tumor under the action of dehydrating agents in the terahertz (THz) range (15-30 THz). Frustrated Total Internal Reflection (FTIR) spectra were obtained with an infrared Fourier spectrometer (Nicolet 6700) and then recalculated into transmittance spectra with the Omnic software. Experiments were carried out in vitro with healthy skin tissue and skin tissue bearing a xenografted tumor. The dehydrating agents used were 100% glycerol, a 40% aqueous glucose solution, PEG-600, and propylene glycol. To determine the effect of the optical clearing agent (OCA), the alterations of the terahertz transmittance of the samples were analyzed. The results show that PEG-600 and the 40% aqueous glucose solution are the most effective dehydrating agents. The transmittance of healthy skin after PEG-600 application increased by approximately 6% and the transmittance of tumor tissue after PEG-600 and 40% aqueous glucose solution application increased by approximately 8%. The obtained data can be useful for further application of terahertz radiation to tumor diagnostics.

  19. Quantitative studies on the mating system of jute (Corchorus olitorius L.).

    PubMed

    Basak, S L; Gupta, S

    1972-01-01

    More than 100,000 individuals of C. olitorius were scored for selfing versus outcrossing in various populations, at several locations, over a number of years and seasons. Different marker loci, such as A(d)/a(0), Sh/sh, Cr/cr and Pl/pl, were used to determine the male gametes which had effected fertilization. The results showed that the frequency of outcrossing was extremely variable among loci, crosses and samples within a single locus. The outcrossing parameter, α, was found to differ with years, locations and seasons within years. It was also found that outcrossing, in general, was nonrandom. Nonrandomness was also independent of flowering dates. The amount of outcrossing was directly associated with the frequency of F2 plants flowering at different dates. A recalculated outcrossing parameter from different authors' reported data, representing different years and locations, has been found to be nonrandom. It was observed that the propensity to outcross was not a simple function of changing gene frequency but was associated with the genotype of the individual selected.

  20. Audible thunder characteristic and the relation between peak frequency and lightning parameters

    NASA Astrophysics Data System (ADS)

    Yuhua, Ouyang; Ping, Yuan

    2012-02-01

    In recent summers, natural lightning optical spectra and audible thunder signals were observed. Twelve events on 15 August 2008 are selected as samples because synchronized information about them was obtained, such as lightning optical spectra, surface E-field changes, etc. Using a digital filter and Fourier transform, the thunder frequency spectra at the observation location have been calculated. The two main propagation effects, finite amplitude propagation and attenuation by air, are then calculated. We then take the observed thunder frequency spectra and work backward to recalculate the original frequency spectra near the generation location. The thunder frequency spectra and the variation of the frequency distribution with distance are investigated. According to plasma theory, the channel temperature and electron density are further calculated from transition parameters of lines in the lightning optical spectra. The pressure and the average ionization degree of each discharge channel are obtained by using Saha equations, charge conservation equations and particle conservation equations. Moreover, the relationship between the peak frequency of each thunder signal and the channel parameters of the lightning is studied.

  1. An investigation of the impact of variations of DVH calculation algorithms on DVH dependant radiation therapy plan evaluation metrics

    NASA Astrophysics Data System (ADS)

    Kennedy, A. M.; Lane, J.; Ebert, M. A.

    2014-03-01

    Plan review systems often allow dose volume histogram (DVH) recalculation as part of a quality assurance process for trials. A review of the algorithms provided by a number of systems indicated that they are often very similar. One notable point of variation between implementations is in the location and frequency of dose sampling. This study explored the impact such variations can have on DVH-based plan evaluation metrics (Normal Tissue Complication Probability (NTCP), min, mean and max dose) for a plan with small structures placed over areas of high dose gradient. The dose grids considered were exported from the original planning system at a range of resolutions. We found that for the CT-based resolutions used in all but one of the plan review systems (CT, and CT with a guaranteed minimum number of sampling voxels in the x and y directions), results were very similar and changed in a similar manner with changes in the dose grid resolution despite the extreme conditions. Differences became noticeable, however, when the resolution was increased in the axial (z) direction. Evaluation metrics also varied differently with changing dose grid for CT-based resolutions compared to dose-grid-based resolutions. This suggests that if DVHs are being compared between systems that use a different basis for selecting the sampling resolution, it may become important to confirm that a similar resolution was used during calculation.
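
    A toy illustration of why the sampling resolution matters for small structures in steep gradients: the same synthetic dose field is sampled inside a 5 mm sphere at three grid spacings and the min/mean/max metrics shift. The dose model, structure and spacings are assumptions of this sketch, not any review system's algorithm.

      import numpy as np

      CENTER = np.array([0.0003, 0.0002, 0.0011])   # structure centre (m), off-grid

      def dose(points):
          """Synthetic dose (Gy): steep linear gradient along z (4 Gy/mm)."""
          return 60.0 - 4000.0 * points[:, 2]

      def sample_structure(spacing):
          """Sample a regular grid and keep points inside a 5 mm radius sphere."""
          axis = np.arange(-0.006, 0.006 + 1e-9, spacing)
          grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
          pts = grid.reshape(-1, 3)
          return pts[np.linalg.norm(pts - CENTER, axis=1) <= 0.005]

      for spacing_mm in (2.5, 1.0, 0.5):
          pts = sample_structure(spacing_mm / 1000.0)
          d = dose(pts)
          print(f"{spacing_mm:>4} mm sampling: n={d.size:5d}  "
                f"min={d.min():.2f}  mean={d.mean():.2f}  max={d.max():.2f} Gy")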

  2. The 5th July 1930 earthquake at Montilla (S Spain). Use of regionally recorded smoked paper seismograms

    NASA Astrophysics Data System (ADS)

    Batlló, J.; Stich, D.; Macià, R.; Morales, J.

    2009-04-01

    On the night of 5th July 1930 a damaging earthquake struck the town of Montilla (near Córdoba, S Spain) and its surroundings. The magnitude estimate for this earthquake is M=5, and its epicentral intensity has been evaluated as VIII (MSK). Even though it is an earthquake of moderate size, it is the largest one instrumentally recorded in this region, which makes the event of interest for a better definition of the regional seismicity. For this reason we decided to restudy its source from the analysis of the available contemporary seismograms and related documents. A total of 25 seismograms from 11 seismic stations have been collected and digitized. Processing of some of the records has been difficult because they were obtained from microfilm or contemporary reproductions in journals. Most of them are on smoked paper and recorded at regional distances. This poses a good opportunity to test the limits of the use of such low-frequency, low-dynamic-range records for the study of regional events. Results are promising: using these regional seismograms the event has been relocated, its magnitude recalculated (Mw 5.1), and an inversion of the waveforms to elucidate its focal mechanism has been performed. We present the results of this research and its consequences for the regional seismicity, and we compare them with recent smaller earthquakes that occurred in the same place and with the results obtained for earthquakes of similar size that occurred farther to the east in 1951.

  3. Detonation equation of state at LLNL, 1995. Revision 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souers, P.C.; Wu, B.; Haselman, L.C. Jr.

    1996-02-01

    JWLs and 1-D look-up tables are shown to work for "one-track" experiments like cylinder shots and the expanding sphere. They fail for "many-track" experiments like the compressed sphere. As long as the one-track experiment has dimensions larger than the explosive's reaction zone and the explosive is near-ideal, a general JWL with R1 = 4.5 and R2 = 1.5 can be constructed, with both ω and E0 being calculated from thermochemical codes. These general JWLs allow comparison between various explosives plus recalculation of the JWL for different densities. The Bigplate experiment complements the cylinder test by providing continuous oblique angles of shock incidence from 0° to 70°. Explosive reaction zone lengths are determined from metal plate thicknesses, extrapolated run-to-detonation distances, radius size effects and detonation front curvature. Simple theories of the cylinder test, Bigplate, the cylinder size effect and detonation front curvature are given. The detonation front lag at the cylinder edge is shown to be proportional to the half-power of the reaction zone length. By calibrating for wall blow-out, a full set of reaction zone lengths from PETN to ANFO are obtained. The 1800-2100 K freezing effect is shown to be caused by rapid cooling of the product gases. Compiled comparative data for about 80 explosives is listed. Ten chapters plus an appendix.
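
    For reference, the standard JWL pressure-volume form with the "general" R1 = 4.5 and R2 = 1.5 mentioned above is sketched below; A, B, ω and E0 are placeholder values, not an LLNL calibration for any specific explosive.

      import numpy as np

      # Standard JWL form:
      # P(v) = A(1 - omega/(R1 v))exp(-R1 v) + B(1 - omega/(R2 v))exp(-R2 v) + omega E0/v
      # with v = V/V0. Coefficient values below are placeholders for illustration.

      def jwl_pressure(v, a, b, r1, r2, omega, e0):
          """JWL pressure (GPa) as a function of relative volume v = V/V0."""
          return (a * (1.0 - omega / (r1 * v)) * np.exp(-r1 * v)
                  + b * (1.0 - omega / (r2 * v)) * np.exp(-r2 * v)
                  + omega * e0 / v)

      v = np.linspace(0.7, 7.0, 8)
      p = jwl_pressure(v, a=600.0, b=10.0, r1=4.5, r2=1.5, omega=0.35, e0=8.5)
      for vi, pi in zip(v, p):
          print(f"V/V0 = {vi:4.1f}   P = {pi:7.3f} GPa")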

  4. 77 FR 57085 - Mobility Fund Phase I Auction; Release of Files with Recalculated Road Miles for Auction 901...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-17

    ... FEDERAL COMMUNICATIONS COMMISSION [AU Docket No. 12-25; DA 12-1446] Mobility Fund Phase I Auction... Mobility Fund Phase I support to be offered in Auction 901, which is to be held on September 27, 2012, and the change of the mock auction date from September 25, 2012 to September 21, 2012. DATES: The mock...

  5. 75 FR 4047 - Taking and Importing Marine Mammals; U.S. Navy Training in the Southern California Range Complex

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-26

    ... sonar use in 2009 was less than planned such that a recalculation of marine mammal takes suggests a... contemplated in light of the overall underuse of sonar proposed and actually used in 2009 (and the likelihood... sonar sources in 2009, the authorization of the same amount of take for 2010 as was authorized in 2009...

  6. Plasma ion-induced molecular ejection on the Galilean satellites - Energies of ejected molecules

    NASA Technical Reports Server (NTRS)

    Johnson, R. E.; Boring, J. W.; Reimann, C. T.; Barton, L. A.; Sieveka, E. M.; Garrett, J. W.; Farmer, K. R.; Brown, W. L.; Lanzerotti, L. J.

    1983-01-01

    First measurements of the energy of ejection of molecules from icy surfaces by fast incident ions are presented. Such results are needed in discussions of the Jovian and Saturnian plasma interactions with the icy satellites. In this letter parameters describing the ion-induced ejection and redistribution of molecules on the Galilean satellites are recalculated in light of the new laboratory data.

  7. SU-E-J-106: The Use of Deformable Image Registration with Cone-Beam CT for a Better Evaluation of Cumulative Dose to Organs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fillion, O; Gingras, L; Archambault, L

    2015-06-15

    Purpose: The knowledge of dose accumulation in the patient tissues in radiotherapy helps in determining the treatment outcomes. This project aims at providing a workflow to map cumulative doses that takes into account interfraction organ motion without the need for manual re-contouring. Methods: Five prostate cancer patients were studied. Each patient had a planning CT (pCT) and 5 to 13 CBCT scans. On each series, a physician contoured the prostate, rectum, bladder, seminal vesicles and the intestine. First, a deformable image registration (DIR) of the pCTs onto the daily CBCTs yielded registered CTs (rCT). This rCT combined the accurate CT numbers of the pCT with the daily anatomy of the CBCT. Second, the original plans (220 cGy per fraction for 25 fractions) were copied on the rCT for dose re-calculation. Third, the DIR software Elastix was used to find the inverse transform from the rCT to the pCT. This transformation was then applied to the rCT dose grid to map the dose voxels back to their pCT location. Finally, the sum of these deformed dose grids for each patient was applied on the pCT to calculate the actual dose delivered to organs. Results: The discrepancies between the planned D98 and D2 and these indices re-calculated on the rCT are, on average, −1 ± 1 cGy and 1 ± 2 cGy per fraction, respectively. For fractions with large anatomical motion, the D98 discrepancy on the re-calculated dose grid mapped onto the pCT can reach −17 ± 4 cGy. The obtained cumulative dose distributions illustrate the same behavior. Conclusion: This approach allowed the evaluation of cumulative doses to organs with the help of uncontoured daily CBCT scans. With this workflow, the easy evaluation of doses delivered for EBRT treatments could ultimately lead to a better follow-up of prostate cancer patients.
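
    The dose-mapping step described above can be sketched as follows: each fraction's dose grid is pulled back to the planning geometry through a displacement field and the warped grids are summed. The displacement field below is synthetic; the deformable registration itself (Elastix in the paper) is not reproduced.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def warp_dose_to_planning(dose_fraction, displacement):
          """Pull back a dose grid through a voxel-space displacement field.

          dose_fraction -- dose on the daily (rCT) grid, shape (nz, ny, nx)
          displacement  -- displacement in voxels from pCT to rCT, shape (3, nz, ny, nx)
          """
          grid = np.indices(dose_fraction.shape).astype(float)
          sample_at = grid + displacement      # where each pCT voxel maps in the rCT
          return map_coordinates(dose_fraction, sample_at, order=1, mode="nearest")

      shape = (20, 32, 32)
      rng = np.random.default_rng(3)
      accumulated = np.zeros(shape)
      for fraction in range(5):
          daily_dose = 2.2 * np.ones(shape) * (1 + 0.02 * rng.standard_normal(shape))
          disp = 0.5 * rng.standard_normal((3,) + shape)   # small synthetic motion
          accumulated += warp_dose_to_planning(daily_dose, disp)

      print(f"accumulated mean dose: {accumulated.mean():.2f} Gy over 5 fractions")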

  8. SU-E-T-163: Evaluation of Dose Distributions Recalculated with Per-Field Measurement Data Under the Condition of Respiratory Motion During IMRT for Liver Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, J; Yoon, M; Nam, T

    2014-06-01

    Purpose: The dose distributions within the real volumes of tumor targets and critical organs during internal target volume-based intensity-modulated radiation therapy (ITV-IMRT) for liver cancer were recalculated by applying the effects of actual respiratory organ motion, and the dosimetric features were analyzed through comparison with gating IMRT (Gate-IMRT) plan results. Methods: The 4DCT data for 10 patients who had been treated with Gate-IMRT for liver cancer were selected to create ITV-IMRT plans. The ITV was created using MIM software, and a moving phantom was used to simulate respiratory motion. The period and range of respiratory motion were recorded for all patients from 4DCT-generated movie data, and the same period and range were applied when operating the dynamic phantom to realize coincident respiratory conditions for each patient. The doses were recalculated with a 3D dose-volume histogram (3DVH) program based on the per-field data measured with a MapCHECK2 2-dimensional diode detector array and compared with the DVHs calculated for the Gate-IMRT plan. Results: Although a sufficient prescription dose covered the PTV during ITV-IMRT delivery, the dose homogeneity in the PTV was inferior to that with the Gate-IMRT plan. We confirmed that there were higher doses to the organs-at-risk (OARs) with ITV-IMRT, as expected when using an enlarged field, but the increased dose to the spinal cord was not significant and the increased doses to the liver and kidney could be considered minor when the reinforced constraints were applied during IMRT plan optimization. Conclusion: Because Gate-IMRT cannot always be considered an ideal method with which to correct the respiratory motion effect, given the dosimetric variations in the gating system application and the increased treatment time, a prior analysis for optimal IMRT method selection should be performed while considering the patient's respiratory condition and IMRT plan results.

  9. Influences of removing linear and nonlinear trends from climatic variables on temporal variations of annual reference crop evapotranspiration in Xinjiang, China.

    PubMed

    Li, Yi; Yao, Ning; Chau, Henry Wai

    2017-08-15

    Reference crop evapotranspiration (ETo) is a key parameter in field irrigation scheduling, drought assessment and climate change research. ETo uses key prescribed (or fixed, reference) land surface parameters for crops. The linear and nonlinear trends in different climatic variables (CVs) affect ETo change. This research aims to reveal how ETo responds after the related CVs are linearly and nonlinearly detrended over 1961-2013 in Xinjiang, China. The ETo-related CVs included minimum (Tmin), average (Tave) and maximum air temperatures (Tmax), wind speed at 2 m (U2), relative humidity (RH) and sunshine hours (n). ETo was calculated using the Penman-Monteith equation. A total of 29 ETo scenarios were generated, including the original scenario, 14 scenarios in Group I (ETo recalculated after removing linear trends from one or more CVs) and 14 scenarios in Group II (ETo recalculated after removing nonlinear trends from the CVs). The influence of U2 on ETo was stronger than the influences of the other CVs for both Groups I and II in northern, southern and the entirety of Xinjiang. The weak influences of increased Tmin, Tave and Tmax on increasing ETo were masked by the strong effects of decreased U2 and n and increased RH on decreasing ETo. The effects of the trends in the CVs, especially U2, on changing ETo were clearly shown. Without the general decrease of U2, ETo would have increased over the past 53 years. Due to the non-monotone variations of the CVs and ETo, the results of nonlinearly detrending the CVs in Group II should be more plausible than the results of linearly detrending the CVs in Group I. The decreasing ETo led to a general relief in drought, as indicated by the recalculated aridity index. Therefore, there would be a slightly lower risk for water utilization in Xinjiang, China.
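
    A minimal sketch of the detrending step behind the Group I scenarios (remove a variable's linear trend while keeping its mean, then feed the detrended series into the Penman-Monteith calculation) is given below; the wind-speed series is synthetic and the full ETo computation is not shown.

      import numpy as np

      def remove_linear_trend(years, series):
          """Return the series with its linear trend removed but its mean preserved."""
          slope, intercept = np.polyfit(years, series, 1)
          trend = slope * years + intercept
          return series - trend + series.mean()

      years = np.arange(1961, 2014)
      rng = np.random.default_rng(7)
      u2 = 3.2 - 0.015 * (years - 1961) + rng.normal(0, 0.2, years.size)  # m/s, declining

      u2_detrended = remove_linear_trend(years, u2)
      print(f"original trend   : {np.polyfit(years, u2, 1)[0]*10:+.3f} m/s per decade")
      print(f"detrended trend  : {np.polyfit(years, u2_detrended, 1)[0]*10:+.3f} m/s per decade")
      print(f"means (orig/detr): {u2.mean():.2f} / {u2_detrended.mean():.2f} m/s")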

  10. SU-G-TeP3-01: A New Approach for Calculating Variable Relative Biological Effectiveness in IMPT Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, W; Randeniya, K; Grosshans, D

    2016-06-15

    Purpose: To investigate the impact of a new approach for calculating relative biological effectiveness (RBE) in intensity-modulated proton therapy (IMPT) optimization on RBE-weighted dose distributions. This approach includes the nonlinear RBE for the high linear energy transfer (LET) region, which was revealed by recent experiments at our institution. In addition, this approach utilizes RBE data as a function of LET without using dose-averaged LET in calculating RBE values. Methods: We used a two-piece function for calculating RBE from LET. Within the Bragg peak, RBE is linearly correlated with LET. Beyond the Bragg peak, we use a nonlinear (quadratic) RBE function of LET based on our experimental data. The IMPT optimization was devised to incorporate variable RBE by maximizing the biological effect (based on the linear-quadratic model) in the tumor and minimizing the biological effect in normal tissues. Three glioblastoma patients from our institution were retrospectively selected for this study. For each patient, three optimized IMPT plans were created based on three RBE resolutions, i.e., a fixed RBE of 1.1 (RBE-1.1), a variable RBE based on a linear RBE-LET relationship (RBE-L), and a variable RBE based on a linear-plus-quadratic relationship (RBE-LQ). The RBE-weighted dose distributions of each optimized plan were evaluated with the different RBE values, i.e., RBE-1.1, RBE-L and RBE-LQ. Results: The RBE-weighted doses recalculated from the RBE-1.1-based optimized plans demonstrated an increasing pattern from RBE-1.1 through RBE-L to RBE-LQ consistently for all three patients. The variable-RBE (RBE-L and RBE-LQ) weighted dose distributions recalculated from the RBE-L and RBE-LQ based optimizations were more homogeneous within the targets and spared the critical structures better than the ones recalculated from the RBE-1.1-based optimization. Conclusion: We implemented a new approach for RBE calculation and optimization and demonstrated potential benefits of improving tumor coverage and normal tissue sparing in IMPT planning.
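
    A minimal sketch of a two-piece RBE(LET) model and the resulting RBE-weighted dose is shown below. The coefficients, the use of an LET threshold as the switch (the abstract switches at the Bragg peak rather than at an LET value), and the voxel values are hypothetical.

      import numpy as np

      LET_THRESHOLD = 10.0      # keV/um, illustrative switch point

      def rbe_from_let(let):
          """Two-piece RBE(LET): linear below the threshold, quadratic above it,
          continuous at the threshold. Coefficients are placeholders."""
          let = np.asarray(let, dtype=float)
          linear = 1.0 + 0.02 * let
          quadratic = 1.0 + 0.02 * LET_THRESHOLD + 0.004 * (let - LET_THRESHOLD) ** 2
          return np.where(let <= LET_THRESHOLD, linear, quadratic)

      physical_dose = np.array([2.0, 2.0, 1.8, 0.5])     # Gy per voxel (illustrative)
      let = np.array([2.0, 8.0, 12.0, 25.0])             # keV/um per voxel

      rbe = rbe_from_let(let)
      rbe_weighted = rbe * physical_dose
      for d, l, r, dw in zip(physical_dose, let, rbe, rbe_weighted):
          print(f"LET {l:5.1f} keV/um  RBE {r:4.2f}  dose {d:3.1f} Gy -> {dw:4.2f} Gy(RBE)")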

  11. Chemistry of Tertiary sediments in the surroundings of the Ries impact structure and moldavite formation revisited

    NASA Astrophysics Data System (ADS)

    Žák, Karel; Skála, Roman; Řanda, Zdeněk; Mizera, Jiří; Heissig, Kurt; Ackerman, Lukáš; Ďurišová, Jana; Jonášová, Šárka; Kameník, Jan; Magna, Tomáš

    2016-04-01

    Moldavites, tektites of the Central European strewn field, have been traditionally linked with the Ries impact structure in Germany. They are supposed to be derived mainly from the near-surface sediments of the Upper Freshwater Molasse of Miocene age that probably covered the target area before the impact. Comparison of the chemical composition of moldavites with that of inferred source materials requires recalculation of the composition of sediments to their water-, organic carbon- and carbon dioxide-free residuum. This recalculation reflects the fact that these compounds were lost almost completely from the target materials during their transformation to moldavites. Strong depletions in concentrations of many elements in moldavites relative to the source sediments (e.g., Mo, Cu, Ag, Sb, As, Fe) contrast with enrichments of several elements in moldavites (e.g., Cs, Ba, K, Rb). These discrepancies can be generally solved using two different approaches, either by involvement of a component of specific chemical composition, or by considering elemental fractionation during tektite formation. The proposed conceptual model of moldavite formation combines both approaches and is based on several steps: (i) the parent mixture (Upper Freshwater Molasse sediments as the dominant source) contained also a minor admixture of organic matter and soils; (ii) the most energetic part of the ejected matter was converted to vapor (plasma) and another part produced melt directly upon decompression; (iii) following further adiabatic decompression, the expanding vapor phase disintegrated the melt into small melt droplets and some elements were partially lost from the melt because of their volatility, or because of the volatility of their compounds, such as carbonyls of Fe and other transition metals (e.g., Ni, Co, Mo, Cr, and Cu); (iv) large positively charged ions such as Cs+, Ba2+, K+, Rb+ from the plasma portion were enriched in the late-stage condensation spherules or condensed directly onto negatively charged melt droplets; (v) simultaneously, the melt droplets coalesced into larger tektite bodies. Steps (iii)-(v) may have overlapped in time. The still melted moldavite bodies reaching their final size were reshaped by further melt flow. This melt flow was related to moldavite rotation and escape (bubbling off) of the last portion of gaseous volatiles during their flight in a low-pressure region above the dense layer of the atmosphere.
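
    The volatile-free recalculation mentioned above (dropping H2O, organic carbon and CO2 and renormalizing the remaining oxides to 100 wt%) can be sketched as follows; the sediment analysis is invented.

      # Sketch of the recalculation to a water-, organic-carbon- and CO2-free
      # residuum: volatile components are dropped and the remaining oxides are
      # renormalized to 100 wt%. The analysis below is invented.

      VOLATILES = {"H2O", "CO2", "C_org"}

      sediment_wt = {"SiO2": 62.0, "Al2O3": 11.5, "Fe2O3": 3.4, "CaO": 6.8,
                     "MgO": 2.1, "K2O": 2.4, "Na2O": 0.9, "TiO2": 0.5,
                     "H2O": 5.5, "CO2": 4.2, "C_org": 0.7}

      residue = {ox: w for ox, w in sediment_wt.items() if ox not in VOLATILES}
      total = sum(residue.values())
      volatile_free = {ox: 100.0 * w / total for ox, w in residue.items()}

      for ox, w in volatile_free.items():
          print(f"{ox:6s} {w:5.1f} wt% (volatile-free)")
      print(f"sum of dropped volatiles: {sum(sediment_wt[v] for v in VOLATILES):.1f} wt%")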

  12. Arrhenius reconsidered: astrophysical jets and the spread of spores

    NASA Astrophysics Data System (ADS)

    Sheldon, Malkah I.; Sheldon, Robert B.

    2015-09-01

    In 1871, Lord Kelvin suggested that the fossil record could be an account of bacterial arrivals on comets. In 1903, Svante Arrhenius suggested that spores could be transported on stellar winds without comets. In 1984, Sir Fred Hoyle claimed to see the infrared signature of vast clouds of dried bacteria and diatoms. In 2012, the Polonnaruwa carbonaceous chondrite revealed fossilized diatoms apparently living on a comet. However, Arrhenius' spores were thought to perish in the long transit between stars. Those calculations, however, assume that maximum velocities are limited by solar winds to ~5 km/s. Herbig-Haro objects and T-Tauri stars, by contrast, are young stars with jets of several hundred km/s that might provide the necessary propulsion. The central engine of bipolar astrophysical jets is not presently understood, but we argue it is a kinetic plasma instability of a charged central magnetic body. We show how to make a bipolar jet in a bell jar. The instability is nonlinear, and thus very robust to scaling laws that map from microquasars to active galactic nuclei. We scale up to stellar sizes and recalculate the transit time and survivability for spores carried by supersonic jets, to show the viability of the Arrhenius mechanism.

  13. ELECTROKINETIC PHENOMENA. II : THE FACTOR OF PROPORTIONALITY FOR CATAPHORETIC AND ELECTROENDOSMOTIC MOBILITIES.

    PubMed

    Abramson, H A

    1930-07-20

    Two theories which predict different values for the ratio of V(E), the electroendosmotic velocity of a liquid past a surface, to V(p), the electric mobility of a particle of the same surface through the same liquid, are discussed. The theory demanding a particular value of this ratio (equation given in the original PDF) was supported by certain data of van der Grinten for a glass surface. Re-calculation of van der Grinten's data reveals that the ratio varies between 2.1 and 2.8. These results are in accord with previous data of Abramson. It is pointed out that glass is unsuitable for the investigation. The ratio V(E)/V(p) is here determined for a flat surface and particles when both are covered by the same proteins; the value obtained under these conditions is given in the original PDF. The theory is similarly tested for a round surface using a micro-cataphoresis cell. It is shown that V(E)/V(p) for a round surface is approximately 1.00. These findings are confirmatory of previous data supporting the view that cataphoretic mobility is independent of the size and shape of the particles when all particles compared have similar surface constitutions.

  14. Vitamin E and the Healing of Bone Fracture: The Current State of Evidence

    PubMed Central

    Borhanuddin, Boekhtiar; Mohd Fozi, Nur Farhana; Naina Mohamed, Isa

    2012-01-01

    Background. The effect of vitamin E on health-related conditions has been extensively researched, with varied results. However, to date, there has been no published review of the effect of vitamin E on bone fracture healing. Purpose. This paper systematically audited past studies of the effect of vitamin E on bone fracture healing. Methods. Related articles were identified from the Medline, CINAHL, and Scopus databases. Screenings were performed based on the criterion that the study must be an original study investigating the independent effect of vitamin E on bone fracture healing. Data were extracted using standardised forms, followed by evaluation of reporting quality using the ARRIVE Guidelines and recalculation of the effect size and statistical power of the results. Results. Six animal studies fulfilled the selection criteria. The study methods were heterogeneous with mediocre reporting quality and focused on the antioxidant-related mechanism of vitamin E. The metasynthesis showed that α-tocopherol may have a significant effect on bone formation during the normal bone remodeling phase of secondary bone healing. Conclusion. In general, the effect of vitamin E on bone fracture healing remained inconclusive due to the small number of heterogeneous and mediocre studies included in this paper. PMID:23304211

  15. Physical characterization of aerosol particles during the Chinese New Year’s firework events

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Wang, Xuemei; Chen, Jianmin; Cheng, Tiantao; Wang, Tao; Yang, Xin; Gong, Youguo; Geng, Fuhai; Chen, Changhong

    2010-12-01

    Measurements of particles from 10 nm to 10 μm were taken using a Wide-range Particle Spectrometer during the Chinese New Year (CNY) celebrations in 2009 in Shanghai, China. These celebrations provided an opportunity to study the number concentration and size distribution of particles in an unusual atmospheric pollution situation caused by firework displays. The firework activities clearly contributed to the number concentration of small accumulation mode particles (100-500 nm) and to the PM1 mass concentration, with a maximum total number concentration of 3.8 × 10^4 cm^-3. A clear shift of particles from the nucleation and Aitken modes to the small accumulation mode was observed at the peak of the CNY firework event, which can be explained by reduced atmospheric lifetimes of smaller particles via the concept of the coagulation sink. A high particle density (2.7 g cm^-3) was identified as being particularly characteristic of the firework aerosols. Recalculated PM1 mass concentrations averaged above 150 μg m^-3 for more than 12 hours, which posed a health risk to susceptible individuals. Integral physical parameters of the firework aerosols were calculated to aid understanding of their physical properties and further model simulation.
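
    As a loose illustration of how a PM1 mass concentration can be recalculated from a measured number size distribution and an assumed particle density (the 2.7 g cm^-3 reported above), the sketch below integrates spherical-particle volumes over size bins; the bin diameters and number concentrations are invented for illustration.

      import numpy as np

      # Sketch: recalculate a PM1 mass concentration (ug m^-3) from a number size
      # distribution, assuming spherical particles and a fixed density.
      # Bin midpoints and number concentrations are illustrative, not measured data.

      diameters_nm  = np.array([20, 50, 100, 200, 400, 800])        # bin midpoints
      number_cm3    = np.array([5e3, 8e3, 1.2e4, 6e3, 1.5e3, 2e2])  # particles per cm^3
      density_g_cm3 = 2.7                                           # firework aerosol density

      d_m = diameters_nm * 1e-9                      # nm -> m
      particle_volume_m3 = (np.pi / 6.0) * d_m**3    # volume of one particle
      number_m3 = number_cm3 * 1e6                   # cm^-3 -> m^-3
      # density: g cm^-3 -> kg m^-3 is *1e3; kg -> ug is *1e9
      mass_ug_m3 = particle_volume_m3 * number_m3 * density_g_cm3 * 1e3 * 1e9

      pm1 = mass_ug_m3[diameters_nm < 1000].sum()
      print(f"Recalculated PM1 ~ {pm1:.1f} ug m^-3")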

  16. Spatial Variability of Organic Carbon in a Fractured Mudstone and Its Effect on the Retention and Release of Trichloroethene (TCE)

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2016-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.

  17. Locally adaptive methods for KDE-based random walk models of reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2017-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
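
    The following is a highly simplified, one-dimensional sketch of the ingredients named in this abstract: random-walk particle tracking, a Gaussian-kernel density estimate of concentration, and splitting of heavy particles. It is not the authors' algorithm; the split rule, the bandwidth choice and all parameter values are arbitrary assumptions.

      import numpy as np

      # Sketch: 1D random-walk particle tracking with a Gaussian kernel density
      # estimate (KDE) of concentration and a naive split rule for heavy particles.
      rng = np.random.default_rng(0)
      D, dt, v = 1e-3, 1.0, 0.01                    # dispersion, time step, velocity
      x = np.zeros(200)                             # particle positions
      w = rng.uniform(0.5, 3.0, 200)                # particle masses (weights)

      def kde_concentration(xq, x, w, h):
          """Gaussian-kernel concentration estimate at query points xq."""
          u = (xq[:, None] - x[None, :]) / h
          return (w[None, :] * np.exp(-0.5 * u**2)).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

      for _ in range(100):
          # advection plus Brownian displacement
          x = x + v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
          # crude branching: split any particle carrying well above the median mass
          heavy = w > 1.5 * np.median(w)
          if heavy.any():
              x = np.concatenate([x, x[heavy]])
              w[heavy] *= 0.5
              w = np.concatenate([w, w[heavy]])

      h = 1.06 * x.std() * x.size ** (-1.0 / 5.0)   # Silverman-type bandwidth (assumption)
      grid = np.linspace(x.min(), x.max(), 50)
      concentration = kde_concentration(grid, x, w, h)
      print(f"{x.size} particles, peak concentration estimate = {concentration.max():.2f}")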

  18. Energy-saving method for technogenic waste processing

    PubMed Central

    Dikhanbaev, Bayandy; Dikhanbaev, Aristan Bayandievich

    2017-01-01

    Dumps of the mining-metallurgical complexes of the post-Soviet republics have accumulated a huge amount of technogenic waste products; Kazakhstan alone holds about 20 billion tons. In the field of technogenic waste treatment, there is still no technical solution that makes it a profitable process. Recent global trends have prompted scientists to focus on developing an energy-saving and highly efficient melting unit that can significantly reduce specific fuel consumption. This paper reports the development of a new technological method—the smelt layer of inversion phase. The introduced method is characterized by a combination of ideal-stirring and ideal-displacement regimes. Using the method of affine modelling, the pilot plant's test results were recalculated for an industrial-scale sample. Experiments show that, in comparison with bubbling and boiling layers of smelt, the degree of zinc recovery increases in the layer of inversion phase. This indicates a reduced possibility of renewed formation of zinc silicates and ferrites from recombined molecules of ZnO, SiO2, and Fe2O3. Calculations show that, for the industrial-scale version of the pilot plant, the consumption of natural gas is reduced by approximately a factor of two in comparison with a fuming furnace, and the specific fuel consumption is reduced by approximately a factor of four in comparison with a Waelz kiln. PMID:29281646

  19. The Sampled Red List Index for Plants, phase II: ground-truthing specimen-based conservation assessments

    PubMed Central

    Brummitt, Neil; Bachman, Steven P.; Aletrari, Elina; Chadburn, Helen; Griffiths-Lee, Janine; Lutz, Maiko; Moat, Justin; Rivers, Malin C.; Syfert, Mindy M.; Nic Lughadha, Eimear M.

    2015-01-01

    The IUCN Sampled Red List Index (SRLI) is a policy response by biodiversity scientists to the need to estimate trends in extinction risk of the world's diminishing biological diversity. Assessments of plant species for the SRLI project rely predominantly on herbarium specimen data from natural history collections, in the overwhelming absence of accurate population data or detailed distribution maps for the vast majority of plant species. This creates difficulties in re-assessing these species so as to measure genuine changes in conservation status, which must be observed under the same Red List criteria in order to be distinguished from an increase in the knowledge available for that species, and thus re-calculate the SRLI. However, the same specimen data identify precise localities where threatened species have previously been collected and can be used to model species ranges and to target fieldwork in order to test specimen-based range estimates and collect population data for SRLI plant species. Here, we outline a strategy for prioritizing fieldwork efforts in order to apply a wider range of IUCN Red List criteria to assessments of plant species, or any taxa with detailed locality or natural history specimen data, to produce a more robust estimation of the SRLI. PMID:25561676

  20. ELECTRODYNAMIC CORRECTIONS TO MAGNETIC MOMENT OF ELECTRON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulehla, I.

    1960-01-01

    Values obtained for fourth-order corrections to the magnetic moment of the electron were compared and recalculated. The regularization for small momenta was modified so that each diverging integral was regularized by expanding the denominator by an infinitely small part. The value obtained for the magnetic moment, mu = mu_0 (1 + alpha/(2 pi) - 0.328 alpha^2/pi^2), agreed with that of Petermann. (M.C.G.)
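
    For reference, evaluating the reconstructed series above with the modern value of the fine-structure constant reproduces the familiar size of the electron's anomalous moment; the coefficient -0.328 is the one quoted in the record.

      import math

      # Evaluate mu/mu_0 = 1 + alpha/(2*pi) - 0.328*(alpha/pi)**2
      alpha = 1.0 / 137.035999          # fine-structure constant (modern value)
      mu_ratio = 1.0 + alpha / (2.0 * math.pi) - 0.328 * (alpha / math.pi) ** 2
      print(f"mu/mu_0 ~ {mu_ratio:.9f}")  # ~1.0011596, i.e. anomaly ~1.16e-3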

  1. 75 FR 76624 - Airworthiness Directives; Rolls-Royce Deutschland Ltd & Co KG Models BR700-710A1-10; BR700-710A2...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-09

    ... re-calculate the Declared Safe Cyclic Life (DSCL) for all BR700-710 HP turbine discs. The analysis concluded that it is required to reduce the approved life limits for the HP turbine disc part numbers that are listed in Table 1 and Table 2 of this AD (MCAI). Exceeding the revised approved life limits could...

  2. 75 FR 51693 - Airworthiness Directives; Rolls-Royce Deutschland Ltd & Co KG Models BR700-710A1-10; BR700-710A2...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-23

    ... necessary to re-calculate the Declared Safe Cyclic Life (DSCL) for all BR700-710 HP turbine discs. The analysis concluded that it is required to reduce the approved life limits for the HP turbine disc part numbers that are listed in Table 1 and Table 2 of this AD (MCAI). Exceeding the revised approved life...

  3. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary primary point source assumption. Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a single methodology to independently assess treatment plan dose distributions from those clinical institutions participating in National Cancer Institute trials.

  4. Performance of dose calculation algorithms from three generations in lung SBRT: comparison with full Monte Carlo‐based dose distributions

    PubMed Central

    Kapanen, Mika K.; Hyödynmaa, Simo J.; Wigren, Tuija K.; Pitkänen, Maunu A.

    2014-01-01

    The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms — pencil beam convolution (PBC), anisotropic analytical algorithm (AAA), and Acuros XB (AXB) — implemented in a commercial treatment planning system (TPS), Varian Eclipse. Dose distributions from full Monte Carlo (MC) simulations were regarded as a reference. In the first stage, for four patients with central lung tumors, treatment plans using 3D conformal radiotherapy (CRT) technique applying 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated (with same number of monitor units (MUs) and identical field settings) using BEAMnrc and DOSXYZnrc MC codes. The MC‐calculated dose distributions were compared to corresponding AXB‐calculated dose distributions to assess the accuracy of the AXB algorithm, to which then other TPS algorithms were compared. In the second stage, treatment plans were made for ten patients with 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated (with same number of MUs and identical field settings) with the AXB algorithm, then compared to original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose‐volume histogram (DVH) and other parameters to quantitatively assess the plan quality. In the first stage also, 3D gamma analyses with threshold criteria 3%/3 mm and 2%/2 mm were applied. The AXB‐calculated dose distributions showed relatively high level of agreement in the light of 3D gamma analysis and DVH comparison against the full MC simulation, especially with large PTVs, but, with smaller PTVs, larger discrepancies were found. Gamma agreement index (GAI) values between 95.5% and 99.6% for all the plans with the threshold criteria 3%/3 mm were achieved, but 2%/2 mm threshold criteria showed larger discrepancies. The TPS algorithm comparison results showed large dose discrepancies in the PTV mean dose (D50%), nearly 60%, for the PBC algorithm, and differences of nearly 20% for the AAA, occurring also in the small PTV size range. This work suggests the application of independent plan verification, when the AAA or the AXB algorithm are utilized in lung SBRT having PTVs smaller than 20‐25 cc. The calculated data from this study can be used in converting the SBRT protocols based on type ‘a’ and/or type ‘b’ algorithms for the most recent generation type ‘c’ algorithms, such as the AXB algorithm. PACS numbers: 87.55.‐x, 87.55.D‐, 87.55.K‐, 87.55.kd, 87.55.Qr PMID:24710454

  5. Catalog of Apollo 17 rocks. Volume 1: Stations 2 and 3 (South Massif)

    NASA Technical Reports Server (NTRS)

    Ryder, Graham

    1993-01-01

    The Catalog of Apollo 17 Rocks is a set of volumes that characterize each of 334 individually numbered rock samples (79 larger than 100 g) in the Apollo 17 collection, showing what each sample is and what is known about it. Unconsolidated regolith samples are not included. The catalog is intended to be used by both researchers requiring sample allocations and a broad audience interested in Apollo 17 rocks. The volumes are arranged geographically, with separate volumes for the South Massif and Light Mantle, the North Massif, and two volumes for the mare plains. Within each volume, the samples are arranged in numerical order, closely corresponding with the sample collection stations. The present volume, for the South Massif and Light Mantle, describes the 55 individual rock fragments collected at Stations two, two-A, three, and LRV-five. Some were chipped from boulders, others collected as individual rocks, some by raking, and a few by picking from the soil in the processing laboratory. Information on sample collection, petrography, chemistry, stable and radiogenic isotopes, rock surface characteristics, physical properties, and curatorial processing is summarized and referenced as far as it is known up to early 1992. The intention has been to be comprehensive: to include all published studies of any kind that provide information on the sample, as well as some unpublished information. References which are primarily bulk interpretations of existing data or mere lists of samples are not generally included. Foreign language journals were not scrutinized, but little data appears to have been published only in such journals. We have attempted to be consistent in format across all of the volumes, and have used a common reference list that appears in all volumes. Where possible, ages based on Sr and Ar isotopes have been recalculated using the 'new' decay constants recommended by Steiger and Jager; however, in many of the reproduced diagrams the ages correspond with the 'old' decay constants. In this volume, mg' or Mg' = atomic Mg/(Mg +Fe).

  6. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.

  7. Detection of Low-volume Blood Loss: Compensatory Reserve Versus Traditional Vital Signs

    DTIC Science & Technology

    2014-01-01

    studies have demonstrated that photoplethysmogram (PPG) wave forms obtained with a pulse oximeter sensor significantly change with volume loss.5 With this...donation, including PPG wave forms (OEM III pulse oximeter , Nonin, Minneapolis, MN), and a noninvasive BPwave form (ccNexfin, Edwards Lifesciences, Irvine...a PPG wave form obtained with a pulse oximeter sensor. CRI is calculated after 30 heart beats and is recalculated beat-to-beat in a continuous

  8. Plasma diffusion at the magnetopause - The case of lower hybrid drift waves

    NASA Technical Reports Server (NTRS)

    Treumann, R. A.; Labelle, J.; Pottelette, R.

    1991-01-01

    The diffusion expected from the quasi-linear theory of the lower hybrid drift instability at the earth's magnetopause is recalculated. The resulting diffusion coefficient is marginally large enough to explain the thickness of the boundary layer under quiet conditions, based on observational upper limits for the wave intensities. Thus, one possible model for the boundary layer could involve equilibrium between the diffusion arising from lower hybrid waves and various loss processes.

  9. Dynamics of Rarefied Gas and Molecular Gas Dynamics.

    DTIC Science & Technology

    1983-08-25

    results of experiment and calculations according to the theory of the first intermolecular collisions and according to the theory of the viscous flows of...connection with this arises the question about the procedure of the recalculation of the results of tube experiment for the actual conditions with the...conditions of experiments did not exceed 4%. For the cones with the smaller aperture angles it was less. For measuring of the aerodynamic forces and

  10. Cenozoic Planktonic Marine Diatom Diversity and Correlation to Climate Change

    PubMed Central

    Lazarus, David; Barron, John; Renaudie, Johan; Diver, Patrick; Türke, Andreas

    2014-01-01

    Marine planktonic diatoms export carbon to the deep ocean, playing a key role in the global carbon cycle. Although commonly thought to have diversified over the Cenozoic as global oceans cooled, only two conflicting quantitative reconstructions exist, both from the Neptune deep-sea microfossil occurrences database. Total diversity shows Cenozoic increase but is sample size biased; conventional subsampling shows little net change. We calculate diversity from a separately compiled new diatom species range catalog, and recalculate Neptune subsampled-in-bin diversity using new methods to correct for increasing Cenozoic geographic endemism and decreasing Cenozoic evenness. We find coherent, substantial Cenozoic diversification in both datasets. Many living cold water species, including species important for export productivity, originate only in the latest Miocene or younger. We make a first quantitative comparison of diatom diversity to the global Cenozoic benthic δ18O (climate) and carbon cycle records (δ13C, and 20-0 Ma pCO2). Warmer climates are strongly correlated with lower diatom diversity (raw: rho = .92, p<.001; detrended, r = .6, p = .01). Diatoms were 20% less diverse in the early late Miocene, when temperatures and pCO2 were only moderately higher than today. Diversity is strongly correlated to both δ13C and pCO2 over the last 15 my (for both: r>.9, detrended r>.6, all p<.001), but only weakly over the earlier Cenozoic, suggesting increasingly strong linkage of diatom and climate evolution in the Neogene. Our results suggest that many living marine planktonic diatom species may be at risk of extinction in future warm oceans, with an unknown but potentially substantial negative impact on the ocean biologic pump and oceanic carbon sequestration. We cannot however extrapolate our my-scale correlations with generic climate proxies to anthropogenic time-scales of warming without additional species-specific information on proximate ecologic controls. PMID:24465441

  11. SU-E-T-399: Evaluation of Selection Criteria for Computational Human Phantoms for Use in Out-Of-Field Organ Dosimetry for Radiotherapy Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelletier, C; Jung, J; Lee, C

    2015-06-15

    Purpose: To quantify the dosimetric uncertainty due to organ position errors when using height and weight as phantom selection criteria in the UF/NCI Hybrid Phantom Library for the purpose of out-of-field organ dose reconstruction. Methods: Four diagnostic patient CT images were used to create 7-field IMRT plans. For each patient, dose to the liver, right lung, and left lung were calculated using the XVMC Monte Carlo code. These doses were taken to be the ground truth. For each patient, the phantom with the most closely matching height and weight was selected from the body size dependent phantom library. The patient plans were then transferred to the computational phantoms and organ doses were recalculated. Each plan was also run on 4 additional phantoms with reference heights and/or weights. Maximum and mean doses for the three organs were computed, and the DVHs were extracted and compared. One sample t-tests were performed to compare the accuracy of the height and weight matched phantoms against the additional phantoms with regard to both maximum and mean dose. Results: For one of the patients, the height and weight matched phantom yielded the most accurate results across all three organs for both maximum and mean doses. For two additional patients, the matched phantom yielded the best match for one organ only. In 13 of the 24 cases, the matched phantom yielded better results than the average of the other four phantoms, though the results were only statistically significant at the .05 level for three cases. Conclusion: Using height and weight matched phantoms does yield better results with regard to out-of-field dosimetry than using average phantoms. Height and weight appear to be moderately good selection criteria, though these selection criteria failed to yield any better results for one patient.

  12. Cenozoic planktonic marine diatom diversity and correlation to climate change.

    PubMed

    Lazarus, David; Barron, John; Renaudie, Johan; Diver, Patrick; Türke, Andreas

    2014-01-01

    Marine planktonic diatoms export carbon to the deep ocean, playing a key role in the global carbon cycle. Although commonly thought to have diversified over the Cenozoic as global oceans cooled, only two conflicting quantitative reconstructions exist, both from the Neptune deep-sea microfossil occurrences database. Total diversity shows Cenozoic increase but is sample size biased; conventional subsampling shows little net change. We calculate diversity from a separately compiled new diatom species range catalog, and recalculate Neptune subsampled-in-bin diversity using new methods to correct for increasing Cenozoic geographic endemism and decreasing Cenozoic evenness. We find coherent, substantial Cenozoic diversification in both datasets. Many living cold water species, including species important for export productivity, originate only in the latest Miocene or younger. We make a first quantitative comparison of diatom diversity to the global Cenozoic benthic δ18O (climate) and carbon cycle records (δ13C, and 20-0 Ma pCO2). Warmer climates are strongly correlated with lower diatom diversity (raw: rho = .92, p<.001; detrended, r = .6, p = .01). Diatoms were 20% less diverse in the early late Miocene, when temperatures and pCO2 were only moderately higher than today. Diversity is strongly correlated to both δ13C and pCO2 over the last 15 my (for both: r>.9, detrended r>.6, all p<.001), but only weakly over the earlier Cenozoic, suggesting increasingly strong linkage of diatom and climate evolution in the Neogene. Our results suggest that many living marine planktonic diatom species may be at risk of extinction in future warm oceans, with an unknown but potentially substantial negative impact on the ocean biologic pump and oceanic carbon sequestration. We cannot however extrapolate our my-scale correlations with generic climate proxies to anthropogenic time-scales of warming without additional species-specific information on proximate ecologic controls.

  13. Fiber laser welding of austenitic steel and commercially pure copper butt joint

    NASA Astrophysics Data System (ADS)

    Kuryntsev, S. V.; Morushkin, A. E.; Gilmutdinov, A. Kh.

    2017-03-01

    The fiber laser welding of austenitic stainless steel and commercially pure copper in a butt joint configuration, without filler or intermediate material, is presented. In order to melt the stainless steel directly and melt the copper via heat conduction, a defocused laser beam was used with an offset toward the stainless steel. During mechanical tests the weld seam was more durable than the heat-affected zone of the copper, so samples without defects could be obtained. Three process variants of laser beam offset were applied. The following tests were conducted: tensile testing of the weldment, intermediate-layer microhardness, optical metallography, study of the chemical composition of the intermediate layer, and fractography. Measurements of the electrical resistivity coefficients of stainless steel, copper and the copper-stainless steel weldment were made; these can be interpreted or recalculated as thermal conductivity coefficients. They show that the electrical resistivity coefficient of the copper-stainless steel weldment is higher than that of stainless steel. The width of the intermediate layer between the stainless steel and the commercially pure copper was 41-53 μm, and its microhardness was 128-170 HV0.01.

  14. [Sensitivity of four representative angular cephalometric measures].

    PubMed

    Xü, T; Ahn, J; Baumrind, S

    2000-05-01

    This study examined the sensitivity of four representative cephalometric angles to the detection of different vectors of craniofacial growth. Landmark coordinate data from a stratified random sample of 48 adolescent subjects were used to calculate conventional values for changes between the pretreatment and end-of-treatment lateral cephalograms. By modifying the end-of-treatment coordinate values appropriately, the angular changes could be recalculated to reflect three hypothetical situations: Case 1. What if there were no downward landmark displacement between timepoints? Case 2. What if there were no forward landmark displacement between timepoints? Case 3. What if there were no Nasion change? These questions were asked for four representative cephalometric angles: SNA, ANB, NAPg and UI-SN. For Case 1, the associations (r) between the baseline and the modified measure for the three angles were very highly significant (P < 0.001) with r^2 values no lower than 0.94! For Case 2, however, the associations were much weaker and no r value reached significance. These angular measurements are less sensitive for measuring downward landmark displacement than they are for measuring forward landmark displacement.
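
    The recalculation described above amounts to recomputing an angle from landmark coordinates after suppressing one component of landmark displacement. The sketch below does this for an SNA-like angle under "Case 1" (no downward displacement); all coordinates are hypothetical.

      import numpy as np

      def angle_deg(vertex, p1, p2):
          """Angle at `vertex` formed by points p1 and p2, in degrees."""
          a, b = np.asarray(p1) - vertex, np.asarray(p2) - vertex
          cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
          return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

      # Hypothetical landmark coordinates (x forward, y downward), pre- and post-treatment
      S_t1, N_t1, A_t1 = np.array([0.0, 0.0]), np.array([60.0, 5.0]), np.array([55.0, 45.0])
      S_t2, N_t2, A_t2 = np.array([0.0, 0.0]), np.array([62.0, 6.0]), np.array([58.0, 49.0])

      sna_t1 = angle_deg(N_t1, S_t1, A_t1)   # SNA is the angle S-N-A, vertex at Nasion
      sna_t2 = angle_deg(N_t2, S_t2, A_t2)

      # "Case 1": suppress downward displacement by keeping each landmark's y at its T1 value
      N_c1, A_c1 = np.array([N_t2[0], N_t1[1]]), np.array([A_t2[0], A_t1[1]])
      sna_case1 = angle_deg(N_c1, S_t2, A_c1)

      print(f"SNA change, observed: {sna_t2 - sna_t1:+.2f} deg")
      print(f"SNA change, no downward displacement: {sna_case1 - sna_t1:+.2f} deg")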

  15. Improved minimum cost and maximum power two stage genome-wide association study designs.

    PubMed

    Stanhope, Stephen A; Skol, Andrew D

    2012-01-01

    In a two stage genome-wide association study (2S-GWAS), a sample of cases and controls is allocated into two groups, and genetic markers are analyzed sequentially with respect to these groups. For such studies, experimental design considerations have primarily focused on minimizing study cost as a function of the allocation of cases and controls to stages, subject to a constraint on the power to detect an associated marker. However, most treatments of this problem implicitly restrict the set of feasible designs to only those that allocate the same proportions of cases and controls to each stage. In this paper, we demonstrate that removing this restriction can improve the cost advantages demonstrated by previous 2S-GWAS designs by up to 40%. Additionally, we consider designs that maximize study power with respect to a cost constraint, and show that recalculated power maximizing designs can recover a substantial amount of the planned study power that might otherwise be lost if study funding is reduced. We provide open source software for calculating cost minimizing or power maximizing 2S-GWAS designs.

  16. Ergonomics intervention in an Iranian television manufacturing industry.

    PubMed

    Motamedzade, M; Mohseni, M; Golmohammadi, R; Mahjoob, H

    2011-01-01

    The primary goal of this study was to use the Strain Index (SI) to assess the risk of developing upper extremity musculoskeletal disorders in a television (TV) manufacturing industry and evaluate the effectiveness of an educational intervention. The project was designed and implemented in two stages. In first stage, the SI score was calculated and the Nordic Musculoskeletal Questionnaire (NMQ) was completed. Following this, hazardous jobs were identified and existing risk factors in these jobs were studied. Based on these data, an educational intervention was designed and implemented. In the second stage, three months after implementing the interventions, the SI score was re-calculated and the Nordic Musculoskeletal Questionnaire (NMQ) completed again. 80 assembly workers of an Iranian TV manufacturing industry were randomly selected using simple random sampling approach. The results showed that the SI score had a good correlation with the symptoms of musculoskeletal disorders. It was also observed that the difference between prevalence of signs and symptoms of musculoskeletal disorders, before and after intervention, was significantly reduced. A well conducted implementation of an interventional program with total participation of all stakeholders can lead to a decrease in musculoskeletal disorders.

  17. Strong lensing probability in TeVeS (tensor-vector-scalar) theory

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ming

    2008-01-01

    We recalculate the strong lensing probability as a function of the image separation in TeVeS (tensor-vector-scalar) cosmology, which is a relativistic version of MOND (MOdified Newtonian Dynamics). The lens is modeled by the Hernquist profile. We assume an open cosmology with Ωb = 0.04 and ΩΛ = 0.5 and three different kinds of interpolating functions. Two different galaxy stellar mass functions (GSMF) are adopted: PHJ (Panter, Heavens and Jimenez 2004 Mon. Not. R. Astron. Soc. 355 764) determined from SDSS data release 1 and Fontana (Fontana et al 2006 Astron. Astrophys. 459 745) from GOODS-MUSIC catalog. We compare our results with both the predicted probabilities for lenses from singular isothermal sphere galaxy halos in LCDM (Lambda cold dark matter) with a Schechter-fit velocity function, and the observational results for the well defined combined sample of the Cosmic Lens All-Sky Survey (CLASS) and Jodrell Bank/Very Large Array Astrometric Survey (JVAS). It turns out that the interpolating function μ(x) = x/(1+x) combined with Fontana GSMF matches the results from CLASS/JVAS quite well.

  18. A trichrome beam model for biological dose calculation in scanned carbon-ion radiotherapy treatment planning.

    PubMed

    Inaniwa, T; Kanematsu, N

    2015-01-07

    In scanned carbon-ion (C-ion) radiotherapy, some primary C-ions undergo nuclear reactions before reaching the target and the resulting particles deliver doses to regions at a significant distance from the central axis of the beam. The effects of these particles on physical dose distribution are accounted for in treatment planning by representing the transverse profile of the scanned C-ion beam as the superposition of three Gaussian distributions. In the calculation of biological dose distribution, however, the radiation quality of the scanned C-ion beam has been assumed to be uniform over its cross-section, taking the average value over the plane at a given depth (monochrome model). Since these particles, which have relatively low radiation quality, spread widely compared to the primary C-ions, the radiation quality of the beam should vary with radial distance from the central beam axis. To represent its transverse distribution, we propose a trichrome beam model in which primary C-ions, heavy fragments with atomic number Z ≥ 3, and light fragments with Z ≤ 2 are assigned to the first, second, and third Gaussian components, respectively. Assuming a realistic beam-delivery system, we performed computer simulations using Geant4 Monte Carlo code for analytical beam modeling of the monochrome and trichrome models. The analytical beam models were integrated into a treatment planning system for scanned C-ion radiotherapy. A target volume of 20  ×  20  ×  40 mm(3) was defined within a water phantom. A uniform biological dose of 2.65 Gy (RBE) was planned for the target with the two beam models based on the microdosimetric kinetic model (MKM). The plans were recalculated with Geant4, and the recalculated biological dose distributions were compared with the planned distributions. The mean target dose of the recalculated distribution with the monochrome model was 2.72 Gy (RBE), while the dose with the trichrome model was 2.64 Gy (RBE). The monochrome model underestimated the RBE within the target due to the assumption of no radial variations in radiation quality. Conversely, the trichrome model accurately predicted the RBE even in a small target. Our results verify the applicability of the trichrome model for clinical use in C-ion radiotherapy treatment planning.

  19. A trichrome beam model for biological dose calculation in scanned carbon-ion radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Inaniwa, T.; Kanematsu, N.

    2015-01-01

    In scanned carbon-ion (C-ion) radiotherapy, some primary C-ions undergo nuclear reactions before reaching the target and the resulting particles deliver doses to regions at a significant distance from the central axis of the beam. The effects of these particles on physical dose distribution are accounted for in treatment planning by representing the transverse profile of the scanned C-ion beam as the superposition of three Gaussian distributions. In the calculation of biological dose distribution, however, the radiation quality of the scanned C-ion beam has been assumed to be uniform over its cross-section, taking the average value over the plane at a given depth (monochrome model). Since these particles, which have relatively low radiation quality, spread widely compared to the primary C-ions, the radiation quality of the beam should vary with radial distance from the central beam axis. To represent its transverse distribution, we propose a trichrome beam model in which primary C-ions, heavy fragments with atomic number Z ≥ 3, and light fragments with Z ≤ 2 are assigned to the first, second, and third Gaussian components, respectively. Assuming a realistic beam-delivery system, we performed computer simulations using Geant4 Monte Carlo code for analytical beam modeling of the monochrome and trichrome models. The analytical beam models were integrated into a treatment planning system for scanned C-ion radiotherapy. A target volume of 20  ×  20  ×  40 mm3 was defined within a water phantom. A uniform biological dose of 2.65 Gy (RBE) was planned for the target with the two beam models based on the microdosimetric kinetic model (MKM). The plans were recalculated with Geant4, and the recalculated biological dose distributions were compared with the planned distributions. The mean target dose of the recalculated distribution with the monochrome model was 2.72 Gy (RBE), while the dose with the trichrome model was 2.64 Gy (RBE). The monochrome model underestimated the RBE within the target due to the assumption of no radial variations in radiation quality. Conversely, the trichrome model accurately predicted the RBE even in a small target. Our results verify the applicability of the trichrome model for clinical use in C-ion radiotherapy treatment planning.
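
    As a minimal sketch of the transverse-profile idea above, the snippet below superposes three radial Gaussian components standing in for primary C-ions, heavy fragments and light fragments; the weights and widths are invented, not the fitted values of the beam model.

      import numpy as np

      def gaussian_2d_radial(r, sigma):
          """Normalized 2D Gaussian evaluated at radius r (per unit area)."""
          return np.exp(-0.5 * (r / sigma) ** 2) / (2.0 * np.pi * sigma ** 2)

      # Illustrative (not measured) weights and sigmas for the three components:
      components = {
          "primary C-ions":         (0.90, 4.0),    # (weight, sigma in mm)
          "heavy fragments (Z>=3)": (0.07, 9.0),
          "light fragments (Z<=2)": (0.03, 25.0),
      }

      r = np.linspace(0.0, 60.0, 7)   # radial distance from the beam axis, mm
      total = sum(w * gaussian_2d_radial(r, s) for w, s in components.values())
      for ri, fi in zip(r, total):
          print(f"r = {ri:5.1f} mm  relative fluence = {fi:.3e}")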

  20. Effects of inaccuracies in arterial path length measurement on differences in MRI and tonometry measured pulse wave velocity.

    PubMed

    Weir-McCall, Jonathan R; Khan, Faisel; Cassidy, Deirdre B; Thakur, Arsh; Summersgill, Jennifer; Matthew, Shona Z; Adams, Fiona; Dove, Fiona; Gandy, Stephen J; Colhoun, Helen M; Belch, Jill Jf; Houston, J Graeme

    2017-05-10

    Carotid-femoral pulse wave velocity (cf-PWV) and aortic PWV measured using MRI (MRI-PWV) show good correlation, but with a significant and consistent bias across studies. The aim of the current study was to evaluate whether the differences between cf-PWV and MRI-PWV can be accounted for by inaccuracies of currently used distance measurements. One hundred fourteen study participants were recruited into one of 4 groups: Type 2 diabetes mellitus (T2DM) with cardiovascular disease (CVD) (n = 23), T2DM without CVD (n = 41), CVD without T2DM (n = 25) and a control group (n = 25). All participants underwent cf-PWV, cardiac MRI and whole body MR angiography (WB-MRA). 90 study participants also underwent aortic PWV using MRI. cf-PWV(EXT) was measured using a SphygmoCor device (Atcor Medical, West Ryde, Australia). The true intra-arterial path length was measured using the WB-MRA and then used to recalculate the cf-PWV(EXT) to give a cf-PWV(MRA). Distance measurements were significantly lower on WB-MRA than with an external tape measure (mean diff = -85.4 ± 54.0 mm, p < 0.001). MRI-PWV was significantly lower than cf-PWV(EXT) (MRI-PWV = 8.1 ± 2.9 vs. cf-PWV(EXT) = 10.9 ± 2.7 m s^-1, p < 0.001). When cf-PWV was recalculated using the intra-arterial distance from WB-MRA, this difference was significantly reduced but not eliminated (MRI-PWV = 8.1 ± 2.9 m s^-1 vs. cf-PWV(MRA) = 9.1 ± 2.1 m s^-1, mean diff = -0.96 ± 2.52 m s^-1, p = 0.001). Recalculation of the PWV increased the correlation with age and pulse pressure. Differences in cf-PWV and MRI-PWV can be predominantly, but not entirely, explained by inaccuracies introduced by the use of simple surface measurements to represent the convoluted arterial path between the carotid and femoral arteries.
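
    The core of the recalculation is a rescaling of the tonometric velocity by the ratio of the MRA-derived path length to the tape-measure path length (the transit time being unchanged). In the sketch below, only the group-mean values quoted above are taken from the study; the tape-measure length itself is a hypothetical, realistic-sized number.

      # Sketch: recalculating PWV with a corrected path length.
      # PWV = path length / transit time, so for an unchanged transit time the
      # corrected velocity is the measured one scaled by the length ratio.

      def recalculate_pwv(pwv_measured_m_s, length_measured_mm, length_true_mm):
          return pwv_measured_m_s * (length_true_mm / length_measured_mm)

      cf_pwv_ext = 10.9         # m/s, tape-measure-based group mean from the study
      tape_length_mm = 560.0    # hypothetical surface (tape) measurement
      mra_length_mm = tape_length_mm - 85.4   # mean WB-MRA vs. tape difference reported above

      print(f"cf-PWV recalculated: "
            f"{recalculate_pwv(cf_pwv_ext, tape_length_mm, mra_length_mm):.1f} m/s")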

  1. GPS Enabled Semi-Autonomous Robot

    DTIC Science & Technology

    2017-09-01

    equal and the goal has not yet been reached (i.e., any time the robot has reached a local minimum), and direct the robot to travel in a specific...whether the robot was turning or not. The challenge is overcome by ensuring the robot travels at its maximum speed at all times . Further research into...robot’s fixed reference frame was recalculated each time through the control loop. If the encoder data allows for the robot to appear to have travelled

  2. DNA Statistical Evidence and the Ceiling Principle: Science or Science Fiction

    DTIC Science & Technology

    1994-03-01

    defects were caused by Bendectin , a drug made by the defendant. At trial, Merrell Dow introduced an affidavit from an expert who had reviewed more...than thirty published studies of the drug and found no evidence linking Bendectin to birth defects. He concluded that the drug posed no risk to fetuses...link between Bendectin and the childrens’ deformities. The trial court termed the plaintiffs’ studies unpublished and non-peer-reviewed recalculations

  3. A Model of Human Cognitive Behavior in Writing Code for Computer Programs. Volume 1

    DTIC Science & Technology

    1975-05-01

    nearly all programming languages, each line of code actually involves a great many decisions - basic statement types, variable and expression choices...labels, etc. - and any heuristic which evaluates code on the basis of a single decision is not likely to have sufficient power. Only the use of plans...recalculated in the following line because It was needed again. The second reason is that there are some decisions about the structure of a program

  4. The general ventilation multipliers calculated by using a standard Near-Field/Far-Field model.

    PubMed

    Koivisto, Antti J; Jensen, Alexander C Ø; Koponen, Ismo K

    2018-05-01

    In conceptual exposure models, the transmission of pollutants in an imperfectly mixed room is usually described with general ventilation multipliers. This is the approach used in the Advanced REACH Tool (ART) and Stoffenmanager® exposure assessment tools. The multipliers used in these tools were reported by Cherrie (1999; http://dx.doi.org/10.1080/104732299302530) and Cherrie et al. (2011; http://dx.doi.org/10.1093/annhyg/mer092), who developed them by positing input values for a standard Near-Field/Far-Field (NF/FF) model and then calculating concentration ratios between NF and FF concentrations. This study revisited the calculations that produce the multipliers used in ART and Stoffenmanager and found that the recalculated general ventilation multipliers were up to 2.8 times (280%) higher than the values reported by Cherrie (1999), and that the recalculated NF and FF multipliers for 1-hr exposure were up to 1.2 times (17%) smaller, and for 8-hr exposure up to 1.7 times (41%) smaller, than the values reported by Cherrie et al. (2011). Considering that Stoffenmanager and the ART are classified as higher-tier regulatory exposure assessment tools, the errors in the general ventilation multipliers should not be ignored. We recommend revising the general ventilation multipliers. A better solution is to integrate the NF/FF model into Stoffenmanager and the ART.
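
    The standard NF/FF two-box model underlying these multipliers can be written as two coupled mass balances. The sketch below integrates them for an illustrative source strength, room ventilation rate and NF/FF air exchange rate (none of which are the values used by Cherrie et al.) and reports the steady NF/FF concentration ratio from which such multipliers are derived.

      # Sketch of the standard Near-Field/Far-Field (NF/FF) two-box model:
      #   V_nf * dC_nf/dt = G + beta * (C_ff - C_nf)
      #   V_ff * dC_ff/dt = beta * (C_nf - C_ff) - Q * C_ff
      # Inputs are illustrative only.

      G, Q, beta = 10.0, 100.0, 5.0     # mg/min emission, m^3/min ventilation, m^3/min NF-FF exchange
      V_nf, V_ff = 8.0, 100.0           # m^3
      dt, t_end = 0.01, 480.0           # min

      c_nf = c_ff = 0.0
      for _ in range(int(t_end / dt)):  # explicit Euler integration
          dc_nf = (G + beta * (c_ff - c_nf)) / V_nf
          dc_ff = (beta * (c_nf - c_ff) - Q * c_ff) / V_ff
          c_nf += dc_nf * dt
          c_ff += dc_ff * dt

      print(f"steady NF concentration ~ {c_nf:.2f} mg/m^3")
      print(f"steady FF concentration ~ {c_ff:.2f} mg/m^3")
      print(f"NF/FF ratio ~ {c_nf / c_ff:.2f}  (analytic steady state: {1 + Q / beta:.2f})")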

  5. Estimating NOx emissions and surface concentrations at high spatial resolution using OMI

    NASA Astrophysics Data System (ADS)

    Goldberg, D. L.; Lamsal, L. N.; Loughner, C.; Swartz, W. H.; Saide, P. E.; Carmichael, G. R.; Henze, D. K.; Lu, Z.; Streets, D. G.

    2017-12-01

    In many instances, NOx emissions are not measured at the source. In these cases, remote sensing techniques are extremely useful in quantifying NOx emissions. Using an exponential modified Gaussian (EMG) fitting of oversampled Ozone Monitoring Instrument (OMI) NO2 data, we estimate NOx emissions and lifetimes in regions where these emissions are uncertain. This work also presents a new high-resolution OMI NO2 dataset derived from the NASA retrieval that can be used to estimate surface level concentrations in the eastern United States and South Korea. To better estimate vertical profile shape factors, we use high-resolution model simulations (Community Multi-scale Air Quality (CMAQ) and WRF-Chem) constrained by in situ aircraft observations to re-calculate tropospheric air mass factors and tropospheric NO2 vertical columns during summertime. The correlation between our satellite product and ground NO2 monitors in urban areas has improved dramatically: r2 = 0.60 in new product, r2 = 0.39 in operational product, signifying that this new product is a better indicator of surface concentrations than the operational product. Our work emphasizes the need to use both high-resolution and high-fidelity models in order to re-calculate vertical column data in areas with large spatial heterogeneities in NOx emissions. The methodologies developed in this work can be applied to other world regions and other satellite data sets to produce high-quality region-specific emissions estimates.
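
    A minimal sketch of the EMG line-density fit mentioned above is given below, using scipy's exponentially modified Gaussian shape; the "observed" line densities are synthetic rather than OMI data, and the assumed wind speed is only a placeholder for converting the fitted decay length into an effective NOx lifetime.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import exponnorm

      # Sketch: fit an exponentially modified Gaussian (EMG) to NO2 line densities
      # as a function of distance x downwind of a source.

      def emg(x, amplitude, x0, mu, sigma, background):
          """Gaussian source smoothing combined with exp(-x/x0) chemical decay."""
          k = x0 / sigma                 # shape parameter of scipy's exponnorm
          return amplitude * exponnorm.pdf(x, k, loc=mu, scale=sigma) + background

      x_km = np.linspace(-50.0, 150.0, 81)    # distance from the source, km
      rng = np.random.default_rng(1)
      observed = emg(x_km, 40.0, 30.0, 0.0, 12.0, 1.0) + rng.normal(0.0, 0.05, x_km.size)

      popt, _ = curve_fit(emg, x_km, observed, p0=[30.0, 20.0, 5.0, 10.0, 0.5],
                          bounds=([0.0, 1.0, -50.0, 1.0, 0.0], [1e3, 200.0, 50.0, 50.0, 10.0]))
      x0_km = popt[1]
      wind_m_s = 5.0                           # assumed mean boundary-layer wind speed
      lifetime_h = (x0_km * 1e3 / wind_m_s) / 3600.0
      print(f"fitted decay length x0 = {x0_km:.1f} km -> effective NOx lifetime ~ {lifetime_h:.1f} h")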

  6. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR.

    PubMed

    van der Schot, Gijs; Bonvin, Alexandre M J J

    2015-08-01

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on unassigned NOE lists (Huang et al. in J Am Chem Soc 127:1665-1674, 2005b, doi: 10.1021/ja047109h). We compare the original submissions using a previous version of the server based on Rosetta version 2.6 with recalculated targets using the new R3FP fragment picker for fragment selection and implementing a new annotation of prediction reliability (van der Schot et al. in J Biomol NMR 57:27-35, 2013, doi: 10.1007/s10858-013-9762-6), both implemented in the CS-Rosetta3 WeNMR server. In this second round of CASD-NMR, the WeNMR CS-Rosetta server has demonstrated a much better performance than in the first round since only converged targets were submitted. Further, recalculation of all CASD-NMR targets using the new version of the server demonstrates that our new annotation of prediction quality is giving reliable results. Predictions annotated as weak are often found to provide useful models, but only for a fraction of the sequence, and should therefore only be used with caution.

  7. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and the misfit under-estimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
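
    The comparison examined above can be reproduced in miniature: compute the fit chi-square on a large simulated sample, rescale it linearly to a smaller nominal sample size (one simple form of adjustment, assumed here), and compare it with the chi-square of an actual random subsample of that size. The category probabilities and the rescaling rule are illustrative, not those of the study.

      import numpy as np
      from scipy.stats import chisquare

      rng = np.random.default_rng(42)
      p_model = np.array([0.25, 0.25, 0.25, 0.25])   # hypothesized model
      p_true = np.array([0.27, 0.25, 0.24, 0.24])    # slight misfit in the population

      n_full, n_small = 21_000, 5_000
      full_sample = rng.choice(4, size=n_full, p=p_true)

      def fit_chi2(sample):
          observed = np.bincount(sample, minlength=4)
          expected = p_model * sample.size
          return chisquare(observed, expected).statistic

      chi2_full = fit_chi2(full_sample)
      chi2_adjusted = chi2_full * (n_small / n_full)   # simple linear adjustment (assumption)
      chi2_subsample = fit_chi2(rng.choice(full_sample, size=n_small, replace=False))

      print(f"full sample (n={n_full}):       chi2 = {chi2_full:.1f}")
      print(f"adjusted to n={n_small}:         chi2 = {chi2_adjusted:.1f}")
      print(f"random subsample (n={n_small}):  chi2 = {chi2_subsample:.1f}")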

  8. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care.

    PubMed

    Ivers, Noah M; Grimshaw, Jeremy M; Jamtvedt, Gro; Flottorp, Signe; O'Brien, Mary Ann; French, Simon D; Young, Jane; Odgaard-Jensen, Jan

    2014-11-01

    This paper extends the findings of the Cochrane systematic review of audit and feedback on professional practice to explore the estimate of effect over time and examine whether new trials have added to knowledge regarding how to optimize the effectiveness of audit and feedback. We searched the Cochrane Central Register of Controlled Trials, MEDLINE, and EMBASE for randomized trials of audit and feedback compared to usual care, with objectively measured outcomes assessing compliance with intended professional practice. Two reviewers independently screened articles and abstracted variables related to the intervention, the context, and trial methodology. The median absolute risk difference in compliance with intended professional practice was determined for each study, and adjusted for baseline performance. The effect size across studies was recalculated as studies were added to the cumulative analysis. Meta-regressions were conducted for studies published up to 2002, 2006, and 2010 in which characteristics of the intervention, the recipients, and trial risk of bias were tested as predictors of effect size. Of the 140 randomized clinical trials (RCTs) included in the Cochrane review, 98 comparisons from 62 studies met the criteria for inclusion. The cumulative analysis indicated that the effect size became stable in 2003 after 51 comparisons from 30 trials. Cumulative meta-regressions suggested that new trials are contributing little further information regarding the impact of common effect modifiers. Feedback appears most effective when it is delivered by a supervisor or respected colleague, is presented frequently, features both specific goals and action plans, and aims to decrease the targeted behavior, and when baseline performance is lower and recipients are non-physicians. There is substantial evidence that audit and feedback can effectively improve quality of care, but little evidence of progress in the field. There are opportunity costs for patients, providers, and health care systems when investigators test quality improvement interventions that do not build upon, or contribute toward, extant knowledge.

  9. SU-E-J-164: Estimation of DVH Variation for PTV Due to Interfraction Organ Motion in Prostate VMAT Using Gaussian Error Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, C; Jiang, R; Chow, J

    2015-06-15

    Purpose: We developed a method to predict the change of DVH for PTV due to interfraction organ motion in prostate VMAT without repeating the CT scan and treatment planning. The method is based on a pre-calculated patient database with DVH curves of PTV modelled by the Gaussian error function (GEF). Methods: For a group of 30 patients with different prostate sizes, their VMAT plans were recalculated by shifting their PTVs by up to 1 cm, in 10 increments, in the anterior-posterior, left-right and superior-inferior directions. The DVH curve of PTV in each replan was then fitted by the GEF to determine parameters describing the shape of the curve. Information on how these parameters vary with the DVH change due to prostate motion for different prostate sizes was analyzed and stored in a database of a program written in MATLAB. Results: To predict a new DVH for PTV due to prostate interfraction motion, the prostate size and the shift distance and direction were input to the program. Parameters modelling the DVH for PTV were determined based on the pre-calculated patient dataset. From the new parameters, DVH curves of PTVs with and without considering the prostate motion were plotted for comparison. The program was verified with different prostate cases involving interfraction prostate shifts and replans. Conclusion: Variation of DVH for PTV in prostate VMAT can be predicted using a pre-calculated patient database with DVH curve fitting. The computing time is fast because CT rescan and replan are not required. This quick DVH estimation can help radiation staff to determine if the changed PTV coverage due to prostate shift is tolerable in the treatment. However, it should be noted that the program can only consider prostate interfraction motions along three axes, and is restricted to prostate VMAT plans using the same plan script in the treatment planning system.
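
    A minimal sketch of the curve-fitting step described above: a cumulative PTV DVH is modelled with a Gaussian error function and its two parameters are fitted; the "measured" DVH points are synthetic, and this two-parameter form is an assumption rather than the exact parameterization used in the study.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      # Sketch: model a cumulative PTV DVH with a Gaussian error function (GEF)
      # and fit its parameters; the data points are synthetic stand-ins.

      def gef_dvh(dose_gy, d50, sigma):
          """Volume (%) receiving at least dose_gy, modelled as a GEF step at d50."""
          return 50.0 * (1.0 - erf((dose_gy - d50) / (np.sqrt(2.0) * sigma)))

      dose = np.linspace(60.0, 85.0, 26)   # Gy
      rng = np.random.default_rng(3)
      measured = gef_dvh(dose, 76.0, 1.8) + rng.normal(0.0, 0.5, dose.size)

      popt, _ = curve_fit(gef_dvh, dose, measured, p0=[75.0, 2.0])
      d50_fit, sigma_fit = popt
      print(f"fitted D50 = {d50_fit:.2f} Gy, sigma = {sigma_fit:.2f} Gy")

      # With a database of fitted (d50, sigma) values, a shifted-PTV DVH could be
      # predicted by looking up new parameters; here the shift effect is invented.
      v70_plan = gef_dvh(70.0, d50_fit, sigma_fit)
      v70_shift = gef_dvh(70.0, d50_fit - 1.5, sigma_fit * 1.4)
      print(f"V70Gy: planned ~ {v70_plan:.1f}%, after a hypothetical shift ~ {v70_shift:.1f}%")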

  10. Defect investigations of micron sized precipitates in Al alloys

    NASA Astrophysics Data System (ADS)

    Klobes, B.; Korff, B.; Balarisi, O.; Eich, P.; Haaks, M.; Kohlbach, I.; Maier, K.; Sottong, R.; Staab, T. E. M.

    2011-01-01

    Many light aluminium alloys achieve their favourable mechanical properties, especially their high strength, through precipitation of alloying elements. This class of age-hardenable Al alloys includes technologically important systems such as Al-Mg-Si or Al-Cu. During ageing, different precipitates form according to a specific precipitation sequence that always proceeds toward the corresponding intermetallic equilibrium phase. Probing the defect state of individual precipitates requires high spatial resolution as well as high chemical sensitivity. Both can be achieved using the finely focused positron beam provided by the Bonn Positron Microprobe (BPM) [1] in combination with High Momentum Analysis (HMA) [2]. Employing the BPM, structures in the micron range can be probed by means of spectroscopy of the Doppler broadening of annihilation radiation (DBAR). On the basis of these prerequisites, single precipitates of intermetallic phases in Al-Mg-Si and Al-Cu, i.e. Mg2Si and Al2Cu, were probed. A detailed interpretation of these measurements necessarily relies on theoretical calculations of the DBAR of possible annihilation sites. These were performed employing the DOPPLER program. However, prior to the DBAR calculation, the structures, which partly contain vacancies, were relaxed using the ab-initio code SIESTA, i.e. the atomic positions in the presence of a vacancy were recalculated.

  11. Comparison of Benchtop Fourier-Transform (FT) and Portable Grating Scanning Spectrometers for Determination of Total Soluble Solid Contents in Single Grape Berry (Vitis vinifera L.) and Calibration Transfer.

    PubMed

    Xiao, Hui; Sun, Ke; Sun, Ye; Wei, Kangli; Tu, Kang; Pan, Leiqing

    2017-11-22

    Near-infrared (NIR) spectroscopy was applied for the determination of total soluble solid contents (SSC) of single Ruby Seedless grape berries using both benchtop Fourier transform (VECTOR 22/N) and portable grating scanning (SupNIR-1500) spectrometers in this study. The results showed that the best SSC prediction was obtained by the VECTOR 22/N in the range of 12,000 to 4000 cm-1 (833-2500 nm) for Ruby Seedless, with a determination coefficient of prediction (Rp²) of 0.918 and a root mean square error of prediction (RMSEP) of 0.758% based on least squares support vector machine (LS-SVM). Calibration transfer was conducted on the shared spectral range of the two instruments (1000-1800 nm) based on the LS-SVM model. By using Kennard-Stone (KS) sampling to divide the sample sets, selecting the optimal number of standardization samples and applying Passing-Bablok regression to choose the optimal instrument as the master instrument, a modified calibration transfer method between the two spectrometers was developed. When 45 samples were selected for the standardization set, linear interpolation-piecewise direct standardization (linear interpolation-PDS) performed well for calibration transfer, with Rp² of 0.857 and RMSEP of 1.099% in the spectral region of 1000-1800 nm. Re-calculating the standardization samples into the master model was also shown to improve the performance of calibration transfer in this study. This work indicated that NIR could be used as a rapid and non-destructive method for SSC prediction, and demonstrated the feasibility of overcoming the transfer difficulty between entirely different NIR spectrometers.

  12. Estimating acute and chronic exposure of children and adults to chlorpyrifos in fruit and vegetables based on the new, lower toxicology data.

    PubMed

    Mojsak, Patrycja; Łozowicka, Bożena; Kaczyński, Piotr

    2018-05-09

    This paper presents, for the first time, results for chlorpyrifos (CHLP) in Polish fruits and vegetables over a long period of research, 2007-2016, together with their toxicological aspects. The challenge of this study was to re-evaluate the impact of chlorpyrifos residues in fruit and vegetables on health risk, assessed via acute and chronic exposure based on the old and the new, lower, established values of Average Daily Intakes (ADIs)/Acute Reference Doses (ARfDs) and Maximum Residue Levels (MRLs). A total of 3,530 samples were collected, and CHLP in the range of 0.005-1.514 mg/kg was present in 10.2% of all samples. The MRL was exceeded in 0.7% of all samples (MRL established in 2009-2015), and recalculation against the new MRL (2016) yielded a much greater number of violations, with 2.9% of all samples exceeding it. Acute exposure to CHLP calculated according to the old, higher toxicological value (0.10 mg/kg bw/day) did not exceed 14% of the respective ARfDs for adults and both groups of children, but when calculated for incidental cases according to the current value (ARfD 0.005 mg/kg bw) for infants and toddlers, it exceeded 100% of the respective ARfDs in white cabbage (263.65% and 108.24%), broccoli (216.80% and 194.72%) and apples (153.20% and 167.70%). The chronic exposure calculated for both newly established ADI values (0.001 mg/kg bw/day and 0.100 mg/kg bw/day) appears to be relatively low for adults and children. Copyright © 2018 Elsevier Inc. All rights reserved.
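
    The acute-exposure comparison described above reduces to a short-term intake estimate expressed as a percentage of the ARfD. The sketch below shows that arithmetic in its simplest form; real assessments (e.g. the IESTI equations) include large-portion statistics, unit weights and variability factors, and all input numbers here are placeholders rather than values from the study.

```python
# Simplified sketch of an acute-exposure check against an ARfD.
# Real assessments (e.g. the IESTI equations) include large-portion data,
# unit weights and variability factors; the inputs below are placeholders.
residue_mg_per_kg = 1.5        # highest residue found in the commodity
large_portion_kg = 0.25        # large portion of the commodity eaten in one day
body_weight_kg = 15.0          # e.g. a toddler
arfd_mg_per_kg_bw = 0.005      # current (lower) acute reference dose

intake = residue_mg_per_kg * large_portion_kg / body_weight_kg  # mg/kg bw/day
percent_arfd = 100.0 * intake / arfd_mg_per_kg_bw
print(f"estimated short-term intake = {intake:.4f} mg/kg bw/day "
      f"({percent_arfd:.0f}% of the ARfD)")
```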

  13. Investigating CT to CBCT image registration for head and neck proton therapy as a tool for daily dose recalculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landry, Guillaume, E-mail: g.landry@lmu.de; Nijhuis, Reinoud; Thieke, Christian

    2015-03-15

    Purpose: Intensity modulated proton therapy (IMPT) of head and neck (H and N) cancer patients may be improved by plan adaptation. The decision to adapt the treatment plan based on a dose recalculation on the current anatomy requires a diagnostic quality computed tomography (CT) scan of the patient. As gantry-mounted cone beam CT (CBCT) scanners are currently being offered by vendors, they may offer daily or weekly updates of patient anatomy. CBCT image quality may not be sufficient for accurate proton dose calculation and it is likely necessary to perform CBCT CT number correction. In this work, the authors investigated deformable image registration (DIR) of the planning CT (pCT) to the CBCT to generate a virtual CT (vCT) to be used for proton dose recalculation. Methods: Datasets of six H and N cancer patients undergoing photon intensity modulated radiation therapy were used in this study to validate the vCT approach. Each dataset contained a CBCT acquired within 3 days of a replanning CT (rpCT), in addition to a pCT. The pCT and rpCT were delineated by a physician. A Morphons algorithm was employed in this work to perform DIR of the pCT to CBCT following a rigid registration of the two images. The contours from the pCT were deformed using the vector field resulting from DIR to yield a contoured vCT. The DIR accuracy was evaluated with a scale invariant feature transform (SIFT) algorithm comparing automatically identified matching features between vCT and CBCT. The rpCT was used as reference for evaluation of the vCT. The vCT and rpCT CT numbers were converted to stopping power ratio and the water equivalent thickness (WET) was calculated. IMPT dose distributions from treatment plans optimized on the pCT were recalculated with a Monte Carlo algorithm on the rpCT and vCT for comparison in terms of gamma index, dose volume histogram (DVH) statistics as well as proton range. The DIR generated contours on the vCT were compared to physician-drawn contours on the rpCT. Results: The DIR accuracy was better than 1.4 mm according to the SIFT evaluation. The mean WET differences between vCT (pCT) and rpCT were below 1 mm (2.6 mm). The proportion of voxels passing the 3%/3 mm gamma criterion was above 95% for the vCT vs rpCT. When using the rpCT contour set to derive DVH statistics from dose distributions calculated on the rpCT and vCT, the differences, expressed in terms of 30 fractions of 2 Gy, were within [−4, 2 Gy] for parotid glands (Dmean), spinal cord (D2%), brainstem (D2%), and CTV (D95%). When using DIR generated contours for the vCT, those differences ranged within [−8, 11 Gy]. Conclusions: In this work, the authors generated CBCT based stopping power distributions using DIR of the pCT to a CBCT scan. DIR accuracy was below 1.4 mm as evaluated by the SIFT algorithm. Dose distributions calculated on the vCT agreed well with those calculated on the rpCT when using gamma index evaluation as well as DVH statistics based on the same contours. The use of DIR generated contours introduced variability in DVH statistics.
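
    The water equivalent thickness mentioned above is, along a single ray, just the path integral of the stopping power ratio. The minimal sketch below illustrates that quantity with invented SPR values and voxel spacing; it is not the authors' evaluation pipeline.

```python
# Minimal sketch: water-equivalent thickness (WET) along a single ray,
# computed as the sum of stopping-power ratio times step length.
# The SPR values and voxel spacing below are illustrative only.
import numpy as np

spr_along_ray = np.array([0.001, 0.95, 1.02, 1.04, 0.30, 1.05])  # air, soft tissue, bone-like, cavity, ...
step_mm = 2.0                                                    # voxel spacing along the ray

wet_mm = float(np.sum(spr_along_ray * step_mm))
print(f"WET along ray: {wet_mm:.1f} mm water-equivalent")
```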

  14. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.
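
    For orientation, the sketch below approximates the power of a two-arm trial to detect a given standardized mean difference with roughly the review's "typical" sample size, using a simple normal approximation rather than the review's own methods.

```python
# Sketch: approximate power of a two-arm trial to detect a standardized
# mean difference d with n participants per arm (normal approximation,
# two-sided alpha); not the exact calculations used in the review.
from scipy.stats import norm

def approx_power(d, n_per_arm, alpha=0.05):
    z_alpha = norm.ppf(1 - alpha / 2)
    return float(norm.cdf(d * (n_per_arm / 2) ** 0.5 - z_alpha))

# A trial of about 150 people total (roughly 75 per arm):
for d in (0.3, 0.5, 0.8):
    print(f"SMD {d}: power ~ {approx_power(d, 75):.2f}")
```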

  15. The recalculation of the original pulse produced by a partial discharge

    NASA Technical Reports Server (NTRS)

    Tanasescu, F.

    1978-01-01

    The loads on a dielectric or an insulation arrangement cannot be precisely rated without properly assessing the manner in which a pulse produced by a partial discharge is transmitted from the point of the event to the point where it is recorded. A number of analytical and graphic methods are presented, and computer simulations are used for specific cases of a few measurement circuits. It turns out to be possible to determine the effect of each circuit element and thus make some valid corrections.

  16. Photon dispersion associated with optic-vibrations

    NASA Astrophysics Data System (ADS)

    Feng, P. X.

    1999-05-01

    In this communication, the effect of the damping coefficient on the dielectric function and dispersion is discussed. We recalculate Li's result [Li Xin-Qi, Yasuhiko Arakawa, Solid State Commun., 108 (1998) 211] and present a more general dielectric function associated with optic-vibrations. The relation between the phonon wavevector and the dispersion has also been obtained. The theoretical results show that the wavevector markedly affects the profile of the dielectric function, causing the peak of the profile to shift and increase.

  17. Communication: Vibrational sum-frequency spectrum of the air-water interface, revisited

    NASA Astrophysics Data System (ADS)

    Ni, Yicun; Skinner, J. L.

    2016-07-01

    Before 2015, heterodyne-detected sum-frequency-generation experiments on the air-water interface showed the presence of a positive feature at low frequency in the imaginary part of the susceptibility. However, three very recent experiments indicate that this positive feature is in fact absent. Armed with a better understanding, developed by others, of how to calculate sum-frequency spectra, we recalculate the spectrum and find good agreement with these new experiments. In addition, we provide a revised interpretation of the spectrum.

  18. The importance of pre-annealing treatment for ESR dating of mollusc shells: A key study for İsmil in Konya closed Basin/Turkey

    NASA Astrophysics Data System (ADS)

    Ekici, Gamze; Sayin, Ulku; Aydin, Hulya; Isik, Mesut; Kapan, Sevinc; Demir, Ahmet; Engin, Birol; Delikan, Arif; Orhan, Hukmu; Biyik, Recep; Ozmen, Ayhan

    2018-02-01

    In this study, Electron Spin Resonance (ESR) spectroscopy is used to determine the geological ages of fossil mollusc shells systematically collected from two different geological sections at İsmil Location (37.72769° N, 33.17781° E) in the eastern part of Konya. The assessment of the obtained ESR ages emphasizes the importance of pre-annealing treatment when the g=2.0007 dating signal is overlapped by other signals arising from short-lived radicals, which leads to incorrect age calculations. To overcome this problem, the samples were pre-annealed at 180°C for 16 minutes and the ESR ages were re-calculated using the g=1.9973 dating signal. Dose response curves were obtained from the g=1.9973 signal after pre-annealing treatment for each sample. ESR ages of the samples range from 138 ± 38 ka to 132 ± 30 ka (Upper Pleistocene) according to the Early Uranium Uptake model, and the results are in good agreement with the ages estimated by geologists from stratigraphic and paleontological correlation. Thus, it is suggested that, especially when the g=2.0007 dating signal cannot be used because of superimposition, the signal at g=1.9973 can be used for dating after pre-annealing treatment. The results report the first ESR ages on shells collected from İsmil Location and highlight the importance of pre-annealing treatment. This study is supported by TUBITAK research project 114Y237.

  19. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Scott E., E-mail: sedavids@utmb.edu

    Purpose: A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, it was developed specifically to provide a singular methodology to independently assess treatment plan dose distributions from those clinical institutions participating in National Cancer Institute trials.

  20. Rapid simulation of spatial epidemics: a spectral method.

    PubMed

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-07

    Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
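
    The spectral idea summarized above can be sketched in a few lines: the force of infection over a grid is the convolution of an isotropic transmission kernel with the image of infectious individuals, evaluated with FFTs. The grid size, kernel shape and transmission rate below are illustrative assumptions, not the cited FSR implementation.

```python
# Minimal sketch of the spectral idea: the force of infection over a grid is
# the convolution of an isotropic transmission kernel with the "image" of
# infectious individuals, evaluated with FFTs. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 128                                                  # grid is n x n cells
infectious = (rng.random((n, n)) < 0.01).astype(float)   # sparse infecteds

# Isotropic Gaussian-like kernel, centred so FFT convolution is aligned.
y, x = np.indices((n, n))
r2 = (x - n // 2) ** 2 + (y - n // 2) ** 2
kernel = np.exp(-r2 / (2 * 3.0 ** 2))
kernel /= kernel.sum()
kernel = np.fft.ifftshift(kernel)         # move kernel centre to (0, 0)

# Circular convolution via FFT (periodic boundaries; real implementations
# typically zero-pad): force of infection on every cell at once.
foi = np.real(np.fft.ifft2(np.fft.fft2(infectious) * np.fft.fft2(kernel)))
beta = 0.5                                # transmission rate scaling
rate_of_infection = beta * foi            # per-susceptible hazard on each cell
print(rate_of_infection.max())
```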

  1. The Triton: Design concepts and methods

    NASA Technical Reports Server (NTRS)

    Meholic, Greg; Singer, Michael; Vanryn, Percy; Brown, Rhonda; Tella, Gustavo; Harvey, Bob

    1992-01-01

    During the design of the C & P Aerospace Triton, a few problems were encountered that necessitated changes in the configuration. After the initial concept phase, the aspect ratio was increased from 7 to 7.6 to produce a greater lift to drag ratio (L/D = 13) which satisfied the horsepower requirements (118 hp using the Lycoming O-235 engine). The initial concept had a wing planform area of 134 sq. ft. Detailed wing sizing analysis enlarged the planform area to 150 sq. ft., without changing its layout or location. The most significant changes, however, were made just prior to inboard profile design. The fuselage external diameter was reduced from 54 to 50 inches to reduce drag to meet the desired cruise speed of 120 knots. Also, the nose was extended 6 inches to accommodate landing gear placement. Without the extension, the nosewheel received an unacceptable percentage (25 percent) of the landing weight. The final change in the configuration was made in accordance with the stability and control analysis. In order to reduce the static margin from 20 to 13 percent, the horizontal tail area was reduced from 32.02 to 25.0 sq. ft. The Triton meets all the specifications set forth in the design criteria. If time permitted another iteration of the calculations, two significant changes would be made. The vertical stabilizer area would be reduced to decrease the aircraft lateral stability slope since the current value was too high in relation to the directional stability slope. Also, the aileron size would be decreased to reduce the roll rate below the current 106 deg/second. Doing so would allow greater flap area (increasing CL(sub max)) and thus reduce the overall wing area. C & P would also recalculate the horsepower and drag values to further validate the 120 knot cruising speed.

  2. Differences in Relative Hippocampus Volume and Number of Hippocampus Neurons among Five Corvid Species

    PubMed Central

    Gould, Kristy L.; Gilbertson, Karl E.; Seyfer, Abigail L.; Brantner, Rose M.; Hrvol, Andrew J.; Kamil, Alan C.; Nelson, Joseph C.

    2016-01-01

    The relative size of the avian hippocampus (Hp) has been shown to be related to spatial memory and food storing in two avian families, the parids and corvids. Basil et al. [Brain Behav Evol 1996;47: 156-164] examined North American food-storing birds in the corvid family and found that Clark's nutcrackers had a larger relative Hp than pinyon jays and Western scrub jays. These results correlated with the nutcracker's better performance on most spatial memory tasks and their strong reliance on stored food in the wild. However, Pravosudov and de Kort [Brain Behav Evol 67 (2006), 1-9] raised questions about the methodology used in the 1996 study, specifically the use of paraffin as an embedding material and recalculation for shrinkage. Therefore, we measured relative Hp volume using gelatin as the embedding material in four North American species of food-storing corvids (Clark's nutcrackers, pinyon jays, Western scrub jays and blue jays) and one Eurasian corvid that stores little to no food (azure-winged magpies). Although there was a significant overall effect of species on relative Hp volume among the five species, subsequent tests found only one pairwise difference, blue jays having a larger Hp than the azure-winged magpies. We also examined the relative size of the septum in the five species. Although Shiflett et al. [J Neurobiol 51 (2002), 215-222] found a difference in relative septum volume amongst three species of parids that correlated with storing food, we did not find significant differences amongst the five species in relative septum. Finally, we calculated the number of neurons in the Hp relative to body mass in the five species and found statistically significant differences, some of which are in accord with the adaptive specialization hypothesis and some are not. PMID:23364270

  3. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  4. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.

  5. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the simulation study applies Yuen's test to the generated samples, and the procedure is then validated in terms of Type I error and power. Simulation results show that the proposed formulas can control Type I error and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
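
    As a point of reference, the classical cost-minimizing allocation for comparing two means with unequal variances and per-subject costs is n1/n2 = (sigma1/sigma2) * sqrt(c2/c1). The sketch below applies that normal-theory rule; the paper's formulas for Yuen's trimmed-mean test are analogous but are not reproduced here.

```python
# Sketch of the classical cost-minimizing allocation for comparing two means:
# n1/n2 = (sigma1/sigma2) * sqrt(c2/c1). The cited paper derives analogous
# formulas for Yuen's trimmed-mean test; this is only the familiar normal case.
from math import sqrt, ceil
from scipy.stats import norm

def allocate(sigma1, sigma2, c1, c2, delta, alpha=0.05, power=0.80):
    ratio = (sigma1 / sigma2) * sqrt(c2 / c1)        # n1 / n2
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Solve delta / sqrt(s1^2/n1 + s2^2/n2) = z with n1 = ratio * n2.
    n2 = (z / delta) ** 2 * (sigma1 ** 2 / ratio + sigma2 ** 2)
    return ceil(ratio * n2), ceil(n2)

print(allocate(sigma1=12, sigma2=8, c1=50, c2=20, delta=5))
```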

  6. Attack Vulnerability of Network Controllability

    PubMed Central

    2016-01-01

    Controllability of complex networks has attracted much attention, and understanding the robustness of network controllability against potential attacks and failures is of practical significance. In this paper, we systematically investigate the attack vulnerability of network controllability for the canonical model networks as well as the real-world networks subject to attacks on nodes and edges. The attack strategies are selected based on degree and betweenness centralities calculated for either the initial network or the current network during the removal, with random failure included as a comparison. It is found that the node-based strategies are often more harmful to the network controllability than the edge-based ones, and the recalculated strategies are more harmful than their static counterparts. The Barabási-Albert scale-free model, which has a highly biased structure, proves to be the most vulnerable of the tested model networks. In contrast, the Erdős-Rényi random model, which lacks structural bias, exhibits much better robustness to both node-based and edge-based attacks. We also survey the control robustness of 25 real-world networks, and the numerical results show that most real networks are control robust to random node failures, which has not been observed in the model networks. The recalculated betweenness-based strategy is the most efficient way to harm the controllability of real-world networks. In addition, we find that the edge degree is not a good quantity to measure the importance of an edge in terms of network controllability. PMID:27588941

  7. Attack Vulnerability of Network Controllability.

    PubMed

    Lu, Zhe-Ming; Li, Xin-Feng

    2016-01-01

    Controllability of complex networks has attracted much attention, and understanding the robustness of network controllability against potential attacks and failures is of practical significance. In this paper, we systematically investigate the attack vulnerability of network controllability for the canonical model networks as well as the real-world networks subject to attacks on nodes and edges. The attack strategies are selected based on degree and betweenness centralities calculated for either the initial network or the current network during the removal, with random failure included as a comparison. It is found that the node-based strategies are often more harmful to the network controllability than the edge-based ones, and the recalculated strategies are more harmful than their static counterparts. The Barabási-Albert scale-free model, which has a highly biased structure, proves to be the most vulnerable of the tested model networks. In contrast, the Erdős-Rényi random model, which lacks structural bias, exhibits much better robustness to both node-based and edge-based attacks. We also survey the control robustness of 25 real-world networks, and the numerical results show that most real networks are control robust to random node failures, which has not been observed in the model networks. The recalculated betweenness-based strategy is the most efficient way to harm the controllability of real-world networks. In addition, we find that the edge degree is not a good quantity to measure the importance of an edge in terms of network controllability.
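
    A minimal way to reproduce the kind of experiment described in these two records is to track the minimum number of driver nodes, N_D = max(N − |M|, 1), where M is a maximum matching in the bipartite representation of the directed network, while removing nodes by a recalculated centrality. The sketch below does this for a recalculated-degree attack on a random digraph; it illustrates the general approach and is not the authors' code or parameter choices.

```python
# Sketch: track the minimum number of driver nodes (structural controllability,
# N_D = max(N - |maximum matching|, 1)) while removing nodes by recalculated
# total degree. Illustrative only; not the paper's exact procedure.
import networkx as nx

def n_driver_nodes(g: nx.DiGraph) -> int:
    if g.number_of_nodes() == 0:
        return 0
    # Bipartite representation: an "out" copy and an "in" copy of every node.
    b = nx.Graph()
    out_nodes = [("out", v) for v in g]
    b.add_nodes_from(out_nodes, bipartite=0)
    b.add_nodes_from([("in", v) for v in g], bipartite=1)
    b.add_edges_from((("out", u), ("in", v)) for u, v in g.edges())
    matching = nx.algorithms.bipartite.maximum_matching(b, top_nodes=out_nodes)
    matched_edges = len(matching) // 2          # dict stores both directions
    return max(g.number_of_nodes() - matched_edges, 1)

g = nx.gnp_random_graph(200, 0.02, directed=True, seed=0)
while g.number_of_nodes() > 150:
    print(g.number_of_nodes(), n_driver_nodes(g))
    target = max(g, key=lambda v: g.in_degree(v) + g.out_degree(v))
    g.remove_node(target)                        # recalculated-degree attack
```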

  8. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  9. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.

  10. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  11. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  12. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System.

    PubMed

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-09-03

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decrease the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then, in measurements, the LCA error values are calculated from the 3D error map by a tri-linear interpolation method and used to correct the projector image coordinates. Finally, 3D coordinates with higher accuracy are re-calculated from the corrected image coordinates. The effectiveness of the proposed method is verified experimentally.

  14. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System

    PubMed Central

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-01-01

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decrease the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then, in measurements, the LCA error values are calculated from the 3D error map by a tri-linear interpolation method and used to correct the projector image coordinates. Finally, 3D coordinates with higher accuracy are re-calculated from the corrected image coordinates. The effectiveness of the proposed method is verified experimentally. PMID:27598174
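
    The compensation step in these two records amounts to looking up an error vector in a gridded 3D map by tri-linear interpolation and subtracting it from the measured projector coordinates. The sketch below shows that lookup with an invented error map; the grid spacing, error values and variable names are assumptions, not the authors' calibration data.

```python
# Sketch: correct a projector image coordinate using a pre-built 3D error map
# and trilinear interpolation. The grid spacing and error values are made up;
# the real system builds the map from red/green/blue vs. white circle centres.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Error map sampled on a coarse grid over the measurement volume (x, y, z in mm),
# storing the equivalent LCA error in projector-image pixels (du, dv).
xs = ys = np.linspace(-200, 200, 9)
zs = np.linspace(400, 800, 5)
rng = np.random.default_rng(0)
error_du = rng.normal(0, 0.3, (xs.size, ys.size, zs.size))
error_dv = rng.normal(0, 0.3, (xs.size, ys.size, zs.size))

interp_du = RegularGridInterpolator((xs, ys, zs), error_du)  # linear = trilinear
interp_dv = RegularGridInterpolator((xs, ys, zs), error_dv)

point = np.array([[12.5, -80.0, 640.0]])            # 3D point being measured
u, v = 512.3, 388.7                                  # measured projector coords
u_corr = u - float(interp_du(point))                 # subtract interpolated LCA error
v_corr = v - float(interp_dv(point))
print(u_corr, v_corr)
```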

  15. Mode calculations in unstable resonators with flowing saturable gain. 1: Hermite-Gaussian expansion.

    PubMed

    Siegman, A E; Sziklas, E A

    1974-12-01

    We present a procedure for calculating the three-dimensional mode pattern, the output beam characteristics, and the power output of an oscillating high-power laser taking into account a nonuniform, transversely flowing, saturable gain medium; index inhomogeneities inside the laser resonator; and arbitrary mirror distortion and misalignment. The laser is divided into a number of axial segments. The saturated gain-and-index variation across each short segment is lumped into a complex gain profile across the midplane of that segment. The circulating optical wave within the resonator is propagated from midplane to midplane in free-space fashion and is multiplied by the lumped complex gain profile upon passing through each midplane. After each complete round trip of the optical wave inside the resonator, the saturated gain profiles are recalculated based upon the circulating fields in the cavity. The procedure, when applied to typical unstable-resonator flowing-gain lasers, shows convergence to a single distorted steady-state mode of oscillation. Typical near-field and far-field results are presented. Several empirical rules of thumb for finite truncated Hermite-Gaussian expansions, including an approximate sampling theorem, have been developed as part of the calculations.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mugedo, J.Z.A.

    Termites are reported to emit large quantities of methane, carbon dioxide, carbon monoxide, hydrogen and dimethyl sulfide. The emission of other trace gases, namely C2 to C10 hydrocarbons, is also documented. We have carried out, both in the field and in the laboratory, measurements of methane emissions by Macrotermes subhyalinus (Macrotermitinae), Trinervitermes bettonianus (Termitinae), and unidentified Cubitermes and Microcerotermes species. Measured CH4 field flux rates ranged from 3.66 to 98.25 g per m^2 of termite mound per year. Laboratory measurements gave emission rates that ranged from 14.61 to 165.05 mg CH4 per termite per year. Gaseous production in all species sampled varied both within species and from species to species. The recalculated global emission of methane from termites was found to be 14.0 x 10^12 g CH4 per year. From our study, termites' contribution to the atmospheric methane content is between 1.11% and 4.25% per year. This study discusses the greenhouse effects as well as photochemical disposal of methane in the lower atmosphere in the tropics and the impacts on the chemistry of HOx systems and Clx cycles.

  17. Experimental determination of useful resistance value during pasta dough kneading

    NASA Astrophysics Data System (ADS)

    Podgornyj, Yu I.; Martynova, T. G.; Skeeba, V. Yu; Kosilov, A. S.; Chernysheva, A. A.; Skeeba, P. Yu

    2017-10-01

    A large quantity of materials in the modern market is produced in the form of dry powders or low-humidity granulated masses, and there is a need to develop new manufacturing machinery and to renew the existing facilities involved in the production of various loose mixtures. One of the machinery upgrading tasks is enhancing its performance. Because experimental research is not feasible on full-scale samples, an experimental installation had to be constructed. The article contains its kinematic scheme and the 3D model. The angle of the kneading blade, the volume of the loose mixture, the rotating frequency and the number of double passes of the work member were chosen as the experimental variables. A technique for the experiment, which includes two stages for the rotary and reciprocating movement of the work member, was proposed. Processing of the experimental data yields correlations between the load characteristics of the mixer work member and the angle of the blade, the volume of the mixture and the work member rotating frequency, allowing loads to be recalculated for machines of this type.

  18. Strong lensing probability in TeVeS (tensor-vector-scalar) theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Daming, E-mail: cdm@bao.ac.cn

    2008-01-15

    We recalculate the strong lensing probability as a function of the image separation in TeVeS (tensor-vector-scalar) cosmology, which is a relativistic version of MOND (MOdified Newtonian Dynamics). The lens is modeled by the Hernquist profile. We assume an open cosmology with Ω_b = 0.04 and Ω_Λ = 0.5 and three different kinds of interpolating functions. Two different galaxy stellar mass functions (GSMF) are adopted: PHJ (Panter, Heavens and Jimenez 2004 Mon. Not. R. Astron. Soc. 355 764) determined from SDSS data release 1 and Fontana (Fontana et al 2006 Astron. Astrophys. 459 745) from GOODS-MUSIC catalog. We compare our results with both the predicted probabilities for lenses from singular isothermal sphere galaxy halos in LCDM (Lambda cold dark matter) with a Schechter-fit velocity function, and the observational results for the well defined combined sample of the Cosmic Lens All-Sky Survey (CLASS) and Jodrell Bank/Very Large Array Astrometric Survey (JVAS). It turns out that the interpolating function μ(x) = x/(1+x) combined with Fontana GSMF matches the results from CLASS/JVAS quite well.

  19. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 with each of the Bio-Optics, CSO, Konan, and Topcon corneal specular microscopes. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of the counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors according to the Cells Analyzer software. The endothelial samples in CSM examinations need to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
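
    The abstract does not give the Cells Analyzer formula, but a generic relative-error calculation conveys why a few hundred cells can be needed: if cell area has coefficient of variation CV, roughly (z·CV/RE)² cells keep the relative error of the mean below RE at the chosen reliability. The CV values below are illustrative assumptions, not data from the study.

```python
# Generic sketch (not the Cells Analyzer method, which is not given in the
# abstract): number of cells needed so that the relative error of the mean
# cell area stays below RE at a given reliability, assuming a known
# coefficient of variation (CV) of cell area.
from math import ceil
from scipy.stats import norm

def cells_needed(cv, rel_error=0.05, reliability=0.95):
    z = norm.ppf(1 - (1 - reliability) / 2)
    return ceil((z * cv / rel_error) ** 2)

for cv in (0.25, 0.35, 0.45):   # assumed values, for illustration only
    print(f"CV={cv:.2f}: about {cells_needed(cv)} cells")
```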

  20. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
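
    One common approximation (not necessarily the exact adjustment implemented in the published tool) inflates the number of infants required under independence by a design effect 1 + (m − 1)·ICC, where m is the average number of infants per randomised mother. A sketch with invented inputs follows.

```python
# Rough sketch of accounting for twins via a design effect. With a fraction
# p_twin of randomised mothers carrying twins and an ICC rho between twins,
# the average cluster size is m = 1 + p_twin and one common approximation
# inflates the number of infants by 1 + (m - 1) * rho. The published tool
# may use a more exact adjustment; treat this as an approximation only.
from math import ceil
from scipy.stats import norm

def infants_needed(delta, sd, p_twin, rho, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_indep = 2 * (z * sd / delta) ** 2          # per arm, independent infants
    design_effect = 1 + p_twin * rho             # (m - 1) = p_twin
    return ceil(n_indep * design_effect)

print(infants_needed(delta=0.4, sd=1.0, p_twin=0.15, rho=0.7))  # invented inputs
```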

  1. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis with a longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the product method is recommended for use in practice because of its lower computational load compared with the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.

  2. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
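
    The point of the analogy can be seen directly from the margin-of-error formula: precision is driven by the absolute sample size n, and the population size N enters only through a finite-population correction that is negligible unless n is a sizeable fraction of N. A small sketch with a poll of 1,000 respondents:

```python
# The margin of error of a poll depends on the absolute sample size n;
# the population size N only enters through a finite-population correction
# that is negligible unless n is a sizeable fraction of N.
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96, population=None):
    moe = z * sqrt(p * (1 - p) / n)
    if population is not None:                     # finite-population correction
        moe *= sqrt((population - n) / (population - 1))
    return moe

for N in (10_000, 1_000_000, 300_000_000):        # "pots" of very different size
    print(N, round(margin_of_error(1000, population=N), 4))
```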

  3. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reported sample size calculations and the sample sizes used were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
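
    The survey itself does not prescribe a formula, but one standard way to size a diagnostic accuracy study (in the spirit of Buderer's approach) is to require a confidence-interval half-width around the expected sensitivity and then inflate by the prevalence of the target condition. A sketch with assumed inputs:

```python
# One standard way (not prescribed by the survey itself) to size a diagnostic
# accuracy study: require a confidence-interval half-width w around the
# expected sensitivity, then inflate by the prevalence of the target condition.
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(sens, w, prevalence, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z ** 2 * sens * (1 - sens) / w ** 2
    return ceil(n_diseased / prevalence)          # total subjects to recruit

# e.g. expected sensitivity 0.85, half-width 0.07, prevalence ~50% (assumed inputs)
print(n_for_sensitivity(0.85, 0.07, 0.505))
```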

  4. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
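
    The paper's central recommendation, planning with an upper confidence limit (UCL) of the pilot SD rather than the pilot SD itself, can be sketched with the chi-square-based one-sided UCL and the usual two-group normal formula. The pilot values below are invented for illustration and are not taken from the study.

```python
# Sketch of the suggestion to plan with an upper confidence limit (UCL) of the
# pilot SD rather than the pilot SD itself. Uses the chi-square-based one-sided
# UCL and the usual two-group normal sample-size formula; the pilot numbers
# below are invented for illustration.
from math import ceil, sqrt
from scipy.stats import chi2, norm

def sd_ucl(s, n, level=0.60):
    """One-sided upper confidence limit for the population SD."""
    return s * sqrt((n - 1) / chi2.ppf(1 - level, n - 1))

def n_per_arm(sd, delta, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

s_pilot, n_pilot, delta = 38.0, 20, 22.0          # hypothetical pilot data
print("naive:", n_per_arm(s_pilot, delta))
print("60% UCL SD:", round(sd_ucl(s_pilot, n_pilot), 1),
      "-> n per arm:", n_per_arm(sd_ucl(s_pilot, n_pilot), delta))
```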

  5. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
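
    For negative binomial counts with mean m per quadrat and aggregation parameter k, a standard fixed-precision result gives the number of quadrats needed so that SE(mean)/mean equals a target D as n = (1/D²)(1/m + 1/k). The sketch below uses the common k reported in the abstract and an assumed precision target D = 0.25; it is a generic formula, not the authors' final sampling plan.

```python
# Sketch of a fixed-precision sample-size curve for negative binomial counts:
# with mean m ticks per 10 m^2 quadrat and aggregation parameter k, the number
# of quadrats needed so that SE(mean)/mean = D is n = (1/D^2) * (1/m + 1/k).
# Uses the common k from the abstract; D = 0.25 is an assumed precision target.
from math import ceil

def quadrats_needed(mean_per_quadrat, k=0.3742, precision=0.25):
    return ceil((1 / mean_per_quadrat + 1 / k) / precision ** 2)

for m in (0.02, 0.05, 0.1, 0.5):
    print(f"mean {m} ticks/quadrat -> {quadrats_needed(m)} quadrats")
```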

  6. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
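
    A minimal numerical sketch of the second rule, assuming a linear cost model with an invented fixed start-up cost c0 and per-subject cost c1: choosing n to minimize total cost divided by the square root of n; for a linear cost function the continuous minimizer works out to n = c0/c1.

    ```python
    import numpy as np

    # Assumed linear cost model: total_cost(n) = c0 + c1 * n, where c0 is the fixed
    # start-up cost and c1 the incremental cost per subject (illustrative numbers only).
    c0, c1 = 200_000.0, 1_500.0

    n = np.arange(10, 2001)
    objective = (c0 + c1 * n) / np.sqrt(n)   # total cost divided by sqrt(sample size)

    n_star = n[np.argmin(objective)]
    print(n_star, c0 / c1)  # numerical minimizer vs. closed-form n* = c0/c1 for linear costs
    ```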

  7. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the wide distribution of read counts and dispersions across genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-seq data. Datasets from previous, similar experiments such as The Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.

  8. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the internal cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
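
    For reference, a sketch of the standard asymptotic unconditional formula for the number of pairs (the Miettinen/Connor-type expression); the discordant proportions in the example are illustrative and not taken from the paper.

    ```python
    from math import ceil, sqrt
    from scipy.stats import norm

    def n_pairs_mcnemar(p10, p01, alpha=0.05, power=0.80):
        """Number of pairs for the asymptotic unconditional McNemar test.
        p10 and p01 are the discordant cell probabilities of the 2 x 2 table."""
        psi, delta = p10 + p01, p10 - p01
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        n = (z_a * sqrt(psi) + z_b * sqrt(psi - delta ** 2)) ** 2 / delta ** 2
        return ceil(n)

    # Illustrative discordant proportions (not taken from the paper)
    print(n_pairs_mcnemar(p10=0.15, p01=0.05))
    ```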

  9. An algorithm to correct saturated mass spectrometry ion abundances for enhanced quantitation and mass accuracy in omic studies

    DOE PAGES

    Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.; ...

    2017-11-06

    The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high concentration species can yield erroneous results if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g. saturation), and largely precluding the calculation of accurate m/z and intensity values. Here we present an open source computational method to correct peaks above a defined intensity (saturation) threshold determined by the MS instrumentation, such as the analog-to-digital converters or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope for each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to re-calculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases for highly saturated species and dynamic range increased by 1–2 orders of magnitude for peptides in a blood serum sample.
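
    A simplified sketch of the correction idea described above, not the published implementation: the envelope is rescaled from the most intense non-saturated isotope peak using its theoretical relative abundance, and the monoisotopic m/z is re-derived from that peak. The function name and inputs are hypothetical, and the charge state is assumed known.

    ```python
    def correct_saturated_peak(observed, theoretical, saturation_threshold):
        """Rescale a saturated isotopic envelope using its theoretical distribution.

        observed:    list of (mz, intensity) tuples for the isotopic peaks of one ion
        theoretical: list of relative abundances for the same isotopes (sums to 1)
        Returns a corrected (mz, intensity) pair for the monoisotopic peak.
        """
        # Pick the most intense isotope peak that is still below the saturation threshold.
        usable = [(i, inten) for i, (mz, inten) in enumerate(observed)
                  if inten < saturation_threshold]
        if not usable:
            raise ValueError("every isotope peak is saturated")
        ref_idx, ref_intensity = max(usable, key=lambda t: t[1])

        # Scale the whole envelope so the reference peak matches its theoretical share.
        scale = ref_intensity / theoretical[ref_idx]
        corrected_mono_intensity = scale * theoretical[0]

        # Re-estimate the monoisotopic m/z from the reference peak, assuming the usual
        # ~1.00335 Da isotope spacing divided by the charge (charge assumed known; z = 1 here).
        charge = 1
        corrected_mono_mz = observed[ref_idx][0] - ref_idx * 1.00335 / charge
        return corrected_mono_mz, corrected_mono_intensity
    ```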

  12. A discussion for stabilization time of carbon steel in atmospheric corrosion

    NASA Astrophysics Data System (ADS)

    Zhang, Zong-kai; Ma, Xiao-bing; Cai, Yi-kun

    2017-09-01

    Stabilization time is an important parameter in the long-term prediction of carbon steel corrosion in the atmosphere. The range of the stabilization time of carbon steel in atmospheric corrosion has been published in many scientific papers. However, those results may not be precise because they rely heavily on engineering experience. This paper recalculates the stabilization time based on the ISO CORRAG program, analyzes the results, and compares them with the previously published data. In addition, a new approach to obtaining the stabilization time is proposed.

  13. Stability of colloidal gold and determination of the Hamaker constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demirci, S.; Enuestuen, B.V.; Turkevich, J.

    1978-12-14

    Previous computation of stability factors of colloidal gold from coagulation data was found to be in systematic error due to an underestimation of the particle concentration by electron microscopy. A new experimental technique was developed for determination of this concentration. Stability factors were recalculated from the previous data using the correct concentration. While most of the previously reported conclusions remain unchanged, the absolute rate of fast coagulation is found to agree with that predicted by the theory. A value of the Hamaker constant was determined from the corrected data.

  14. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.

  15. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Rapid iterative reanalysis for automated design

    NASA Technical Reports Server (NTRS)

    Bhatia, K. G.

    1973-01-01

    A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained using a commonly applied analysis procedure that served as a reference. In general, the results are in good agreement. A comparison of the computer times required for the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
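
    A toy NumPy illustration of the first-order Taylor reanalysis idea, using a 3-DOF spring-mass chain rather than the paper's condensed finite-element formulation; all matrices and numbers are invented, and the thickness-cubed stiffness dependence is only meant to mimic plate-like bending behavior.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def stiffness(t):
        """Stiffness matrix of a 3-DOF chain whose spring stiffnesses scale with t**3
        (a plate-like bending dependence on thickness t)."""
        k = 1.0e4 * t ** 3
        return k * np.array([[ 2., -1.,  0.],
                             [-1.,  2., -1.],
                             [ 0., -1.,  1.]])

    M = np.diag([1.0, 1.0, 0.5])          # mass matrix, independent of t
    t0, dt = 2.0, 0.3                     # initial design and design change

    K0 = stiffness(t0)
    dK = (stiffness(t0 + 1e-6) - K0) / 1e-6   # finite-difference dK/dt at the initial design
    K_lin = K0 + dK * dt                      # first-order Taylor reanalysis
    K_exact = stiffness(t0 + dt)              # full recomputation for reference

    for K, label in ((K_lin, "Taylor"), (K_exact, "exact")):
        freqs = np.sqrt(eigh(K, M, eigvals_only=True)) / (2 * np.pi)
        print(label, np.round(freqs, 2))
    ```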

  17. The efficacy of cognitive prosthetic technology for people with memory impairments: a systematic review and meta-analysis.

    PubMed

    Jamieson, Matthew; Cullen, Breda; McGee-Lennon, Marilyn; Brewster, Stephen; Evans, Jonathan J

    2014-01-01

    Technology can compensate for memory impairment. The efficacy of assistive technology for people with memory difficulties and the methodology of selected studies are assessed. A systematic search was performed and all studies that investigated the impact of technology on memory performance for adults with impaired memory resulting from acquired brain injury (ABI) or a degenerative disease were included. Two 10-point scales were used to compare each study to an ideally reported single case experimental design (SCED) study (SCED scale; Tate et al., 2008 ) or randomised control group study (PEDro-P scale; Maher, Sherrington, Herbert, Moseley, & Elkins, 2003 ). Thirty-two SCED (mean = 5.9 on the SCED scale) and 11 group studies (mean = 4.45 on the PEDro-P scale) were found. Baseline and intervention performance for each participant in the SCED studies was re-calculated using non-overlap of all pairs (Parker & Vannest, 2009 ) giving a mean score of 0.85 on a 0 to 1 scale (17 studies, n = 36). A meta-analysis of the efficacy of technology vs. control in seven group studies gave a large effect size (d = 1.27) (n = 147). It was concluded that prosthetic technology can improve performance on everyday tasks requiring memory. There is a specific need for investigations of technology for people with degenerative diseases.
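
    A small sketch of the non-overlap of all pairs (NAP) statistic used to re-score the SCED studies above; the scores in the example are hypothetical.

    ```python
    def nap(baseline, intervention):
        """Non-overlap of All Pairs (Parker & Vannest, 2009): proportion of
        baseline/intervention pairs in which the intervention score is higher,
        counting ties as half. Ranges from 0 to 1."""
        pairs = [(a, b) for a in baseline for b in intervention]
        wins = sum(1.0 for a, b in pairs if b > a)
        ties = sum(0.5 for a, b in pairs if b == a)
        return (wins + ties) / len(pairs)

    # Hypothetical memory-task scores for one participant
    print(nap(baseline=[2, 3, 3, 4], intervention=[5, 6, 4, 7, 6]))
    ```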

  18. [Development of ophthalmologic software for handheld devices].

    PubMed

    Grottone, Gustavo Teixeira; Pisa, Ivan Torres; Grottone, João Carlos; Debs, Fernando; Schor, Paulo

    2006-01-01

    Formulas for intraocular lens (IOL) power calculation have evolved since the first theoretical formulas by Fyodorov. Among the second-generation formulas, the SRK-I formula is a simple calculation that involves only the anteroposterior (axial) length, the IOL constant, and the average keratometry. As these formulas evolved, their complexity increased, making the reconfiguration of parameters in special situations impracticable by hand. Software developed for this purpose can therefore help surgeons recalculate those values when needed. The aim was to design, develop, and test Brazilian software for calculating IOL dioptric power on handheld computers. The software was developed and programmed using the PocketC program (OrbWorks Concentrated Software, USA). We compared the results collected from a gold-standard device (Ultrascan/Alcon Labs) with a simulation of 100 fictitious patients using the same IOL parameters. The results were grouped as ULTRASCAN data and SOFTWARE data. Using the SRK/T formula, the parameter ranges included keratometry between 35 and 55 D, axial length between 20 and 28 mm, and IOL constants of 118.7, 118.3, and 115.8. A Wilcoxon test showed that the groups did not differ (p=0.314). The Ultrascan sample varied between 11.82 and 27.97, and the tested program sample varied almost identically (11.83-27.98). The average of the Ultrascan group was 20.93, and the software group had a similar average. The standard deviation of the samples was also similar (4.53). The precision of the IOL software for handheld devices was similar to that of the standard device using the SRK/T formula. The software worked properly and ran stably, without bugs, on the tested versions of the operating system.
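
    For orientation, a sketch of the original SRK regression formula mentioned above (P = A - 2.5L - 0.9K); the SRK/T formula actually tested in the study is a more involved theoretical/regression hybrid and is not reproduced here. The inputs are illustrative values within the ranges quoted in the abstract.

    ```python
    def srk_iol_power(a_constant, axial_length_mm, mean_keratometry_d):
        """Original SRK regression formula: P = A - 2.5 * L - 0.9 * K.
        Not the SRK/T formula used in the study, which is considerably more complex."""
        return a_constant - 2.5 * axial_length_mm - 0.9 * mean_keratometry_d

    # Example within the parameter ranges mentioned in the abstract
    print(srk_iol_power(a_constant=118.7, axial_length_mm=23.5, mean_keratometry_d=44.0))
    ```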

  19. Vision Research Literature May Not Represent the Full Intellectual Range of Autism Spectrum Disorder

    PubMed Central

    Brown, Alyse C.; Chouinard, Philippe A.; Crewther, Sheila G.

    2017-01-01

    Sensory, and in particular visual, processing is recognized as often perturbed in individuals with Autism Spectrum Disorder (ASD). However, in the literature on visual processing, individuals in the normal intelligence range (IQ = 90–110) and above are more frequently represented in study samples than individuals who score below normal, in the borderline intellectual disability (ID) (IQ = 71–85) to ID (IQ < 70) ranges. This raises concerns as to whether current research is generalizable to a disorder that is often co-morbid with ID. Thus, the aim of this review is to better understand to what extent the current ASD visual processing literature is representative of the entire ASD population as diagnosed or recognized under DSM-5. Our recalculation of ASD prevalence figures using the criteria of DSM-5 indicates that approximately 40% of the ASD population are likely to have ID, although a search of the ASD visual processing literature up to July 2016 showed that only 20% of papers included the ASD-with-ID population. In the published literature, the mean IQ sampled was found to be 104, with about 80% of studies sampling from the 96–115 IQ range, highlighting the marked under-representation of the ID and borderline ID sections of the ASD population. We conclude that current understanding of visual processing and perception in ASD is not based on the mean IQ profile of the DSM-5 defined ASD population, which now appears to lie within the borderline ID to ID range. Given the importance of vision for social and cognitive processing in ASD, we recommend more accurately representing ASD, via greater inclusion of individuals with IQ below 80, in future ASD research. PMID:28261072

  20. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes that are sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so the calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches to using the tables are also discussed. PMID:27891446
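
    A hedged sketch of the Buderer-style calculation that commonly underlies tables of this kind; the PASS tables may use a different formulation, and the inputs below are illustrative.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_for_sensitivity(sens, precision, prevalence, alpha=0.05):
        """Total subjects needed so the sensitivity estimate has the desired
        confidence-interval half-width (precision); only the diseased fraction contributes."""
        z = norm.ppf(1 - alpha / 2)
        n_diseased = z ** 2 * sens * (1 - sens) / precision ** 2
        return ceil(n_diseased / prevalence)

    def n_for_specificity(spec, precision, prevalence, alpha=0.05):
        """Same idea for specificity; only the non-diseased fraction contributes."""
        z = norm.ppf(1 - alpha / 2)
        n_healthy = z ** 2 * spec * (1 - spec) / precision ** 2
        return ceil(n_healthy / (1 - prevalence))

    # Illustrative inputs: expected sensitivity 0.90, specificity 0.85, +/-0.05 precision, 20% prevalence
    print(n_for_sensitivity(0.90, 0.05, 0.20), n_for_specificity(0.85, 0.05, 0.20))
    ```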

  1. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.

  2. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  3. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
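
    A minimal Monte Carlo sketch in the spirit of the simulations described above, not the authors' exact design: estimate Type I and Type II error of the two-sample t-test for several small per-group sample sizes under an assumed, fairly strong standardized effect.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def error_rates(n, effect, n_sim=20_000, alpha=0.05):
        """Monte Carlo Type I and Type II error of a two-sample t-test with n per group.
        'effect' is the true standardized mean difference used for the Type II runs."""
        null = rng.standard_normal((n_sim, 2, n))
        alt = null.copy()
        alt[:, 1, :] += effect
        p_null = stats.ttest_ind(null[:, 0, :], null[:, 1, :], axis=-1).pvalue
        p_alt = stats.ttest_ind(alt[:, 0, :], alt[:, 1, :], axis=-1).pvalue
        return (p_null < alpha).mean(), (p_alt >= alpha).mean()

    for n in (3, 6, 9):
        t1, t2 = error_rates(n, effect=1.5)   # 1.5 SD is an assumed, fairly strong effect
        print(n, round(t1, 3), round(t2, 3))
    ```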

  4. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates and not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes are larger than in the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out on varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas also show promise for planning alternative sampling schemes that target or oversample specific age groups.

  5. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.

  6. Optimizing early child development for young children with non-anemic iron deficiency in the primary care practice setting (OptEC): study protocol for a randomized controlled trial.

    PubMed

    Abdullah, Kawsari; Thorpe, Kevin E; Mamak, Eva; Maguire, Jonathon L; Birken, Catherine S; Fehlings, Darcy; Hanley, Anthony J; Macarthur, Colin; Zlotkin, Stanley H; Parkin, Patricia C

    2015-04-02

    Three decades of research suggests that prevention of iron deficiency anemia (IDA) in the primary care setting may be an unrealized and unique opportunity to prevent poor developmental outcomes in children. A longitudinal study of infants with IDA showed that the developmental disadvantage persists long term despite iron therapy. Early stages of iron deficiency, termed non-anemic iron deficiency (NAID), provide an opportunity for early detection and treatment before progression to IDA. There is little research regarding NAID, which may be associated with delayed development in young children. The aim of this study is to compare the effectiveness of four months of oral iron treatment plus dietary advice, with placebo plus dietary advice, in improving developmental outcomes in children with NAID and to conduct an internal pilot study. From a screening cohort, those identified with NAID (hemoglobin ≥110 g/L and serum ferritin <14 μg/L) are invited to participate in a pragmatic, multi-site, placebo controlled, blinded, parallel group, superiority randomized trial. Participating physicians are part of a primary healthcare research network called TARGet Kids! Children between 12 and 40 months of age and identified with NAID are randomized to receive four months of oral iron treatment at 6 mg/kg/day plus dietary advice, or placebo plus dietary advice (75 per group). The primary outcome, child developmental score, is assessed using the Mullen Scales of Early Learning at baseline and at four months after randomization. Secondary outcomes include an age appropriate behavior measure (Children's Behavior Questionnaire) and two laboratory measures (hemoglobin and serum ferritin levels). Change in developmental and laboratory measures from baseline to the end of the four-month follow-up period will be analyzed using linear regression (analysis of covariance method). This trial will provide evidence regarding the association between child development and NAID, and the effectiveness of oral iron to improve developmental outcomes in children with NAID. The sample size of the trial will be recalculated using estimates taken from an internal pilot study. This trial was registered with Clinicaltrials.gov (identifier: NCT01481766 ) on 22 November 2011.

  7. Secukinumab Versus Adalimumab for Psoriatic Arthritis: Comparative Effectiveness up to 48 Weeks Using a Matching-Adjusted Indirect Comparison.

    PubMed

    Nash, Peter; McInnes, Iain B; Mease, Philip J; Thom, Howard; Hunger, Matthias; Karabis, Andreas; Gandhi, Kunal; Mpofu, Shephard; Jugl, Steffen M

    2018-06-01

    Secukinumab and adalimumab are approved for adults with active psoriatic arthritis (PsA). In the absence of direct randomized controlled trial (RCT) data, matching-adjusted indirect comparison can estimate the comparative effectiveness in anti-tumor necrosis factor (TNF)-naïve populations. Individual patient data from the FUTURE 2 RCT (secukinumab vs. placebo; N = 299) were adjusted to match baseline characteristics of the ADEPT RCT (adalimumab vs. placebo; N = 313). Logistic regression determined adjustment weights for age, body weight, sex, race, methotrexate use, psoriasis affecting ≥ 3% of body surface area, Psoriasis Area and Severity Index score, Health Assessment Questionnaire Disability Index score, presence of dactylitis and enthesitis, and previous anti-TNF therapy. Recalculated secukinumab outcomes were compared with adalimumab outcomes at weeks 12 (placebo-adjusted), 16, 24, and 48 (nonplacebo-adjusted). After matching, the effective sample size for FUTURE 2 was 101. Week 12 American College of Rheumatology (ACR) response rates were not significantly different between secukinumab and adalimumab. Week 16 ACR 20 and 50 response rates were higher for secukinumab 150 mg than for adalimumab (P = 0.017, P = 0.033), as was ACR 50 for secukinumab 300 mg (P = 0.030). Week 24 ACR 20 and 50 were higher for secukinumab 150 mg than for adalimumab (P = 0.001, P = 0.019), as was ACR 20 for secukinumab 300 mg (P = 0.048). Week 48 ACR 20 was higher for secukinumab 150 and 300 mg than for adalimumab (P = 0.002, P = 0.027), as was ACR 50 for secukinumab 300 mg (P = 0.032). In our analysis, patients with PsA receiving secukinumab were more likely to achieve higher ACR responses through 1 year (weeks 16-48) than those treated with adalimumab. Although informative, these observations rely on a subgroup of patients from FUTURE 2 and thus should be considered interim until the ongoing head-to-head RCT EXCEED can validate these findings. Novartis Pharma AG.
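
    The effective sample size quoted above (101 after reweighting the 299 FUTURE 2 patients) comes from the standard matching-adjusted indirect comparison formula ESS = (sum of weights)^2 / (sum of squared weights). A minimal sketch with made-up weights, since the actual estimated weights are not available here:

    ```python
    import numpy as np

    def maic_ess(weights):
        """Effective sample size after matching-adjusted indirect comparison weighting:
        ESS = (sum w)^2 / sum(w^2)."""
        w = np.asarray(weights, dtype=float)
        return w.sum() ** 2 / (w * w).sum()

    # Toy weights for illustration; in practice they come from the logistic model
    # matching individual patient data to the comparator trial's baseline summary.
    rng = np.random.default_rng(1)
    w = rng.lognormal(mean=0.0, sigma=0.8, size=299)   # 299 = FUTURE 2 sample size
    print(round(maic_ess(w)))
    ```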

  8. Technical Note: Dose effects of 1.5 T transverse magnetic field on tissue interfaces in MRI-guided radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xinfeng; Prior, Phil; Chen, Guang-Pei

    Purpose: The integration of MRI with a linear accelerator (MR-linac) offers great potential for high-precision delivery of radiation therapy (RT). However, the electron deflection resulting from the presence of a transverse magnetic field (TMF) can affect the dose distribution, particularly the electron return effect (ERE) at tissue interfaces. The purpose of the study is to investigate the dose effects of ERE at air-tissue and lung-tissue interfaces during intensity-modulated radiation therapy (IMRT) planning. Methods: IMRT and volumetric modulated arc therapy (VMAT) plans for representative pancreas, lung, breast, and head and neck (HN) cases were generated following commonly used clinical dose volume (DV) criteria. In each case, three types of plans were generated: (1) the original plan generated without a TMF; (2) the reconstructed plan generated by recalculating the original plan with the presence of a TMF of 1.5 T (no optimization); and (3) the optimized plan generated by a full optimization with TMF = 1.5 T. These plans were compared using a variety of DV parameters, including V100%, D95%, DHI [dose heterogeneity index: (D20%–D80%)/Dprescription], Dmax, and D1cc in OARs (organs at risk) and at tissue interfaces. All the optimizations and calculations in this work were performed on static data. Results: The dose recalculation under TMF showed the presence of the 1.5 T TMF can slightly reduce V100% and D95% for PTV, with the differences being less than 4% for all but one lung case studied. The TMF results in considerable increases in Dmax and D1cc on the skin in all cases, mostly between 10% and 35%. The changes in Dmax and D1cc on air cavity walls are dependent upon site, geometry, and size, with changes ranging up to 15%. The VMAT plans lead to much smaller dose effects from ERE compared to fixed-beam IMRT in the pancreas case. When the TMF is considered in the plan optimization, the dose effects of the TMF at tissue interfaces (e.g., air-cavity wall, lung-tissue interfaces, skin) are significantly reduced in most cases. Conclusions: The doses on tissue interfaces can be significantly changed by the presence of a TMF during MR-guided RT when the magnetic field is not included in plan optimization. These changes can be substantially reduced or even eliminated during VMAT/IMRT optimization that specifically considers the TMF, without deteriorating overall plan quality.

  9. SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jingqian, W; Wang, Q; Zhang, X

    2015-06-15

    Purpose: To investigate the feasibility of using scatter corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume-histogram (DVH) parameters and gamma analysis were used to compare dose distributions. Results: The phantom study suggested that Hounsfield unit accuracy for different materials is within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with the pCT. However, large dose variance was observed when the patient anatomy changed. An adaptive plan using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.

  10. Clinical impact of dosimetric changes for volumetric modulated arc therapy in log file-based patient dose calculations.

    PubMed

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi

    2017-10-01

    A log file-based method cannot detect dosimetric changes due to linac component miscalibration because log files are insensitive to miscalibration. Herein, the clinical impacts of dosimetric changes on a log file-based method were determined. Five head-and-neck and five prostate plans were applied. Miscalibration-simulated log files were generated by introducing a linac component miscalibration into the log file. Miscalibration magnitudes for leaf, gantry, and collimator at the general tolerance level were ±0.5mm, ±1°, and ±1°, respectively, and at a tighter tolerance level achievable on current linacs were ±0.3mm, ±0.5°, and ±0.5°, respectively. Re-calculations were performed on patient anatomy using log file data. Changes in tumor control probability/normal tissue complication probability from the treatment planning system dose to the re-calculated dose at the general tolerance level were 1.8% on the planning target volume (PTV) and 2.4% on organs at risk (OARs) in both plan types. These changes at the tighter tolerance level were improved to 1.0% on PTV and to 1.5% on OARs, with a statistically significant difference. We determined the clinical impacts of dosimetric changes on a log file-based method using a general tolerance level and a tighter tolerance level for linac miscalibration and found that a tighter tolerance level significantly improved the accuracy of the log file-based method. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  11. Increased delivery of condensation nuclei during the Late Heavy Bombardment to the terrestrial and martian atmospheres

    NASA Astrophysics Data System (ADS)

    Losiak, Anna

    2014-05-01

    During the period of the Late Heavy Bombardment (LHB), between 4.1 and 3.8 Ga, the impact rate within the entire Solar System was up to a few thousand times higher than the current value (Ryder 2002, Bottke et al. 2012, Fassett and Minton 2013). Multiple basin-forming events on inner planets that occurred during this time had a strong but short-lasting (up to a few thousand years) effect on the atmospheres of Earth and Mars (Sleep et al. 1989, Segura et al. 2002, 2012). However, the role of the continuous flux of smaller impactors has not been assessed so far. We calculated the amount of meteoric material in the 10^-3 kg to 10^6 kg size range delivered to Earth and Mars during the LHB based on the impact flux at the top of the Earth's atmosphere from Bland and Artemieva (2006). Those values were recalculated for Mars based on Ivanov and Hartmann (2009) and then recalculated to the LHB peak based on estimates from Ryder (2002), Bottke et al. (2012), and Fassett and Minton (2013). During the LHB, the amount of meteoritic material within this size range delivered to Earth was up to ~1.7*10^10 kg/year, and 1.4*10^10 kg/year for Mars. The impactors that ablate and are disrupted during atmospheric entry can serve as cloud condensation nuclei (Rosen 1968, Hunten et al. 1980, Ogurtsov and Raspopov 2011). The amount of material delivered during the LHB to the upper stratosphere and lower mesosphere (Hunten et al. 1980, Bland and Artemieva 2006) is comparable to the current terrestrial annual emission of mineral cloud condensation nuclei of 0.5-8*10^12 kg/year (Tegen 2003). On Mars, the availability of condensation nuclei is one of the main factors guiding water-ice cloud formation (Montmessin et al. 2004), which is in turn one of the main climatic factors influencing the hydrological cycle (Michaels et al. 2006) and radiative balance of the planet (Haberle et al. 1999, Wordsworth et al. 2013, Urata and Toon 2013). Increased delivery of condensation nuclei during the LHB should be taken into account when constructing models of terrestrial and Martian climates around 4 Ga. Bland P.A., Artemieva N.A. (2006) Meteorit.Planet.Sci. 41:607-631. Bottke W.F. et al. (2012) Nature 485:78-81. Fassett C.I., Minton D.A. (2013) Nat.Geosci. 6:520-524. Hunten D.M. et al. (1980) J.Atmos.Sci. 37:1342-1357. Haberle R.M. et al. (1999) J.Geophys.Res. 104:8957-8974. Ivanov B.A., Hartmann W.K. (2009) Planets and Moons: Treatise on Geophysics (eds. Spohn T.): 207-243. Michaels T.I. et al. (2006) Geophys.Res.Lett. 33:L16201. Montmessin F. et al. (2004) J.Geophys.Res. 109:E10004. Ogurtsov M.G., Raspopov O.M. (2011) Geomagnetism&Aeronomy 51:275-283. Rosen J.M. (1968) Space Sci.Rev. 9:58-89. Ryder G. (2002) J.Geophys.Res. 107: doi:10.1029/2001JE001583. Segura T.L. et al. (2002) Science 298:1977-1980. Segura T.L. et al. (2012) Icarus 220:144-148. Sleep N.S. et al. (1989) Nature 342:139-142. Tegen I. (2003) Quat.Sci.Rev. 22:1821-1834. Urata R.A., Toon O.B. (2013) Icarus 226:229-250. Wordsworth R. et al. (2012) Icarus 222:1-19.

  12. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim is to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys, in order to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
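
    A simplified sketch of the M/P logic with a delta-method interval, treating M as known and folding the RDS design effect into the variance of P; the authors' variance expressions may differ, and all inputs below are illustrative.

    ```python
    from math import ceil, sqrt
    from scipy.stats import norm

    def population_size_ci(M, p, n, design_effect=2.0, alpha=0.05):
        """Multiplier-method estimate N = M / p with a delta-method confidence interval,
        where p is the proportion in an RDS survey of size n reporting the service/object."""
        z = norm.ppf(1 - alpha / 2)
        var_p = design_effect * p * (1 - p) / n
        N_hat = M / p
        se_N = M * sqrt(var_p) / p ** 2
        return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

    def n_for_relative_precision(p, rel_precision=0.25, design_effect=2.0, alpha=0.05):
        """Survey size so the CI half-width of N stays within rel_precision * N."""
        z = norm.ppf(1 - alpha / 2)
        return ceil(design_effect * z ** 2 * (1 - p) / (p * rel_precision ** 2))

    # Illustrative: 5,000 unique objects distributed, expected p = 0.30, survey of 400
    print(population_size_ci(M=5_000, p=0.30, n=400))
    print(n_for_relative_precision(p=0.30))
    ```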

  13. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.

  14. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  15. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De, Salvo L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
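
    A short sketch of the exact hypergeometric search for a zero-acceptance plan; with the inputs from the abstract's example (lot of 400, 1% nonconforming, 99% confidence) it reproduces the quoted sample size of 273.

    ```python
    from math import comb

    def min_sample_size(lot_size, defectives, consumer_risk):
        """Smallest n such that a sample with zero observed defectives still limits the
        chance of accepting a lot containing `defectives` bad units to `consumer_risk`,
        using the exact hypergeometric probability of drawing zero defectives."""
        for n in range(1, lot_size + 1):
            p_zero = comb(lot_size - defectives, n) / comb(lot_size, n)
            if p_zero <= consumer_risk:
                return n
        return lot_size

    # The example from the abstract: lot of 400, 1% nonconforming (4 units), 99% confidence
    print(min_sample_size(lot_size=400, defectives=4, consumer_risk=0.01))   # -> 273
    ```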

  16. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.

  17. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  18. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  19. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (inter-quartile range -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size in trial registries; about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  20. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
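
    A minimal sketch of the re-estimation step described above, under the usual normal-approximation sample size formula: the interim variance is computed from the pooled data without unblinding (the simple one-sample variance estimator) and plugged back into the two-sample formula. The exact t-statistic distribution and the adjusted significance levels derived in the paper are not reproduced here, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def blinded_sample_size_reestimate(pooled_interim, delta, alpha=0.05, power=0.9):
    """Re-estimate the per-group sample size of a two-arm trial using the
    simple one-sample variance estimator computed on blinded interim data."""
    s2 = np.var(pooled_interim, ddof=1)   # blinded (one-sample) variance estimate
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n_per_group = 2 * (z_a + z_b) ** 2 * s2 / delta ** 2
    return int(np.ceil(n_per_group))

# Hypothetical interim data pooled over both (unknown) treatment arms
rng = np.random.default_rng(1)
interim = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(0.5, 1.0, 30)])
print(blinded_sample_size_reestimate(interim, delta=0.5))
```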

  1. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination usually is taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculation between participants attending a lecture only and participants attending a lecture combined with use of a smartphone application to calculate sample sizes, explored factors affecting the post-test score after training in sample size calculation, and investigated participants’ attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were supplied to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  2. MO-D-213-07: RadShield: Semi- Automated Calculation of Air Kerma Rate and Barrier Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Wu, D; Rutel, I

    2015-06-15

    Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing NCRP Report 147 formalism into a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow the specification, for regions and equipment, of occupancy factors, design goals, number of patients, primary beam directions, source-to-patient distances, and workload distributions. Once the user enters the above parameters, the program automatically calculates air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information and selects the largest value needed to comply with NCRP Report 147 design goals. Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopic rooms.
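
    The per-point calculation that RadShield automates can be illustrated roughly as follows: form the required transmission factor B from the design goal, occupancy, distance, and unshielded air kerma, then invert the Archer transmission model to obtain a barrier thickness. The Archer coefficients alpha, beta, and gamma depend on barrier material and beam quality; the values below are placeholders, not NCRP Report 147 data.

```python
import math

def required_transmission(design_goal, unshielded_kerma, occupancy, distance_m):
    """Transmission factor B needed so that the occupied-area kerma meets the
    weekly design goal P: B = P * d^2 / (T * K_unshielded at 1 m)."""
    return design_goal * distance_m ** 2 / (occupancy * unshielded_kerma)

def archer_thickness(B, alpha, beta, gamma):
    """Invert the Archer model B(x) = [(1 + beta/alpha) e^(alpha*gamma*x) - beta/alpha]^(-1/gamma)."""
    return (1.0 / (alpha * gamma)) * math.log((B ** -gamma + beta / alpha) / (1.0 + beta / alpha))

# Hypothetical numbers: 0.02 mGy/week goal, 5 mGy/week unshielded at 1 m,
# full occupancy, point 3 m away, placeholder Archer coefficients (per mm).
B = required_transmission(design_goal=0.02, unshielded_kerma=5.0, occupancy=1.0, distance_m=3.0)
x = archer_thickness(B, alpha=2.3, beta=7.9, gamma=0.5)
print(round(B, 3), round(x, 2), "mm (illustrative only)")
```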

  3. A masked least-squares smoothing procedure for artifact reduction in scanning-EMG recordings.

    PubMed

    Corera, Íñigo; Eciolaza, Adrián; Rubio, Oliver; Malanda, Armando; Rodríguez-Falces, Javier; Navallas, Javier

    2018-01-11

    Scanning-EMG is an electrophysiological technique in which the electrical activity of the motor unit is recorded at multiple points along a corridor crossing the motor unit territory. Correct analysis of the scanning-EMG signal requires prior elimination of interference from nearby motor units. Although the traditional processing based on median filtering is effective in removing such interference, it distorts the physiological waveform of the scanning-EMG signal. In this study, we describe a new scanning-EMG signal processing algorithm that preserves the physiological signal waveform while effectively removing interference from other motor units. To obtain a cleaned-up version of the scanning signal, the masked least-squares smoothing (MLSS) algorithm recalculates and replaces each sample value of the signal using least-squares smoothing in the spatial dimension, taking into account only those samples that are not contaminated with activity from other motor units. The performance of the new algorithm with simulated scanning-EMG signals is studied and compared with the performance of the median algorithm and tested with real scanning signals. Results show that the MLSS algorithm distorts the waveform of the scanning-EMG signal much less than the median algorithm (approximately 3.5 dB gain), being at the same time very effective at removing interference components. Graphical Abstract: The raw scanning-EMG signal (left figure) is processed by the MLSS algorithm in order to remove the artifact interference. Firstly, artifacts are detected from the raw signal, obtaining a validity mask (central figure) that determines the samples that have been contaminated by artifacts. Secondly, a least-squares smoothing procedure in the spatial dimension is applied to the raw signal using the non-contaminated samples according to the validity mask. The resulting MLSS-processed scanning-EMG signal (right figure) is clean of artifact interference.
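
    A minimal sketch in the spirit of the MLSS step described above: each sample is replaced by the value of a low-order polynomial fitted by least squares over a sliding window, using only the samples flagged as valid in the mask. The window length and polynomial order are arbitrary choices here, and the sketch treats the signal as one-dimensional rather than reproducing the full spatial processing of scanning-EMG recordings.

```python
import numpy as np

def masked_ls_smooth(signal, valid_mask, half_window=5, order=2):
    """Replace each sample by a least-squares polynomial fit computed from the
    valid (non-artifact) samples inside a sliding window."""
    signal = np.asarray(signal, dtype=float)
    out = signal.copy()
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        idx = np.arange(lo, hi)
        keep = idx[valid_mask[lo:hi]]
        if len(keep) > order:                     # enough clean points to fit
            coeffs = np.polyfit(keep, signal[keep], order)
            out[i] = np.polyval(coeffs, i)        # evaluate the fit at the centre
    return out

# Toy example: a smooth waveform with a few samples contaminated by an artifact
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
mask = np.ones_like(y, dtype=bool)
y[40:45] += 3.0      # simulated interference from another motor unit
mask[40:45] = False  # validity mask marks those samples as contaminated
smoothed = masked_ls_smooth(y, mask)
```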

  4. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  5. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
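
    The general Monte Carlo recipe is easy to sketch (here in Python rather than the article's R): simulate data from the assumed regression model at a candidate sample size, fit the model, and record how often the coefficient of interest is declared significant. The model, effect size, and error variance below are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

def mc_power(n, beta=0.3, sigma=1.0, alpha=0.05, n_sims=2000, seed=0):
    """Empirical power to detect a regression slope of size `beta` at sample
    size `n`, under a simple linear model y = beta*x + error."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(scale=sigma, size=n)
        fit = sm.OLS(y, sm.add_constant(x)).fit()
        hits += fit.pvalues[1] < alpha            # slope p-value below alpha?
    return hits / n_sims

# Scan candidate sample sizes until the empirical power is acceptable
for n in (50, 100, 150):
    print(n, mc_power(n))
```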

  6. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, for sample size calculation it seems reasonable to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
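
    As a simple illustration of the cost trade-off involved, the sketch below uses the classical budget-constrained allocation result for cluster randomized trials, optimal cluster size m* = sqrt((c_cluster / c_person) * (1 - ICC) / ICC); this is the standard efficiency result for a single continuous outcome, not the paper's cost-effectiveness or maximin machinery, and the costs and ICC are invented.

```python
import math

def optimal_cluster_allocation(budget, cost_cluster, cost_person, icc):
    """Budget-constrained allocation for one trial arm: optimal persons per
    cluster m*, and the number of clusters k the budget then allows."""
    m = math.sqrt((cost_cluster / cost_person) * (1.0 - icc) / icc)
    m = max(1, round(m))
    k = int(budget // (cost_cluster + m * cost_person))
    return k, m

# Hypothetical costs: 500 per recruited cluster, 50 per person, ICC = 0.05
print(optimal_cluster_allocation(budget=20000, cost_cluster=500, cost_person=50, icc=0.05))
```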

  8. Determining the Effective Density and Stabilizer Layer Thickness of Sterically Stabilized Nanoparticles

    PubMed Central

    2016-01-01

    A series of model sterically stabilized diblock copolymer nanoparticles has been designed to aid the development of analytical protocols in order to determine two key parameters: the effective particle density and the steric stabilizer layer thickness. The former parameter is essential for high resolution particle size analysis based on analytical (ultra)centrifugation techniques (e.g., disk centrifuge photosedimentometry, DCP), whereas the latter parameter is of fundamental importance in determining the effectiveness of steric stabilization as a colloid stability mechanism. The diblock copolymer nanoparticles were prepared via polymerization-induced self-assembly (PISA) using RAFT aqueous emulsion polymerization: this approach affords relatively narrow particle size distributions and enables the mean particle diameter and the stabilizer layer thickness to be adjusted independently via systematic variation of the mean degree of polymerization of the hydrophobic and hydrophilic blocks, respectively. The hydrophobic core-forming block was poly(2,2,2-trifluoroethyl methacrylate) [PTFEMA], which was selected for its relatively high density. The hydrophilic stabilizer block was poly(glycerol monomethacrylate) [PGMA], which is a well-known non-ionic polymer that remains water-soluble over a wide range of temperatures. Four series of PGMAx–PTFEMAy nanoparticles were prepared (x = 28, 43, 63, and 98, y = 100–1400) and characterized via transmission electron microscopy (TEM), dynamic light scattering (DLS), and small-angle X-ray scattering (SAXS). It was found that the degree of polymerization of both the PGMA stabilizer and core-forming PTFEMA had a strong influence on the mean particle diameter, which ranged from 20 to 250 nm. Furthermore, SAXS was used to determine radii of gyration of 1.46 to 2.69 nm for the solvated PGMA stabilizer blocks. Thus, the mean effective density of these sterically stabilized particles was calculated and determined to lie between 1.19 g cm–3 for the smaller particles and 1.41 g cm–3 for the larger particles; these values are significantly lower than the solid-state density of PTFEMA (1.47 g cm–3). Since analytical centrifugation requires the density difference between the particles and the aqueous phase, determining the effective particle density is clearly vital for obtaining reliable particle size distributions. Furthermore, selected DCP data were recalculated by taking into account the inherent density distribution superimposed on the particle size distribution. Consequently, the true particle size distributions were found to be somewhat narrower than those calculated using an erroneous single density value, with smaller particles being particularly sensitive to this artifact. PMID:27478250

  9. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N{sub 2} adsorption analysis, BJH and BET tests. The overall results showed that: (1) The mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples. (2) An exceptional BET surface area of 1869 m{sup 2}/g was obtained for a ZIF-8 sample with mean pore size of 2.5 nm. (3) Applying high concentrations of Pebax 1657 to the synthesis solution led to a higher surface area, larger pore size and smaller particle size for ZIF-8 samples. (4) Both an increase in temperature and a decrease in the MeIM/Zn{sup 2+} molar ratio increased the particle size, pore size, pore volume, crystallinity and BET surface area of all investigated ZIF-8 samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m{sup 2}/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature had an increasing effect on the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn{sup 2+} had an increasing effect on the textural properties of ZIF-8 samples.

  11. The structure of the ISM in the Zone of Avoidance by high-resolution multi-wavelength observations

    NASA Astrophysics Data System (ADS)

    Tóth, L. V.; Doi, Y.; Pinter, S.; Kovács, T.; Zahorecz, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Onishi, T.

    2018-05-01

    We estimate the column density of the Galactic foreground interstellar medium (GFISM) in the direction of extragalactic sources. All-sky AKARI FIS infrared sky survey data might be used to trace the GFISM with a resolution of 2 arcminutes. The AKARI-based GFISM hydrogen column density estimates are compared with similar quantities based on HI 21cm measurements of various resolutions and on Planck results. High spatial resolution observations of the GFISM may be important for recalculating the physical parameters of gamma-ray burst (GRB) host galaxies using the updated foreground parameters.

  12. New more accurate calculations of the ground state potential energy surface of H(3) (+).

    PubMed

    Pavanello, Michele; Tung, Wei-Cheng; Leonarski, Filip; Adamowicz, Ludwik

    2009-02-21

    Explicitly correlated Gaussian functions with floating centers have been employed to recalculate the ground state potential energy surface (PES) of the H(3) (+) ion with much higher accuracy than previously achieved. The nonlinear parameters of the Gaussians (i.e., the exponents and the centers) have been variationally optimized with a procedure employing the analytical gradient of the energy with respect to these parameters. The basis sets for calculating new PES points were guessed from the points already calculated. This allowed us to considerably speed up the calculations and achieve very high accuracy of the results.

  13. Evaluation of volume change in rectum and bladder during application of image-guided radiotherapy for prostate carcinoma

    NASA Astrophysics Data System (ADS)

    Luna, J. A.; Rojas, J. I.

    2016-07-01

    All prostate cancer patients from Centro Médico Radioterapia Siglo XXI receive Volumetric Modulated Arc Therapy (VMAT). This therapy uses image-guided radiotherapy (IGRT) with Cone Beam Computed Tomography (CBCT). This study compares the planned dose in the reference CT image against the delivered dose recalculated on the CBCT image. The purpose of this study is to evaluate the anatomic changes and the related dosimetric effects based directly on weekly CBCT images for patients with prostate cancer undergoing VMAT treatment. The collected data were analyzed using one-way ANOVA.

  14. Semi-analytical model for a slab one-dimensional photonic crystal

    NASA Astrophysics Data System (ADS)

    Libman, M.; Kondratyev, N. M.; Gorodetsky, M. L.

    2018-02-01

    In our work we justify the applicability of a dielectric mirror model to the description of a real photonic crystal. We demonstrate that a simple one-dimensional model of a multilayer mirror can be employed for modeling of a slab waveguide with periodically changing width. It is shown that this width change can be recalculated into an effective refractive-index modulation. The applicability of the transfer-matrix method for calculating reflection properties was demonstrated. Finally, our 1-D model was employed to analyze the reflection properties of a 2-D structure - a slab photonic crystal with a number of elliptic holes.
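
    For reference, the transfer-matrix calculation mentioned above can be sketched for a quarter-wave stack at normal incidence using 2x2 characteristic matrices; the effective indices, layer count, and wavelength below are arbitrary example values, and the mapping from waveguide width modulation to effective index is not included.

```python
import numpy as np

def mirror_reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=1.45):
    """Reflectance of a multilayer stack at normal incidence via 2x2
    characteristic matrices (transfer-matrix method)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength          # phase thickness of the layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Example: 10 quarter-wave pairs of hypothetical effective indices 2.1 / 1.6 at 1550 nm
lam = 1550e-9
ns, ds = [], []
for _ in range(10):
    for n in (2.1, 1.6):
        ns.append(n)
        ds.append(lam / (4 * n))                        # quarter-wave optical thickness
print(mirror_reflectance(ns, ds, lam))
```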

  15. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  16. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
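
    The package itself handles multivariate logistic models with confounders; as a rough hand check only, the classical normal-approximation formula for comparing two exposure proportions in an unmatched 1:1 case-control study is sketched below (a standard textbook formula, not this package's method).

```python
from math import ceil, sqrt
from scipy.stats import norm

def cases_per_group(p_exposed_cases, p_exposed_controls, alpha=0.05, power=0.8):
    """Unmatched case-control sample size per group for comparing two exposure
    proportions (normal approximation, 1:1 case:control ratio)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_exposed_cases + p_exposed_controls) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_exposed_cases * (1 - p_exposed_cases)
                        + p_exposed_controls * (1 - p_exposed_controls))) ** 2
    return ceil(num / (p_exposed_cases - p_exposed_controls) ** 2)

# Hypothetical exposure prevalences: 30% in cases vs 20% in controls
print(cases_per_group(0.30, 0.20))
```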

  17. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.

  18. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) was calculated for the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained for a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test, as the number of patients needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study ES^ estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
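
    A sketch of how post hoc sample sizes and their interval can be reproduced from a standardized effect size estimate and its 95% CI, using the one-sample t-test power routine in statsmodels; the effect-size values below are placeholders, not the study's data.

```python
from math import ceil
from statsmodels.stats.power import TTestPower

def n_for_effect(es, alpha=0.05, power=0.8):
    """Patients needed for a one-sample t-test to detect standardized effect `es`."""
    return ceil(TTestPower().solve_power(effect_size=es, alpha=alpha, power=power))

# Hypothetical point estimate and 95% CI for the standardized effect size
es_hat, es_lo, es_hi = 0.63, 0.19, 1.07
# Point estimate of n, and its interval (larger effects need fewer patients)
print(n_for_effect(es_hat), (n_for_effect(es_hi), n_for_effect(es_lo)))
```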

  19. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
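
    A compact sketch of the Monte Carlo subsampling idea described above, run on synthetic tree-level data (the Fd values and plot composition are invented): for each candidate sample size, trees are drawn repeatedly without replacement and the relative error of the estimated mean sap flux is summarized.

```python
import numpy as np

rng = np.random.default_rng(42)
fd_all = rng.lognormal(mean=0.0, sigma=0.4, size=58)   # synthetic Fd for 58 trees
js_true = fd_all.mean()                                # "true" mean stand sap flux

def error_quantile(sample_size, n_draws=10000, q=0.95):
    """95th percentile of the absolute relative error in mean Fd when only
    `sample_size` trees are measured."""
    errs = [abs(rng.choice(fd_all, sample_size, replace=False).mean() - js_true) / js_true
            for _ in range(n_draws)]
    return np.quantile(errs, q)

# Potential error shrinks with sample size and eventually levels off
for k in (5, 10, 15, 20, 30):
    print(k, round(error_quantile(k), 3))
```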

  20. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariantly lead to poor precision of estimates for current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely larger sample sizes are required for detecting more subtle reductions in malaria transmission but those invariantly increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
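
    A small sketch of the data-generating step described above: age-specific seropositivity is simulated from a reversible catalytic model in which the SCR drops from one value to another at a change point some years before the survey. All parameter values are illustrative; power would then be estimated by fitting the stable and change-point models to many such simulated surveys.

```python
import numpy as np

def seroprevalence(age, lam_before, lam_after, rho, change_years_ago):
    """Reversible catalytic model P'(t) = lam*(1-P) - rho*P, with the
    seroconversion rate (SCR) dropping `change_years_ago` before the survey."""
    def equilibrium(lam):
        return lam / (lam + rho)
    if age <= change_years_ago:                        # whole life under the new SCR
        return equilibrium(lam_after) * (1 - np.exp(-(lam_after + rho) * age))
    # Phase 1: old SCR for (age - change_years_ago) years, starting seronegative
    p = equilibrium(lam_before) * (1 - np.exp(-(lam_before + rho) * (age - change_years_ago)))
    # Phase 2: relax toward the new equilibrium for the remaining years
    return equilibrium(lam_after) + (p - equilibrium(lam_after)) * np.exp(-(lam_after + rho) * change_years_ago)

rng = np.random.default_rng(0)
ages = rng.integers(1, 60, size=500)                   # cross-sectional survey of 500 people
p = np.array([seroprevalence(a, 0.1, 0.02, 0.01, 10) for a in ages])
seropositive = rng.binomial(1, p)                      # simulated antibody status
print(seropositive.mean())
```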

  1. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770

  2. Regression Trees Identify Relevant Interactions: Can This Improve the Predictive Performance of Risk Adjustment?

    PubMed

    Buchner, Florian; Wasem, Jürgen; Schillo, Sonja

    2017-01-01

    Risk equalization formulas have been refined since their introduction about two decades ago. Because of the complexity and the abundance of possible interactions between the variables used, hardly any interactions are considered. A regression tree is used to systematically search for interactions, a methodologically new approach in risk equalization. Analyses are based on a data set of nearly 2.9 million individuals from a major German social health insurer. A two-step approach is applied: In the first step a regression tree is built on the basis of the learning data set. Terminal nodes characterized by more than one morbidity-group split represent interaction effects of different morbidity groups. In the second step the 'traditional' weighted least squares regression equation is expanded by adding interaction terms for all interactions detected by the tree, and regression coefficients are recalculated. The resulting risk adjustment formula shows an improvement in the adjusted R2 from 25.43% to 25.81% on the evaluation data set. Predictive ratios are calculated for subgroups affected by the interactions. The R2 improvement detected is only marginal. According to the sample level performance measures used, omitting a considerable number of morbidity interactions entails no relevant loss in accuracy. Copyright © 2015 John Wiley & Sons, Ltd.
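
    A minimal sketch of the two-step idea on synthetic data: a shallow regression tree is used to surface candidate interactions, which are then added as product terms to a least-squares refit (ordinary least squares here in place of the paper's weighted regression). The interaction-detection rule below, pairs of variables the tree splits on, is cruder than the paper's terminal-node criterion, and all variable names and data are invented.

```python
from itertools import combinations
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor

# Synthetic risk-adjustment data: two morbidity flags whose interaction drives cost
rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({"morb_a": rng.integers(0, 2, n), "morb_b": rng.integers(0, 2, n)})
cost = 100 + 200 * X.morb_a + 150 * X.morb_b + 400 * X.morb_a * X.morb_b + rng.normal(0, 50, n)

# Step 1: grow a shallow tree; variables it splits on point to candidate interactions
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=100).fit(X, cost)
used = [X.columns[i] for i in set(tree.tree_.feature) if i >= 0]
candidates = list(combinations(sorted(used), 2))

# Step 2: refit least squares with the detected interaction terms added
design = X.copy()
for a, b in candidates:
    design[f"{a}:{b}"] = X[a] * X[b]
fit = sm.OLS(cost, sm.add_constant(design)).fit()
print(fit.params.round(1))
```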

  3. Tracking multiple particles in fluorescence time-lapse microscopy images via probabilistic data association.

    PubMed

    Godinez, William J; Rohr, Karl

    2015-02-01

    Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.

  4. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply on a lower spatial scale should be further investigated.

  5. Equation of state of Mo from shock compression experiments on preheated samples

    NASA Astrophysics Data System (ADS)

    Fat'yanov, O. V.; Asimow, P. D.

    2017-03-01

    We present a reanalysis of reported Hugoniot data for Mo, including both experiments shocked from ambient temperature (T) and those preheated to 1673 K, using the most general methods of least-squares fitting to constrain the Grüneisen model. This updated Mie-Grüneisen equation of state (EOS) is used to construct a family of maximum likelihood Hugoniots of Mo from initial temperatures of 298 to 2350 K and a parameterization valid over this range. We adopted a single linear function at each initial temperature over the entire range of particle velocities considered. Total uncertainties of all the EOS parameters and correlation coefficients for these uncertainties are given. The improved predictive capabilities of our EOS for Mo are confirmed by (1) better agreement between calculated bulk sound speeds and published measurements along the principal Hugoniot, (2) good agreement between our Grüneisen data and three reported high-pressure γ(V) functions obtained from shock-compression of porous samples, and (3) very good agreement between our 1 bar Grüneisen values and γ(T) at ambient pressure recalculated from reported experimental data on the adiabatic bulk modulus Ks(T). Our analysis shows that an EOS constructed from shock compression data allows a much more accurate prediction of γ(T) values at 1 bar than those based on static compression measurements or first-principles calculations. Published calibrations of the Mie-Grüneisen EOS for Mo using static compression measurements only do not reproduce even low-pressure asymptotic values of γ(T) at 1 bar, where the most accurate experimental data are available.

  6. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
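
    The determinants listed above (alpha, power, variance, and effect size) map directly onto standard software; a short sketch using statsmodels with illustrative numbers:

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

# Detect a mean difference of 5 units with SD 12 (standardized effect ~ 0.42),
# two-sided alpha = 0.05 and 80% power; all values are illustrative only.
d = 5 / 12
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8,
                                          ratio=1.0, alternative='two-sided')
print(ceil(n_per_group))   # per-group sample size for these assumed inputs

# Halving the effect size roughly quadruples the required sample size
print(ceil(TTestIndPower().solve_power(effect_size=d / 2, alpha=0.05, power=0.8)))
```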

  7. Sample size and power for cost-effectiveness analysis (part 1).

    PubMed

    Glick, Henry A

    2011-03-01

    Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived is discussed.

  8. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests with a one-factor, two-level design, including sample size estimation formulas and their implementation using the formulas and the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents worked examples, which should help researchers implement the repetition principle during the research design phase.

  9. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is publicly controversial. Thus, from a biometrical point of view, an optimal sample size should be sought for these projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  10. A sequential bioequivalence design with a potential ethical advantage.

    PubMed

    Fuglsang, Anders

    2014-07-01

    This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.

  11. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  12. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
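
    The sample-size cost of multiplicity quoted above follows from a Bonferroni-style normal approximation in which the required n scales as ((z_{1-alpha/(2m)} + z_{1-beta}) / (z_{1-alpha/2} + z_{1-beta}))^2; the sketch below reproduces the 70% and 13% figures.

```python
from scipy.stats import norm

def relative_n(m_tests, m_ref=1, alpha=0.05, power=0.8):
    """Sample size needed for m_tests Bonferroni-corrected tests relative to
    m_ref tests, holding power constant (normal-approximation scaling)."""
    z_b = norm.ppf(power)
    def z_a(m):
        return norm.isf(alpha / (2 * m))   # two-sided critical value after correction
    return ((z_a(m_tests) + z_b) / (z_a(m_ref) + z_b)) ** 2

print(relative_n(10))                       # ~1.70: about 70% more sample for 10 tests vs 1
print(relative_n(10_000_000, 1_000_000))    # ~1.13: about 13% more for 10M vs 1M tests
```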

  13. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    PubMed

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  14. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
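
    For randomly (Poisson) distributed items, the per-core detection probability described above has a closed form, P(at least one item) = 1 - exp(-density * core area); a quick sketch (clumped distributions would need simulation, as in the study):

```python
import math

def detection_probability(items_per_m2, core_area_cm2):
    """P(>= 1 item in a core) when items are randomly (Poisson) distributed."""
    expected_items = items_per_m2 * core_area_cm2 / 10_000  # convert cm^2 to m^2
    return 1.0 - math.exp(-expected_items)

# Low benthic density (500 items/m^2) with small to large core samplers
for area in (25, 50, 100):
    print(area, round(detection_probability(500, area), 3))
```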

  15. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  16. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.

  17. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
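
    The following sketch (hypothetical two-stage matrix and vital rates, not the authors' data) shows how sampling variance in estimated vital rates propagates into the estimate of lambda at small sample sizes.

```python
# Toy demonstration with an invented two-stage matrix model: bias and spread of
# lambda when vital rates are estimated from n sampled individuals per stage.
import numpy as np

rng = np.random.default_rng(2)

true_s_juv, true_s_adult, fecundity = 0.5, 0.8, 1.2   # hypothetical vital rates

def lam(s_juv, s_adult):
    """Dominant eigenvalue (lambda) of a simple 2-stage projection matrix."""
    A = np.array([[0.0, fecundity],
                  [s_juv, s_adult]])
    return np.max(np.real(np.linalg.eigvals(A)))

true_lambda = lam(true_s_juv, true_s_adult)

for n in (10, 25, 100, 1000):                          # individuals sampled per stage
    est = [lam(rng.binomial(n, true_s_juv) / n,
               rng.binomial(n, true_s_adult) / n) for _ in range(2000)]
    print(f"n={n:5d}: mean lambda = {np.mean(est):.3f} "
          f"(true {true_lambda:.3f}), SD = {np.std(est):.3f}")
```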

  18. The generalized liquid drop model alpha-decay formula: Predictability analysis and superheavy element alpha half-lives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta-Schubert, N.; Reyes, M.A.

    2007-11-15

    The predictive accuracy of the generalized liquid drop model (GLDM) formula for alpha-decay half-lives has been investigated in a detailed manner and a variant of the formula with improved coefficients is proposed. The method employs the experimental alpha half-lives of the well-known alpha standards to obtain the coefficients of the analytical formula using the experimental Qα values (the DSR-E formula), as well as the finite range droplet model (FRDM) derived Qα values (the FRDM-FRDM formula). The predictive accuracy of these formulae was checked against the experimental alpha half-lives of an independent set of nuclei (TEST) that span approximately the same Z, A region as the standards and possess reliable alpha spectroscopic data, and were found to yield good results for the DSR-E formula but not for the FRDM-FRDM formula. The two formulae were used to obtain the alpha half-lives of superheavy elements (SHE) and heavy nuclides where the relative accuracy was found to be markedly improved for the FRDM-FRDM formula, which corroborates the appropriateness of the FRDM masses and the GLDM prescription for high Z, A nuclides. Further improvement resulted, especially for the FRDM-FRDM formula, after a simple linear optimization over the calculated and experimental half-lives of TEST was used to re-calculate the half-lives of the SHE and heavy nuclides. The advantage of this optimization was that it required no re-calculation of the coefficients of the basic DSR-E or FRDM-FRDM formulae. The half-lives for 324 medium-mass to superheavy alpha decaying nuclides, calculated using these formulae and the comparison with experimental half-lives, are presented.

  19. Finding Long Lost Lexell's Comet: The Fate of the First Discovered Near-Earth Object

    NASA Astrophysics Data System (ADS)

    Ye, Quan-Zhi; Wiegert, Paul A.; Hui, Man-To

    2018-04-01

    Jupiter-family Comet D/1770 L1 (Lexell) was the first discovered Near-Earth Object (NEO) and passed the Earth on 1770 July 1 at a recorded distance of 0.015 au. The comet was subsequently lost due to unfavorable observing circumstances during its next apparition followed by a close encounter with Jupiter in 1779. Since then, the fate of D/Lexell has attracted interest from the scientific community, and now we revisit this long-standing question. We investigate the dynamical evolution of D/Lexell based on a set of orbits recalculated using the observations made by Charles Messier, the comet’s discoverer, and find that there is a 98% chance that D/Lexell remains in the solar system by the year of 2000. This finding remains valid even if a moderate non-gravitational effect is imposed. Messier’s observations also suggest that the comet is one of the largest known near-Earth comets, with a nucleus of ≳10 km in diameter. This implies that the comet should have been detected by contemporary NEO surveys regardless of its activity level if it has remained in the inner solar system. We identify asteroid 2010 JL33 as a possible descendant of D/Lexell, with a 0.8% probability of chance alignment, but a direct orbital linkage of the two bodies has not been successfully accomplished. We also use the recalculated orbit to investigate the meteors potentially originating from D/Lexell. While no associated meteors have been unambiguously detected, we show that meteor observations can be used to better constrain the orbit of D/Lexell despite the comet being long lost.

  20. Hair-to-blood ratio and biological half-life of mercury: experimental study of methylmercury exposure through fish consumption in humans.

    PubMed

    Yaginuma-Sakurai, Kozue; Murata, Katsuyuki; Iwai-Shimada, Miyuki; Nakai, Kunihiko; Kurokawa, Naoyuki; Tatsuta, Nozomi; Satoh, Hiroshi

    2012-02-01

    The hair-to-blood ratio and biological half-life of methylmercury in a one-compartment model seem to differ between past and recent studies. To reevaluate them, 27 healthy volunteers were exposed to methylmercury at the provisional tolerable weekly intake (3.4 µg/kg body weight/week) for adults through fish consumption for 14 weeks, followed by a 15-week washout period after the cessation of exposure. Blood was collected every 1 or 2 weeks, and hair was cut every 4 weeks. Total mercury (T-Hg) concentrations were analyzed in blood and hair. The T-Hg levels of blood and hair changed with time (p < 0.001). The mean concentrations increased from 6.7 ng/g at week 0 to 26.9 ng/g at week 14 in blood, and from 2.3 to 8.8 µg/g in hair. The mean hair-to-blood ratio after the adjustment for the time lag from blood to hair was 344 ± 54 (S.D.) for the entire period. The half-lives of T-Hg were calculated from raw data to be 94 ± 23 days for blood and 102 ± 31 days for hair, but the half-lives recalculated after subtracting the background levels from the raw data were 57 ± 18 and 64 ± 22 days, respectively. In conclusion, the hair-to-blood ratio of methylmercury, based on past studies, appears to be underestimated in light of recent studies. The crude half-life may be preferred rather than the recalculated one because of the practicability and uncertainties of the background level, though the latter half-life may approximate the conventional one.
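
    A small worked example of the half-life calculation described above, using synthetic washout data and an assumed background level; it only illustrates why subtracting the background shortens the fitted half-life.

```python
# Synthetic example (not the study's data): one-compartment elimination half-life
# from a log-linear fit, with and without subtracting an assumed background level.
import numpy as np

t = np.arange(0, 106, 15.0)                    # days after exposure stopped
background = 5.0                               # assumed pre-exposure T-Hg level (ng/g)
true_half_life = 60.0
k = np.log(2) / true_half_life
conc = background + 22.0 * np.exp(-k * t)      # synthetic blood T-Hg washout curve

def half_life(time, y):
    """Half-life (days) from a log-linear fit, assuming single-exponential decline."""
    slope, _ = np.polyfit(time, np.log(y), 1)
    return np.log(2) / -slope

print("crude (raw data):      ", round(half_life(t, conc), 1), "days")
print("background subtracted: ", round(half_life(t, conc - background), 1), "days")
```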

  1. The impact of using weight estimated from mammographic images vs. self-reported weight on breast cancer risk calculation

    NASA Astrophysics Data System (ADS)

    Nair, Kalyani P.; Harkness, Elaine F.; Gadde, Soujanye; Lim, Yit Y.; Maxwell, Anthony J.; Moschidis, Emmanouil; Foden, Philip; Cuzick, Jack; Brentnall, Adam; Evans, D. Gareth; Howell, Anthony; Astley, Susan M.

    2017-03-01

    Personalised breast screening requires assessment of individual risk of breast cancer, of which one contributory factor is weight. Self-reported weight has been used for this purpose, but may be unreliable. We explore the use of volume of fat in the breast, measured from digital mammograms. Volumetric breast density measurements were used to determine the volume of fat in the breasts of 40,431 women taking part in the Predicting Risk Of Cancer At Screening (PROCAS) study. Tyrer-Cuzick risk using self-reported weight was calculated for each woman. Weight was also estimated from the relationship between self-reported weight and breast fat volume in the cohort, and used to re-calculate Tyrer-Cuzick risk. Women were assigned to risk categories according to 10 year risk (below average <2%, average 2-3.49%, above average 3.5-4.99%, moderate 5-7.99%, high >=8%) and the original and re-calculated Tyrer-Cuzick risks were compared. Of the 716 women diagnosed with breast cancer during the study, 15 (2.1%) moved into a lower risk category, and 37 (5.2%) moved into a higher category when using weight estimated from breast fat volume. Of the 39,715 women without a cancer diagnosis, 1009 (2.5%) moved into a lower risk category, and 1721 (4.3%) into a higher risk category. The majority of changes were between below average and average risk categories (38.5% of those with a cancer diagnosis, and 34.6% of those without). No individual moved more than one risk group. Automated breast fat measures may provide a suitable alternative to self-reported weight for risk assessment in personalized screening.

  2. Reliability of a Single Light Source Purkinjemeter in Pseudophakic Eyes.

    PubMed

    Janunts, Edgar; Chashchina, Ekaterina; Seitz, Berthold; Schaeffel, Frank; Langenbucher, Achim

    2015-08-01

    To study the reliability of Purkinje image analysis for assessment of intraocular lens tilt and decentration in pseudophakic eyes. The study comprised 64 eyes of 39 patients. All eyes underwent phacoemulsification with intraocular lens implanted in the capsular bag. Lens decentration and tilt were measured multiple times by an infrared Purkinjemeter. A total of 396 measurements were performed 1 week and 1 month postoperatively. Lens tilt (Tx, Ty) and decentration (Dx, Dy) in horizontal and vertical directions, respectively, were calculated by dedicated software based on regression analysis for each measurement using only four images, and afterward, the data were averaged (mean values, MV) for repeated sequence of measurements. New software was designed by us for recalculating lens misalignment parameters offline, using a complete set of Purkinje images obtained through the repeated measurements (9 to 15 Purkinje images) (recalculated values, MV'). MV and MV' were compared using SPSS statistical software package. MV and MV' were found to be highly correlated for the Tx and Ty parameters (R2 > 0.9; p < 0.001), moderately correlated for the Dx parameter (R2 > 0.7; p < 0.001), and weakly correlated for the Dy parameter (R2 = 0.23; p < 0.05). Reliability was high (Cronbach α > 0.9) for all measured parameters. Standard deviation values were 0.86 ± 0.69 degrees, 0.72 ± 0.65 degrees, 0.04 ± 0.05 mm, and 0.23 ± 0.34 mm for Tx, Ty, Dx, and Dy, respectively. The Purkinjemeter demonstrated high reliability and reproducibility for lens misalignment parameters. To further improve reliability, we recommend capturing at least six Purkinje images instead of three.

  3. New orbit recalculations of comet C/1890 F1 Brooks and its dynamical evolution

    NASA Astrophysics Data System (ADS)

    Królikowska, Małgorzata; Dybczyński, Piotr A.

    2016-08-01

    C/1890 F1 Brooks belongs to a group of 19 comets used by Jan Oort to support his famous hypothesis on the existence of a spherical cloud containing hundreds of billions of comets with orbits of semi-major axes between 50 000 and 150 000 au. Comet Brooks stands out from this group because of a long series of astrometric observations as well as a nearly 2-yr-long observational arc. Rich observational material makes this comet an ideal target for testing the rationality of an effort to recalculate astrometric positions on the basis of original (comet-star) measurements using modern star catalogues. This paper presents the results of such a new analysis based on two different methods: (I) automatic re-reduction based on cometary positions and the (comet-star) measurements and (II) partially automatic re-reduction based on the contemporary data for the reference stars originally used. We show that both methods offer a significant reduction in the uncertainty of orbital elements. Based on the most preferred orbital solution, the dynamical evolution of comet Brooks during three consecutive perihelion passages is discussed. We conclude that C/1890 F1 is a dynamically old comet that passed the Sun at a distance below 5 au during its previous perihelion passage. Furthermore, its next perihelion passage will be a little closer than during the 1890-1892 apparition. C/1890 F1 is interesting also because it suffered extremely small planetary perturbations when it travelled through the planetary zone. Therefore, in the next passage through perihelion, it will once again be a comet from the Oort spike.

  4. Estimating the long-term costs of ischemic and hemorrhagic stroke for Australia: new evidence derived from the North East Melbourne Stroke Incidence Study (NEMESIS).

    PubMed

    Cadilhac, Dominique A; Carter, Rob; Thrift, Amanda G; Dewey, Helen M

    2009-03-01

    Stroke is associated with considerable societal costs. Cost-of-illness studies have been undertaken to estimate lifetime costs; most incorporating data up to 12 months after stroke. Costs of stroke, incorporating data collected up to 12 months, have previously been reported from the North East Melbourne Stroke Incidence Study (NEMESIS). NEMESIS now has patient-level resource use data for 5 years. We aimed to recalculate the long-term resource utilization of first-ever stroke patients and compare these to previous estimates obtained using data collected to 12 months. Population structure, life expectancy, and unit prices within the original cost-of-illness models were updated from 1997 to 2004. New Australian stroke survival and recurrence data up to 10 years were incorporated, as well as cross-sectional resource utilization data at 3, 4, and 5 years from NEMESIS. To enable comparisons, 1997 costs were inflated to 2004 prices and discounting was standardized. In 2004, 27 291 ischemic stroke (IS) and 4291 intracerebral hemorrhagic stroke (ICH) first-ever events were estimated. Average annual resource use after 12 months was AU$6022 for IS and AU$3977 for ICH. This is greater than the 1997 estimates for IS (AU$4848) and less than those for ICH (previously AU$10 692). The recalculated average lifetime costs per first-ever case differed for IS (AU$57 106 versus AU$52 855 [1997]), but differed more for ICH (AU$49 995 versus AU$92 308 [1997]). Basing lifetime cost estimates on short-term data overestimated the costs for ICH and underestimated those for IS. Patterns of resource use varied by stroke subtype and, overall, the societal cost impact was large.

  5. Recalculation of regional and detailed gravity database from Slovak Republic and qualitative interpretation of new generation Bouguer anomaly map

    NASA Astrophysics Data System (ADS)

    Pasteka, Roman; Zahorec, Pavol; Mikuska, Jan; Szalaiova, Viktoria; Papco, Juraj; Krajnak, Martin; Kusnirak, David; Panisova, Jaroslava; Vajda, Peter; Bielik, Miroslav

    2014-05-01

    In this contribution, results of the ongoing project "Bouguer anomalies of new generation and the gravimetrical model of Western Carpathians (APVV-0194-10)" are presented. The existing homogenized regional database (212 478 points) was enlarged by approximately 107 500 archive detailed gravity measurements. These added gravity values were measured between 1976 and the present and therefore had to be unified and reprocessed. Improved positions for more than 8500 measured points were obtained by digitizing archive maps (some local errors were recognized within particular data sets). Besides the local errors (due to wrong positions, heights, or gravity values of measured points), several areas of systematic errors were found, probably caused by gravity measurement or processing errors. Some of these were confirmed and subsequently corrected by field measurements carried out within the current project. Special attention is paid to the recalculation of the terrain corrections: newly developed software was used together with the latest version of the digital terrain model of Slovakia, DMR-3. The main improvement of the new terrain-correction algorithm is the ability to evaluate the correction at the actual gravimeter position and to use a 3D polyhedral-body approximation (accounting for the spherical shape of the Earth). Several tests were also performed involving non-standard distant relief effects. A new complete Bouguer anomaly map was constructed and transformed by means of higher-derivative operators (tilt derivatives, TDX, theta derivatives, and the new TDXAS transformation), using a regularization approach. A new regional lineament of probably neotectonic character was recognized in the new map of complete Bouguer anomalies and was confirmed by in-situ field measurements.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, R; Ding, C; Jiang, S

    Purpose: Spine SRS/SAbR treatment plans typically require very steep dose gradients to meet spinal cord constraints and it is crucial that the dose distribution be accurate. However, these plans are typically calculated on helical free-breathing CT scans, which often contain motion artifacts. While the spine itself doesn't exhibit very much intra-fraction motion, tissues around the spine, particularly the liver, do move with respiration. We investigated the dosimetric effect of liver motion on dose distributions calculated on helical free-breathing CT scans for spine SAbR delivered to the T and L spine. Methods: We took 5 spine SAbR plans and used density overrides to simulate an average reconstruction CT image set, which would more closely represent the patient anatomy during treatment. The value used for the density override was 0.66 g/cc. All patients were planned using our standard beam arrangement, which consists of 13 coplanar step and shoot IMRT beams. The original plan was recalculated with the same MU on the "average" scan and target coverage and spinal cord dose were compared to the original plan. Results: The average changes in minimum PTV dose, PTV coverage, max cord dose and volume of cord receiving 10 Gy were 0.6%, 0.8%, 0.3% and 4.4% (0.012 cc), respectively. Conclusion: SAbR spine plans are surprisingly robust relative to surrounding organ motion due to respiration. Motion artifacts in helical planning CT scans do not cause clinically significant differences when these plans are re-calculated on pseudo-average CT reconstructions. This is likely due to the beam arrangement used because only three beams pass through the liver and only one beam passes completely through the density override. The effect of the respiratory motion on VMAT plans for spine SAbR is being evaluated.

  7. A new DMSP magnetometer and auroral boundary data set and estimates of field-aligned currents in dynamic auroral boundary coordinates

    NASA Astrophysics Data System (ADS)

    Kilcommons, Liam M.; Redmon, Robert J.; Knipp, Delores J.

    2017-08-01

    We have developed a method for reprocessing the multidecadal, multispacecraft Defense Meteorological Satellite Program Special Sensor Magnetometer (DMSP SSM) data set and have applied it to 15 spacecraft years of data (DMSP Flight 16-18, 2010-2014). This Level-2 data set improves on other available SSM data sets with recalculated spacecraft locations and magnetic perturbations, artifact signal removal, representations of the observations in geomagnetic coordinates, and in situ auroral boundaries. Spacecraft locations have been recalculated using ground-tracking information. Magnetic perturbations (measured field minus modeled main field) are recomputed. The updated locations ensure the appropriate model field is used. We characterize and remove a slow-varying signal in the magnetic field measurements. This signal is a combination of ring current and measurement artifacts. A final artifact remains after processing: step discontinuities in the baseline caused by activation/deactivation of spacecraft electronics. Using coincident data from the DMSP precipitating electrons and ions instrument (SSJ4/5), we detect the in situ auroral boundaries with an improvement to the Redmon et al. (2010) algorithm. We embed the location of the aurora and an accompanying figure of merit in the Level-2 SSM data product. Finally, we demonstrate the potential of this new data set by estimating field-aligned current (FAC) density using the Minimum Variance Analysis technique. The FAC estimates are then expressed in dynamic auroral boundary coordinates using the SSJ-derived boundaries, demonstrating a dawn-dusk asymmetry in average FAC location relative to the equatorward edge of the aurora. The new SSM data set is now available in several public repositories.

  8. State of microbial communities in paleosols buried under kurgans of the desert-steppe zone in the Middle Bronze Age (27th-26th centuries BC) in relation to the dynamics of climate humidity

    NASA Astrophysics Data System (ADS)

    Khomutova, T. E.; Demkina, T. S.; Borisov, A. V.; Shishlina, I. I.

    2017-02-01

    The size and structure of microbial pool in light chestnut paleosols and paleosolonetz buried under kurgans of the Middle Bronze Age 4600-4500 years ago (the burial mound heights are 45-173 cm), as well as in recent analogues in the desert-steppe zone (Western Ergeni, Salo-Manych Ridge), have been studied. In paleosol profiles, the living microbial biomass estimated from the content of phospholipids varies from 35 to 258% of the present-day value; the active biomass (responsive to glucose addition) in paleosols is 1‒3 orders of magnitude lower than in recent analogues. The content of soil phospholipids is recalculated to that of microbial carbon, and its share in the total soil organic carbon is determined: it is 4.5-7.0% in recent soils and up to three times higher in the remained organic carbon of paleosols. The stability of microbial communities in the B1 horizon of paleosols is 1.3-2.2 times higher than in the upper horizon; in recent soils, it has a tendency to a decrease. The share of microorganisms feeding on plant residues in the ecological-trophic structure of paleosol microbial communities is higher by 23-35% and their index of oligotrophy is 3-5 times lower than in recent analogues. The size of microbial pool and its structure indicate a significantly higher input of plant residues into soils 4600-4500 years ago than in the recent time, which is related to the increase in atmospheric humidity in the studied zone. However, the occurrence depths of salt accumulations in profiles of the studied soils contradict this supposition. A short-term trend of increase in climate humidity is supposed, as indicated by microbial parameters (the most sensitive soil characteristics) or changes in the annual variation of precipitation (its increase in the warm season) during the construction of the mounds under study.

  9. Recommendations for dose calculations of lung cancer treatment plans treated with stereotactic ablative body radiotherapy (SABR)

    NASA Astrophysics Data System (ADS)

    Devpura, S.; Siddiqui, M. S.; Chen, D.; Liu, D.; Li, H.; Kumar, S.; Gordon, J.; Ajlouni, M.; Movsas, B.; Chetty, I. J.

    2014-03-01

    The purpose of this study was to systematically evaluate dose distributions computed with 5 different dose algorithms for patients with lung cancers treated using stereotactic ablative body radiotherapy (SABR). Treatment plans for 133 lung cancer patients, initially computed with a 1D-pencil beam (equivalent-path-length, EPL-1D) algorithm, were recalculated with 4 other algorithms commissioned for treatment planning, including 3-D pencil-beam (EPL-3D), anisotropic analytical algorithm (AAA), collapsed cone convolution superposition (CCC), and Monte Carlo (MC). The plan prescription dose was 48 Gy in 4 fractions normalized to the 95% isodose line. Tumors were classified according to location: peripheral tumors surrounded by lung (lung-island, N=39), peripheral tumors attached to the rib-cage or chest wall (lung-wall, N=44), and centrally-located tumors (lung-central, N=50). Relative to the EPL-1D algorithm, PTV D95 and mean dose values computed with the other 4 algorithms were lowest for "lung-island" tumors with smallest field sizes (3-5 cm). On the other hand, the smallest differences were noted for lung-central tumors treated with largest field widths (7-10 cm). Amongst all locations, dose distribution differences were most strongly correlated with tumor size for lung-island tumors. For most cases, convolution/superposition and MC algorithms were in good agreement. Mean lung dose (MLD) values computed with the EPL-1D algorithm were highly correlated with that of the other algorithms (correlation coefficient =0.99). The MLD values were found to be ~10% lower for small lung-island tumors with the model-based (conv/superposition and MC) vs. the correction-based (pencil-beam) algorithms with the model-based algorithms predicting greater low dose spread within the lungs. This study suggests that pencil beam algorithms should be avoided for lung SABR planning. For the most challenging cases, small tumors surrounded entirely by lung tissue (lung-island type), a Monte-Carlo-based algorithm may be warranted.

  10. SU-F-T-201: Acceleration of Dose Optimization Process Using Dual-Loop Optimization Technique for Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujimoto, R

    Purpose: The purpose was to demonstrate a developed acceleration technique of dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary of the two parts is varied depending on the beam energy and water equivalent depth by utilizing the beam size as a singular threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique to the optimization process in the TPS and investigated the dependence on the target volume of the speedup effect and applicability to the worst-case optimization (WCO) in benchmarks. Results: We created irradiation plans for various cubic targets and measured the optimization time varying the target volume. The speedup effect was improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm3 target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target's prescribed dose and OAR's Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS's optimization. The technique was effective particularly for large target cases.
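
    The dual-loop idea can be sketched generically as follows; this is a toy least-squares spot-weight optimization with an arbitrary main/halo split, not the actual TPS implementation.

```python
# Conceptual sketch only (generic projected-gradient spot-weight optimization,
# not the vendor's algorithm): split the influence matrix into a "main" and a
# "halo" part; the inner loop optimizes against the main part with the halo
# dose frozen, and the outer loop recomputes the halo dose.
import numpy as np

rng = np.random.default_rng(3)

n_vox, n_spots = 400, 60
# Random sparse-ish influence matrix: dose per unit weight of each spot in each voxel.
D_full = rng.random((n_vox, n_spots)) * (rng.random((n_vox, n_spots)) > 0.7)
# Crude main/halo split: the larger contributions of each spot form the "main" part.
mask_main = D_full > 0.5 * D_full.max(axis=0)
D_main = np.where(mask_main, D_full, 0.0)
D_halo = D_full - D_main
target = np.full(n_vox, 2.0)                     # prescribed dose per voxel

w = np.ones(n_spots)                             # spot weights
step = 1.0 / np.linalg.norm(D_main, 2) ** 2      # safe gradient step size
for outer in range(5):
    halo_dose = D_halo @ w                       # halo dose recomputed per outer loop
    for _ in range(100):                         # inner loop uses only the main part
        grad = D_main.T @ (D_main @ w + halo_dose - target)
        w = np.maximum(w - step * grad, 0.0)     # projected (non-negative) gradient step
    print(f"outer {outer}: residual = {np.linalg.norm(D_full @ w - target):.2f}")
```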

  11. 76 FR 56141 - Notice of Intent To Request New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...

  12. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  13. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowing the right sample size allows us to judge whether results published in medical papers come from a suitably designed study and whether their conclusions are supported by the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, effect size, and the significance level and power of the test. To decide which formula to use, we must define what kind of study we have: a prevalence study, a study of mean values, or a comparative study. In this paper we explain some basic statistical concepts and describe four simple worked examples of sample size estimation.
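
    As a worked example of the comparative (two-means) case described above, the usual normal-approximation formula can be evaluated directly; the effect size and standard deviation below are arbitrary.

```python
# Worked example of the standard normal-approximation sample size formula for
# comparing two means (alpha, power, SD, and effect size are illustrative).
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided comparison of two means."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Detect a 5-unit mean difference with SD 10 at alpha = 0.05 and 80% power.
print(n_per_group(delta=5, sd=10))
```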

  14. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    PubMed Central

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  15. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  16. Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests

    Treesearch

    Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...

  17. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    ABBREVIATIONS: AICc, Akaike's Information Criterion with small sample size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater...; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic... Parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities, and low capture biases. For NGS-CR, sample

  18. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
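
    A hedged sketch of the calculation at issue: plugging a pilot-study standard deviation into the power function of the two-sample t test (values are hypothetical).

```python
# Hypothetical illustration: power of the two-sample t test when the common SD
# is replaced by a pilot-study estimate (all values are invented).
from scipy import stats

def t_test_power(n_per_group, delta, sd, alpha=0.05):
    """Power of a two-sided two-sample t test with equal group sizes."""
    df = 2 * n_per_group - 2
    ncp = delta / (sd * (2 / n_per_group) ** 0.5)   # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

pilot_sd = 8.0                                      # pilot estimate of the common SD
for n in (20, 30, 40, 50):
    print(f"n per group = {n:2d}: power = {t_test_power(n, delta=5.0, sd=pilot_sd):.3f}")
```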

  19. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different size sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  20. Electrical and magnetic properties of nano-sized magnesium ferrite

    NASA Astrophysics Data System (ADS)

    T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.

    2015-02-01

    Nano-sized magnesium ferrite was synthesized using the sol-gel technique. Structural characterization was done using an X-ray diffractometer and a Fourier transform infrared spectrometer. A vibrating sample magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single phase without any impurity. Particle size calculation shows that the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. The magnetic measurements show that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at temperatures of 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.

  1. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrap technique is distribution-independent and provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, bootstrap sample size estimation for comparing two parallel-design arms with continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculations by mathematical formulas (under the normal distribution assumption) for the same data are also carried out. The power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by the bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation at the outset, and to use the same statistical method planned for the subsequent analysis on each bootstrap sample during sample size estimation, provided historical data are available that are representative of the population to which the proposed trial will extrapolate.
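
    A minimal sketch of the bootstrap power estimation described above, assuming hypothetical "historical" data for the two arms and using the Wilcoxon rank-sum (Mann-Whitney) test.

```python
# Hedged sketch of bootstrap power estimation (invented "historical" data): power
# at a candidate sample size is the fraction of resampled trials that reject H0.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)

# Stand-in "historical" data for the two arms (skewed, non-normal).
hist_control = rng.lognormal(mean=0.0, sigma=0.8, size=150)
hist_treated = rng.lognormal(mean=0.4, sigma=0.8, size=150)

def bootstrap_power(n_per_group, n_boot=2000, alpha=0.05):
    """Estimated power of the Wilcoxon rank-sum test at the given group size."""
    hits = 0
    for _ in range(n_boot):
        a = rng.choice(hist_control, size=n_per_group, replace=True)
        b = rng.choice(hist_treated, size=n_per_group, replace=True)
        if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / n_boot

for n in (20, 40, 60):
    print(f"n per group = {n}: estimated power = {bootstrap_power(n):.2f}")
```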

  2. Radiation damage study of thin YAG:Ce scintillator using low-energy protons

    NASA Astrophysics Data System (ADS)

    Novotný, P.; Linhart, V.

    2017-07-01

    The radiation hardness of a 50 μm thin YAG:Ce scintillator, expressed as the dependence of signal efficiency on 3.1 MeV proton fluence, was measured and analysed using an X-ray beam. The signal efficiency is the ratio of signals given by a CCD chip after and before radiation damage. The CCD chip was placed outside the primary beam to protect it from radiation damage. Using simplified assumptions, the 3.1 MeV proton fluences were recalculated to: (1) 150 MeV proton fluences, to estimate the radiation damage of this sample under the conditions at proton therapy centres during medical treatment; (2) 150 MeV proton doses, to allow comparison of the radiation hardness of the studied sample with that of other detectors used in medical physics; and (3) 1 MeV neutron equivalent fluences, to compare the radiation hardness of the studied sample with the properties of position-sensitive silicon and diamond detectors used in nuclear and particle physics. The following results were obtained. The signal efficiency of the studied sample varies only slightly (±3%) up to a 3.1 MeV proton fluence of about (4-8) × 10¹⁴ cm⁻². This limit is equivalent to a 150 MeV proton fluence of (5-9) × 10¹⁶ cm⁻², a 150 MeV proton dose of 350-600 kGy, and a 1 MeV neutron fluence of (1-2) × 10¹⁶ cm⁻². Beyond this limit, the signal efficiency gradually decreases. A fifty percent decrease in signal efficiency is reached around a 3.1 MeV fluence of (1-2) × 10¹⁶ cm⁻², which is equivalent to a 150 MeV proton fluence of around 2 × 10¹⁸ cm⁻², a 150 MeV proton dose of around 15 MGy, and a 1 MeV neutron equivalent fluence of (4-8) × 10¹⁷ cm⁻². In contrast to position-sensitive silicon and diamond radiation detectors, the studied sample has at least two orders of magnitude greater radiation resistance. Therefore, the YAG:Ce scintillator is a suitable material for monitoring primary beams of ionizing radiation.

  3. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    PubMed

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  4. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  5. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  6. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  7. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.

  8. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    NASA Astrophysics Data System (ADS)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

    The probability of coincidental clustering among orbits of comets, asteroids, and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it differs for groups of 2, 3, 4, … members. Because the probability of coincidental clustering is assessed by numerical simulation, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes, and we have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can differ significantly for orbital samples obtained by different observation techniques. For the user's convenience, we have also derived several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.

  9. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

    To address the disadvantages of classical sampling plans designed for traditional industrial products, we originally propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan is to inspect the lot consisting of map sheets, and the second is to inspect the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
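
    The sketch below is a simplified single-rank illustration (not the full TRASP formulation): it searches for the smallest sample size and acceptance number that satisfy assumed producer and consumer risks under the Poisson model used for the larger-lot case.

```python
# Simplified single-rank acceptance sampling search (assumed risks and quality
# levels, not the paper's TRASP optimization), using the Poisson model.
from scipy.stats import poisson

def find_plan(aql, lql, producer_risk=0.05, consumer_risk=0.10, n_max=2000):
    """Return (n, c) with P(accept | AQL) >= 1 - producer_risk and
    P(accept | LQL) <= consumer_risk, for the smallest feasible n."""
    for n in range(1, n_max + 1):
        for c in range(0, n + 1):
            p_accept_good = poisson.cdf(c, n * aql)   # lot at acceptance quality level
            p_accept_bad = poisson.cdf(c, n * lql)    # lot at limiting quality level
            if p_accept_good >= 1 - producer_risk and p_accept_bad <= consumer_risk:
                return n, c
    raise ValueError("no plan found up to n_max")

# e.g. AQL of 1% nonconforming features, limiting quality of 5%
print(find_plan(aql=0.01, lql=0.05))
```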

  10. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
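
    As a companion to the discussion above, a generic per-group sample size for comparing two proportions can be computed from an assumed control incidence and relative risk; this is the standard formula, not the authors' model-based calculation.

```python
# Standard two-proportion sample size calculation (generic companion, not the
# authors' compartmental model); the control incidence and RR are illustrative.
from math import ceil, sqrt
from scipy.stats import norm

def n_two_proportions(p_control, rr, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided comparison of two proportions."""
    p_treat = p_control * rr
    p_bar = (p_control + p_treat) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_control * (1 - p_control) + p_treat * (1 - p_treat))) ** 2
    return ceil(num / (p_control - p_treat) ** 2)

# e.g. 3% PID incidence in the control group and an assumed relative risk of 0.5
print(n_two_proportions(p_control=0.03, rr=0.5))
```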

  11. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
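
    For illustration, the coefficient of variation of cluster size and the standard unequal-cluster-size inflation of the design effect (as used for parallel cluster trials) can be computed as below; the cluster sizes and ICC are hypothetical, and SW-CRT-specific corrections are more involved.

```python
# Illustrative sketch (hypothetical cluster sizes and ICC): CV of cluster size
# and the usual unequal-cluster-size design-effect approximation for a parallel
# cluster trial; it does not reproduce SW-CRT-specific sample size methods.
import numpy as np

cluster_sizes = np.array([12, 25, 31, 40, 58, 64, 90, 120])
icc = 0.05                                    # assumed intracluster correlation

m_bar = cluster_sizes.mean()
cv = cluster_sizes.std(ddof=1) / m_bar        # coefficient of variation of cluster size
deff_equal = 1 + (m_bar - 1) * icc
deff_unequal = 1 + ((1 + cv**2) * m_bar - 1) * icc

print(f"mean cluster size = {m_bar:.1f}, CV = {cv:.2f}")
print(f"design effect (equal sizes)   = {deff_equal:.2f}")
print(f"design effect (unequal sizes) = {deff_unequal:.2f}")
```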

  12. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues: during water removal, the concentration gradient induces cracks that limit the sample size. Laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the optimization of the drying step, large-size spinel samples were obtained.

  13. The relationship between national-level carbon dioxide emissions and population size: an assessment of regional and temporal variation, 1960-2005.

    PubMed

    Jorgenson, Andrew K; Clark, Brett

    2013-01-01

    This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of Africa countries, 14% for the sample of Asia countries, 6.5% for the sample of Latin America countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings for this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.

  14. Sample size calculation for a proof of concept study.

    PubMed

    Yin, Yin

    2002-05-01

    Sample size calculation is vital for a confirmatory clinical trial since the regulatory agencies require the probability of making Type I error to be significantly small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC and the process of sample size calculation. The results will be presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.

  15. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
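
    A minimal simulation sketch of the kind of check described above (not the authors' code; the population parameters, replicate count, and the use of rejection rates as stand-ins for sensitivity and specificity are illustrative assumptions): it estimates how often the Shapiro-Wilk test flags Gaussian versus lognormal samples at n = 30.

    ```python
    # Sketch only: rejection rates of the Shapiro-Wilk test at small n,
    # for Gaussian and lognormal parent populations (assumed parameters).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n, reps, alpha = 30, 1000, 0.05

    gauss_reject = np.mean([stats.shapiro(rng.normal(0, 1, n))[1] < alpha
                            for _ in range(reps)])
    lognorm_reject = np.mean([stats.shapiro(rng.lognormal(0, 0.5, n))[1] < alpha
                              for _ in range(reps)])

    print(f"Gaussian samples rejected (false alarms): {gauss_reject:.2f}")
    print(f"Lognormal samples rejected (detections):  {lognorm_reject:.2f}")
    ```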

  16. Variation in aluminum, iron, and particle concentrations in oxic groundwater samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    NASA Astrophysics Data System (ADS)

    Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan

    2002-02-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering.

  17. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    PubMed

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
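
    The resampling logic described above can be sketched as follows; the data here are simulated (a lesion-load variable and an articulation score with a weak linear relation), not the study's patient sample, and a simple univariate regression R² stands in for the lesion-deficit effect size.

    ```python
    # Illustrative sketch, simulated data: draw bootstrap subsamples of several
    # sizes and record the effect size (R^2) and p value of a lesion-load vs.
    # deficit regression for each resample.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    N = 360
    lesion_load = rng.uniform(0, 1, N)
    deficit = 0.3 * lesion_load + rng.normal(0, 1, N)   # weak true effect (assumed)

    for n in (30, 60, 90, 120, 180, 360):
        r2, p = [], []
        for _ in range(2000):
            idx = rng.choice(N, size=n, replace=True)    # bootstrap resample
            res = stats.linregress(lesion_load[idx], deficit[idx])
            r2.append(res.rvalue ** 2)
            p.append(res.pvalue)
        print(f"n={n:3d}  median R^2={np.median(r2):.3f}  "
              f"proportion p<.05={np.mean(np.array(p) < .05):.2f}")
    ```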

  18. Enhancement of SPES source performances.

    PubMed

    Fagotti, E; Palmieri, A; Ren, X

    2008-02-01

    Installation of the SPES source at LNL was finished in July 2006 and the first beam was extracted in September 2006. Commissioning results confirmed the very good performance of the extracted current density. Conversely, source reliability was very poor due to glow-discharge phenomena, which were caused by the ion source axial magnetic field protruding into the high-voltage column. This problem was fixed by replacing the stainless steel plasma electrode support with a ferromagnetic one. The new configuration required us to recalculate the ion source solenoid positions and fields in order to recover the correct resonance pattern. Details of the magnetic simulations and experimental results of the high-voltage column shielding are presented.

  19. Method for Making Measurements of the Post-Combustion Residence Time in a Gas Turbine Engine

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey H. (Inventor)

    2017-01-01

    A method of measuring a residence time in a gas-turbine engine is disclosed that includes measuring a combustor pressure signal at a combustor entrance and a turbine exit pressure signal at a turbine exit. The method further includes computing a cross-spectrum function between the combustor pressure signal and the turbine exit pressure signal, calculating a slope of the cross-spectrum function, shifting the turbine exit pressure signal an amount corresponding to a time delay between the measurement of the combustor pressure signal and the turbine exit pressure signal, and recalculating the slope of the cross-spectrum function until the slope reaches zero.
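
    A sketch of the idea on synthetic signals (the sampling rate, frequency band, and noise level are assumptions, not values from the patent): the delay between the two pressure signals appears as the slope of the cross-spectrum phase, and shifting one signal by the estimated delay drives that slope towards zero.

    ```python
    # Sketch only: estimate a time delay from the cross-spectrum phase slope,
    # then shift one signal and confirm the residual slope is near zero.
    import numpy as np
    from scipy import signal

    fs = 2000.0                                   # sampling rate, Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(1)
    combustor = rng.normal(size=t.size)           # broadband "combustor pressure"
    delay_samples = 40                            # true delay = 20 ms
    turbine_exit = np.roll(combustor, delay_samples) + 0.3 * rng.normal(size=t.size)

    def phase_slope(x, y):
        f, pxy = signal.csd(x, y, fs=fs, nperseg=1024)
        band = (f > 10) & (f < 400)               # fit only a well-excited band
        return np.polyfit(f[band], np.unwrap(np.angle(pxy[band])), 1)[0]

    slope = phase_slope(combustor, turbine_exit)
    tau = slope / (2 * np.pi)                     # seconds; sign depends on convention
    print(f"estimated delay ~ {abs(tau)*1e3:.1f} ms (true 20 ms)")

    shifted = np.roll(turbine_exit, -int(round(abs(tau) * fs)))
    print(f"residual phase slope after shift: {phase_slope(combustor, shifted):.2e}")
    ```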

  20. One-dimensional velocity model of the Middle Kura Depression from local earthquake data of Azerbaijan

    NASA Astrophysics Data System (ADS)

    Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.

    2011-09-01

    We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of the telemetry network of Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute the corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy in determining the coordinates of the earthquakes. The model was constructed using the VELEST program, which calculates one-dimensional minimum velocity models from the travel times of seismic waves.

  1. The Brackets Design and Stress Analysis of a Refinery's Hot Water Pipeline

    NASA Astrophysics Data System (ADS)

    Zhou, San-Ping; He, Yan-Lin

    2016-05-01

    The reconstruction project, which reroutes the hot water pipeline from a power station to a heat exchange station, requires the new hot water pipeline to be combined with the old pipe racks. The types and locations of the brackets are determined by taking into account the allowable span calculated according to GB50316 and the design philosophy of the pipeline supports. The pipeline stresses are then analyzed in AutoPIPE, the supports at dangerous segments are adjusted, and the analysis is repeated until the types, locations, and numbers of supports are settled, so that the overall pipeline system satisfies the requirements of ASME B31.3.

  2. Sample size determination for equivalence assessment with multiple endpoints.

    PubMed

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. When the objective is to reject all endpoints and the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
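
    For the uncorrelated case mentioned above, a rough normal-approximation sketch (the margins, standard deviations, and a true difference of zero are illustrative assumptions, and this is not the exact power function proposed in the article) shows why requiring TOST to pass on every endpoint needs more subjects than the largest single-endpoint sample size.

    ```python
    # Rough sketch for uncorrelated endpoints: joint power = product of the
    # per-endpoint TOST powers (normal approximation, true difference 0).
    import numpy as np
    from scipy.stats import norm

    alpha = 0.05

    def tost_power(n, sd, margin):
        """Approximate TOST power for one endpoint, two-group parallel design."""
        z = margin / (sd * np.sqrt(2.0 / n)) - norm.ppf(1 - alpha)
        return max(0.0, 2 * norm.cdf(z) - 1)

    endpoints = [(1.0, 0.8), (1.2, 0.7)]        # (sd, margin) per endpoint (assumed)

    def joint_power(n):
        return np.prod([tost_power(n, sd, m) for sd, m in endpoints])

    def smallest_n(power_fn, target=0.8):
        n = 2
        while power_fn(n) < target:
            n += 1
        return n

    n_single = max(smallest_n(lambda n, sd=sd, m=m: tost_power(n, sd, m))
                   for sd, m in endpoints)
    print("largest single-endpoint n per group:", n_single)
    print("n per group for 80% power on all endpoints:", smallest_n(joint_power))
    ```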

  3. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.

  4. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size.

    PubMed

    Heidel, R Eric

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  5. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
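
    A hedged sketch of the kind of adjustment described above (the closed-form relative-efficiency approximation below is taken from the broader unequal-cluster-size literature and is an assumption here, not the paper's PQL-based formula, and the second-order PQL conversion factor is ignored): the efficiency loss implies inflating the number of clusters by roughly 1/RE, which for typical coefficients of variation is of the order of the 14 per cent quoted.

    ```python
    # Assumed approximation: relative efficiency of unequal vs. equal cluster
    # sizes, and the implied inflation of the number of clusters.
    def cluster_inflation(cv, mean_size, icc):
        lam = icc * mean_size / (icc * mean_size + 1 - icc)
        re = 1 - cv**2 * lam * (1 - lam)      # relative efficiency (<= 1)
        return 1 / re                          # multiply number of clusters by this

    for cv in (0.3, 0.5, 0.7):
        print(f"CV={cv:.1f}: inflate clusters by {cluster_inflation(cv, 20, 0.05):.3f}")
    # Worst case lam = 0.5 gives 1 / (1 - cv**2 / 4); for CV ~ 0.7 this is ~14% more.
    ```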

  6. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40 (Protection of Environment), Environmental Protection Agency, Air Programs, Regulation of Fuels and Fuel Additives, Attest Engagements, § 80.127 Sample size guidelines. In performing the...

  7. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
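
    The "simplest approach" summarised above can be sketched as follows; the unequal-cluster-size adjustment shown is one commonly used design effect and is an assumption here, not necessarily the exact formula recommended in the review.

    ```python
    # Minimal sketch of the "inflate by a design effect" approach. The
    # unequal-cluster-size adjustment (replacing m with (1 + CV^2) * m_bar)
    # is a commonly used one and is assumed here.
    from math import ceil
    from scipy.stats import norm

    def n_individual(delta, sd, alpha=0.05, power=0.8):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / delta) ** 2          # per arm, two-sample comparison

    def n_cluster_trial(delta, sd, m_bar, icc, cv=0.0):
        deff = 1 + ((1 + cv**2) * m_bar - 1) * icc
        n_arm = n_individual(delta, sd) * deff
        return ceil(n_arm), ceil(n_arm / m_bar)   # (individuals, clusters) per arm

    print(n_cluster_trial(delta=0.3, sd=1.0, m_bar=20, icc=0.05))           # equal sizes
    print(n_cluster_trial(delta=0.3, sd=1.0, m_bar=20, icc=0.05, cv=0.6))   # unequal sizes
    ```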

  8. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor has it been established what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would have on the power of the trial and whether any inflation of the sample size would be required.

  9. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plants. Four sampling schemes involving different numbers of samples and different size sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  10. Soft X-Ray Observations of a Complete Sample of X-Ray--selected BL Lacertae Objects

    NASA Astrophysics Data System (ADS)

    Perlman, Eric S.; Stocke, John T.; Wang, Q. Daniel; Morris, Simon L.

    1996-01-01

    We present the results of ROSAT PSPC observations of the X-ray-selected BL Lacertae objects (XBLs) in the complete Einstein Extended Medium Sensitivity Survey (EMSS) sample. None of the objects is resolved in its PSPC image, but all are easily detected. All BL Lac objects in this sample are well fitted by single power laws. Their X-ray spectra exhibit a variety of spectral slopes, with best-fit energy power-law spectral indices between α = 0.5 and 2.3. The PSPC spectra of this sample are slightly steeper than those typical of flat radio-spectrum quasars. Because almost all of the individual PSPC spectral indices are equal to or slightly steeper than the overall optical to X-ray spectral indices for these same objects, we infer that BL Lac soft X-ray continua are dominated by steep-spectrum synchrotron radiation from a broad X-ray jet, rather than flat-spectrum inverse Compton radiation linked to the narrower radio/millimeter jet. The softness of the X-ray spectra of these XBLs revives the possibility proposed by Guilbert, Fabian, & McCray (1983) that BL Lac objects are lineless because the circumnuclear gas cannot be heated sufficiently to permit two stable gas phases, the cooler of which would comprise the broad emission-line clouds. Because unified schemes predict that hard self-Compton radiation is beamed only into a small solid angle in BL Lac objects, the steep-spectrum synchrotron tail controls the temperature of the circumnuclear gas at r ≤ 10^18 cm and prevents broad-line cloud formation. We use these new ROSAT data to recalculate the X-ray luminosity function and cosmological evolution of the complete EMSS sample by determining accurate K-corrections for the sample and estimating the effects of variability and the possibility of incompleteness in the sample. Our analysis confirms that XBLs are evolving "negatively," opposite in sense to quasars, with Ve/Va = 0.331±0.060. The statistically significant difference between the values for X-ray and radio-selected BL Lac objects remains a difficulty for models which unify these two types of objects. We have identified one addition to the sample, so that the sample now has 23 objects. We find no evidence for a substantial number of unidentified low-luminosity BL Lac objects hidden in our sample, as had been suggested by Browne & Marcha (1993), although a few such objects may be present.

  11. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranged from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
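
    An illustrative sketch of the sub-sampling design described above, on synthetic data rather than the HCP rsFC features (the feature dimensions, regularisation strengths, and sparse "true" signal are assumptions):

    ```python
    # Sketch only: compare regression algorithms across sub-sample sizes with
    # cross-validated R^2 on simulated high-dimensional features.
    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge, Lasso
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    N, P = 700, 2000                       # subjects x connectivity-like features
    X = rng.normal(size=(N, P))
    beta = np.zeros(P); beta[:50] = 0.2    # sparse "true" signal (assumed)
    y = X @ beta + rng.normal(size=N)

    models = {"OLS": LinearRegression(), "ridge": Ridge(alpha=100.0),
              "LASSO": Lasso(alpha=0.05)}

    for n in (50, 100, 200, 400, 700):
        idx = rng.choice(N, size=n, replace=False)       # one sub-sample per size
        scores = {name: cross_val_score(m, X[idx], y[idx], cv=5, scoring="r2").mean()
                  for name, m in models.items()}
        print(n, {k: round(v, 2) for k, v in scores.items()})
    ```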

  12. Neuromuscular dose-response studies: determining sample size.

    PubMed

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasing greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
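
    The calculation described above can be reproduced approximately with a standard power solver; treating the effect size as the ratio of the allowable error to the COV (25%) is the assumption made in this sketch.

    ```python
    # Sketch of the a priori calculation: one-sample two-tailed t-test, COV = 25%,
    # allowable error in the ED50 of +/-10-20%, 80% power.
    from math import ceil
    from statsmodels.stats.power import TTestPower

    cov = 0.25
    for err in (0.10, 0.12, 0.15, 0.20):
        n = TTestPower().solve_power(effect_size=err / cov, alpha=0.05,
                                     power=0.80, alternative="two-sided")
        print(f"allowable error +/-{int(err*100)}%: n = {ceil(n)}")
    ```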

  13. Sample size considerations for paired experimental design with incomplete observations of continuous outcomes.

    PubMed

    Zhu, Hong; Xu, Xiaohan; Ahn, Chul

    2017-01-01

    Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in literature. However, sample size method for such study design is sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of mixed structure of observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible to accommodate different missing patterns, magnitude of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method would lead to a more accurate sample size estimate comparing with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.

  14. How Large Should a Statistical Sample Be?

    ERIC Educational Resources Information Center

    Menil, Violeta C.; Ye, Ruili

    2012-01-01

    This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…

  15. Size and modal analyses of fines and ultrafines from some Apollo 17 samples

    NASA Technical Reports Server (NTRS)

    Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.

    1975-01-01

    Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.

  16. Sample size, confidence, and contingency judgement.

    PubMed

    Clément, Mélanie; Mercier, Pierre; Pastò, Luigi

    2002-06-01

    According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.

  17. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    PubMed

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    NASA Astrophysics Data System (ADS)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.

  19. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    PubMed

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
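
    A short sketch consistent with the reductions quoted above: under compound symmetry with one baseline and k follow-up measures, take the ANCOVA variance factor relative to a single measurement to be f(rho) = (1 + (k - 1)rho)/k - rho² (a Frison/Pocock-type expression, assumed here rather than quoted from the paper) and maximise it over rho for the conservative choice.

    ```python
    # Conservative (worst-case over rho) ANCOVA variance factor under compound
    # symmetry, relative to a single post-randomization measurement.
    import numpy as np

    for k in (2, 3, 4):
        rho = np.linspace(0, 1, 10001)
        f = (1 + (k - 1) * rho) / k - rho**2
        worst = f.max()
        print(f"k={k}: worst-case variance factor {worst:.3f} "
              f"-> sample size reduction {100*(1-worst):.0f}%")
    ```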

  20. Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco

    PubMed Central

    Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S

    2015-01-01

    Background Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with other men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908

  1. Sample allocation balancing overall representativeness and stratum precision.

    PubMed

    Diaz-Quijano, Fredi Alexander

    2018-05-07

    In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must decide between prioritizing overall representativeness and the precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to stratum population; an equal sample for all strata; and allocation proportional to the natural logarithm, cubic root, and square root of the stratum population. This study considered the fact that, for a preset sample size, the dispersion index of stratum sampling fractions is correlated with the population estimator error, while the dispersion index of stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. Balance and efficiency of the strategies changed depending on overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in order: an equal sample for each stratum; allocation proportional to the logarithm, the cubic root, and the square root of the stratum population; and allocation proportional to the stratum population. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
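
    A quick sketch of the allocation rules compared above, using hypothetical stratum populations and a preset total sample of 1,000:

    ```python
    # Allocation of a fixed total sample under the rules named above.
    import numpy as np

    N = np.array([500_000, 120_000, 60_000, 15_000, 5_000])   # stratum populations (assumed)
    total = 1000

    rules = {
        "proportional": N.astype(float),
        "equal":        np.ones_like(N, dtype=float),
        "log":          np.log(N),
        "cubic root":   N ** (1 / 3),
        "square root":  np.sqrt(N),
    }

    for name, w in rules.items():
        n = np.round(total * w / w.sum()).astype(int)
        print(f"{name:12s} {n}  (sampling fractions: {np.round(n / N, 4)})")
    ```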

  2. Effect of roll hot press temperature on crystallite size of PVDF film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartono, Ambran, E-mail: ambranhartono@yahoo.com; Sanjaya, Edi; Djamal, Mitra

    2014-03-24

    PVDF films have been fabricated using a hot roll press. Samples were prepared at nine different temperatures in order to examine the effect of the hot roll press temperature on the crystallite size of the PVDF films. The diffraction pattern of each sample was obtained by X-ray diffraction, and the crystallite size was then calculated from the diffraction pattern using the Scherrer equation. The calculated crystallite sizes for the samples processed at temperatures from 130 °C up to 170 °C increased from 7.2 nm up to 20.54 nm. These results show that increasing the temperature also increases the crystallite size of the sample; this happens because a higher temperature leads to a higher degree of crystallization of the PVDF film, so that the crystallite size also increases. This indicates that the size of the crystallites depends on the temperature, as has been reported by Nakagawa.
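
    The Scherrer calculation referred to above, D = Kλ/(β cos θ), with assumed inputs (Cu Kα radiation, shape factor K = 0.9, and an illustrative peak position and width rather than the paper's measured values):

    ```python
    # Scherrer crystallite size from an XRD peak; beta is the peak FWHM.
    import numpy as np

    def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
        theta = np.radians(two_theta_deg / 2)
        beta = np.radians(fwhm_deg)                          # FWHM in radians
        return K * wavelength_nm / (beta * np.cos(theta))    # crystallite size, nm

    print(f"D = {scherrer_size(two_theta_deg=20.3, fwhm_deg=0.65):.1f} nm")
    ```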

  3. Assessment of sampling stability in ecological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.

    1988-01-01

    A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met that requirement. However, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported. The authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured.

  4. Relationships between media use, body fatness and physical activity in children and youth: a meta-analysis.

    PubMed

    Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I

    2004-10-01

    To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity. Meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.

  5. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as is expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
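
    A minimal sketch of the re-estimation step only, using the simple (blinded) one-sample variance estimator discussed above rather than the Xing-Ganju estimator, for a single pairwise comparison; the margin, target power, and pilot data are illustrative assumptions. Because the one-sample estimator lumps both groups together, it tends to overestimate the variance, which is the source of the overpowering noted above.

    ```python
    # Blinded sample size re-estimation with the one-sample (lumped) variance
    # estimator, for one pairwise comparison. Not the article's procedure.
    import numpy as np
    from scipy.stats import norm

    def reestimate_n_per_arm(pooled_blinded_data, margin, alpha=0.025, power=0.8):
        s2 = np.var(pooled_blinded_data, ddof=1)     # blinded one-sample variance
        z = norm.ppf(1 - alpha) + norm.ppf(power)
        return int(np.ceil(2 * z**2 * s2 / margin**2))

    rng = np.random.default_rng(7)
    # Blinded internal pilot: both arms pooled without group labels (simulated).
    pilot = np.concatenate([rng.normal(0.0, 1.2, 30), rng.normal(0.4, 1.2, 30)])
    print("re-estimated n per arm:", reestimate_n_per_arm(pilot, margin=0.5))
    ```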

  7. Reduced Sampling Size with Nanopipette for Tapping-Mode Scanning Probe Electrospray Ionization Mass Spectrometry Imaging

    PubMed Central

    Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi

    2016-01-01

    Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing sampling size is an important goal for high resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette. On the other hand, ionization efficiency increased with decreasing solvent flow rate. Our results indicate the compatibility between a reduced sampling area and the ionization efficiency using a nanopipette. MSI of micropatterns of ink on a glass and a polymer substrate were also demonstrated. PMID:28101441

  8. Sampling stratospheric aerosols with impactors

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.

    1989-01-01

    Derivation of statistically significant size distributions from impactor samples of rarefied stratospheric aerosols imposes difficult sampling constraints on collector design. It is shown that it is necessary to design impactors of different size for each range of aerosol size collected so as to obtain acceptable levels of uncertainty with a reasonable amount of data reduction.

  9. Lessons From the Largest Historic Floods Documented by the U.S. Geological Survey

    NASA Astrophysics Data System (ADS)

    Costa, J. E.

    2003-12-01

    A recent controversy over the flood risk downstream from a USGS streamgaging station in southern California that recorded a large debris flow led to the decision to closely examine a sample of the largest floods documented in the US. Twenty-nine floods that define the envelope curve of the largest rainfall-runoff floods were examined in detail, including field visits. These floods have a profound impact on local, regional, and national interpretations of potential peak discharges and flood risk. These 29 floods occurred throughout the US, from the northern Chesapeake Bay in Maryland to Kauai, Hawaii, and over the period 1935-1978. Methods used to compute peak discharges were slope-area (21/29), culvert computations (2/29), measurements lost or not available for study (2/29), bridge contraction, culvert flow, and flow over road (1/29), rating curve extension (1/29), current meter measurement (1/29), and rating curve and current meter measurement (1/29). While field methods and tools have improved significantly over the last 70 years (e.g. total stations, GPS, GIS, hydroacoustics, digital plotters and computer programs like SAC and CAP), the primary methods of hydraulic analysis for indirect measurements of outstanding floods have not changed: today flow is still assumed to be 1-D and gradually varied. Unsteady or multi-dimensional flow models are rarely if ever used to determine peak discharges. Problems identified in this sample of 29 floods include debris flows misidentified as water floods, small drainage areas determined from small-scale maps and mislocated sites, high-water marks set by transient hydraulic phenomena, possibility of disconnected flow surfaces, scour assumptions in sand channels, poor site selection, incorrect approach angle for road overflow, and missing or lost records. Each published flood magnitude was checked by applying modern computer models with original field data, or by re-calculating computations. Four of 29 floods in this sample were found to have errors resulting in a change of the peak discharge of more than 10%.

  10. Long-term application of computer-based pleoptics in home therapy: selected results of a prospective multicenter study.

    PubMed

    Kämpf, Uwe; Shamshinova, Angelika; Kaschtschenko, Tamara; Mascolus, Wilfried; Pillunat, Lutz; Haase, Wolfgang

    2008-01-01

    The paper presents selected results of a prospective multicenter study. The reported study was aimed at the evaluation of a software-based stimulation method of computer training applied in addition to occlusion as a complementary treatment for therapy-resistant cases of amblyopia. The stimulus was a drifting sinusoidal grating of a spatial frequency of 0.3 cyc/deg and a temporal frequency of 1 cyc/sec, reciprocally coordinated with each other to a drift of 0.33 deg/sec. This pattern was implemented as a background stimulus into simple computer games to bind attention by sensory-motor coordination tasks. According to an earlier proposed hypothesis, the stimulation aims at the provocation of stimulus-induced phase-coupling in order to contribute to the refreshment of synchronization and coordination processes in the visual transmission channels. To assess the outcome of the therapy, we studied the development of the visual acuity during a period of 6 months. Our cooperating partners in this prospective multicenter study were strabologic departments in ophthalmic clinics as well as private practices. For the issue of therapy control, a partial sample of 55 patients from an overall sample of 198 patients was selected, according to the criterion of strong therapy resistance. The visual acuity was increased by about two logarithmic steps by occlusion combined with computer training, in addition to the earlier obtained gain of the same amount by occlusion alone. Recalculated relative to the duration of the therapy periods, the computer training combined with occlusion was found to be about twice as effective as the preceding occlusion alone. Combined computer training and occlusion thus produced an additional gain of the same magnitude as the preceding occlusion alone, which by its end had ceased to yield any further improvement in visual acuity in the selected sample of 55 therapy-resistant patients. In a concluding theoretical note, a preliminary hypothesis about the neuronal mechanisms of the stimulus-induced treatment effect is discussed.

  11. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    PubMed

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate of the various methods considered in this work for computing the sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate the confidence to detect out-of-specification units decreases, which must be compensated for by an increase in sample size to enhance the confidence in estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
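
    The success-run relation n = ln(1 - C)/ln(R), evaluated at a 95% confidence level, reproduces the sample sizes quoted above; the confidence level is inferred from those figures and is an assumption here.

    ```python
    # Success-run sample sizes for the three reliability levels quoted above.
    from math import ceil, log

    def success_run_n(reliability, confidence=0.95):
        return ceil(log(1 - confidence) / log(reliability))

    for r in (0.99, 0.95, 0.90):
        print(f"reliability {r:.0%}: n = {success_run_n(r)}")
    # -> 299, 59, 29
    ```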

  12. Rasch fit statistics and sample size considerations for polytomous data.

    PubMed

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-05-29

    Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire - 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges.

  13. Rasch fit statistics and sample size considerations for polytomous data

    PubMed Central

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-01-01

    Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722

  14. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.

    PubMed

    Kristunas, Caroline; Morris, Tom; Gray, Laura

    2017-11-15

    To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRTs) and whether any variability is accounted for during the sample size calculation and analysis of these trials. The settings were any, not limited to healthcare; the participants were any clusters taking part in an SW-CRT published up to March 2016. The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of those that had been assumed. Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration, and methods appropriate to studies with unequal cluster sizes need to be employed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
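
    As a minimal sketch of how the reported variability measure enters sample size work, the snippet below computes the coefficient of variation of a set of hypothetical cluster sizes and plugs it into the common design-effect approximation for parallel cluster randomised trials, 1 + ((CV² + 1)·m̄ − 1)·ICC. The cluster sizes and ICC are invented, and SW-CRTs generally require more elaborate corrections than this parallel-trial formula.

    ```python
    import numpy as np

    cluster_sizes = np.array([12, 30, 45, 18, 60, 25, 90, 40])  # hypothetical clusters
    icc = 0.05                                                  # hypothetical ICC

    m_bar = cluster_sizes.mean()
    cv = cluster_sizes.std(ddof=1) / m_bar   # coefficient of variation in cluster size

    deff_equal = 1 + (m_bar - 1) * icc                     # equal-size design effect
    deff_unequal = 1 + ((cv**2 + 1) * m_bar - 1) * icc     # unequal-size approximation

    print(f"CV of cluster size:            {cv:.2f}")
    print(f"Design effect (equal sizes):   {deff_equal:.2f}")
    print(f"Design effect (unequal sizes): {deff_unequal:.2f}")
    ```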

  15. Hierarchical modeling of cluster size in wildlife surveys

    USGS Publications Warehouse

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
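
    The toy simulation below (not Royle's hierarchical model) illustrates the cluster-size bias described above: when detection probability rises with cluster size, the mean cluster size in the detected sample exceeds the population mean, so naive abundance estimates are biased upward. The size distribution and detection function are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Population of clusters with sizes >= 1.
    true_sizes = 1 + rng.poisson(lam=3.0, size=10_000)

    # Detection probability increases with cluster size (logistic in log size).
    p_detect = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * np.log(true_sizes))))
    detected = rng.random(true_sizes.size) < p_detect

    print(f"Population mean cluster size: {true_sizes.mean():.2f}")
    print(f"Sample mean cluster size:     {true_sizes[detected].mean():.2f}")
    # The detected sample over-represents large clusters, reproducing the
    # positive bias in average cluster size discussed above.
    ```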

  16. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
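
    The sketch below illustrates the expected-net-benefit idea in a stylised form: the regulator applies a frequentist one-sided z-test, so the probability of licensing is the test's power at the assumed effect, and the sponsor chooses the sample size maximising the expected benefit of adoption times that probability minus the trial cost. The effect size, costs and test are invented for illustration and do not reproduce the Grundy/Lindley utility formulation in detail.

    ```python
    import numpy as np
    from scipy.stats import norm

    delta, sigma = 0.3, 1.0        # assumed true effect and outcome SD
    benefit_if_licensed = 1e6      # hypothetical benefit of subsequent use
    cost_per_patient = 500.0       # hypothetical per-patient trial cost
    alpha = 0.025                  # one-sided level used by the regulator

    def expected_net_benefit(n_per_arm):
        # Power of the regulator's two-sample one-sided z-test at effect delta.
        z_alpha = norm.ppf(1 - alpha)
        power = norm.cdf(delta / (sigma * np.sqrt(2.0 / n_per_arm)) - z_alpha)
        return benefit_if_licensed * power - cost_per_patient * 2 * n_per_arm

    candidates = np.arange(10, 1001)
    enb = np.array([expected_net_benefit(n) for n in candidates])
    best = candidates[enb.argmax()]
    print(f"optimal n per arm ~ {best}, expected net benefit ~ {enb.max():,.0f}")
    ```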

  17. Characterization of a new transmission detector for patient individualized online plan verification and its influence on 6MV X-ray beam characteristics.

    PubMed

    Thoelking, Johannes; Sekar, Yuvaraj; Fleckenstein, Jens; Lohr, Frank; Wenz, Frederik; Wertz, Hansjoerg

    2016-09-01

    Online verification and 3D dose reconstruction on daily patient anatomy have the potential to improve treatment delivery, accuracy and safety. One possible implementation is to recalculate dose based on online fluence measurements with a transmission detector (TD) attached to the linac. This study provides a detailed analysis of the influence of a new TD on treatment beam characteristics. The influence of the new TD on surface dose was evaluated by measurements with an Advanced Markus Chamber (Adv-MC) in the build-up region. Based on Monte Carlo simulations, correction factors were determined to scale down the over-response of the Adv-MC close to the surface. To analyze the effects beyond dmax, percentage depth dose (PDD), lateral profile and transmission measurements were performed. All measurements were carried out for various field sizes and different SSDs. Additionally, 5 IMRT plans (head & neck, prostate, thorax) and 2 manually created test cases (3×3 cm² fields with different dose levels, sweeping gap) were measured to investigate the influence of the TD on clinical treatment plans. To investigate the performance of the TD, dose linearity as well as dose rate dependency measurements were performed. With the TD inside the beam an increase in surface dose was observed depending on SSD and field size (maximum of +11%, SSD = 80 cm, field size = 30×30 cm²). Beyond dmax the influence of the TD on PDDs was below 1%. The measurements showed that the transmission factor depends slightly on the field size (0.893-0.921 for 5×5 cm² to 30×30 cm²). However, the evaluation of clinical IMRT plans measured with and without the TD showed good agreement after using a single transmission factor (γ(2%/2mm) > 97%, δ±3% > 95%). Furthermore, the response of the TD was found to be linear and dose rate independent (maximum difference <0.5% compared to reference measurements). When placed in the path of the beam, the TD introduced a slight, clinically acceptable increase of the skin dose even for larger field sizes and smaller SSDs, and the influence of the detector on the dose beyond dmax as well as on clinical IMRT plans was negligible. Since there was no dose rate dependency and the response was linear, the device is suitable for clinical use. Only its absorption has to be compensated during treatment planning, either by the use of a single transmission factor or by including the TD in the incident beam model. Copyright © 2015. Published by Elsevier GmbH.

  18. Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size

    PubMed Central

    Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa

    2016-01-01

    Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy and several randomized control trials (RCT) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of a RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size. PMID:27824913

  19. The Relationship between National-Level Carbon Dioxide Emissions and Population Size: An Assessment of Regional and Temporal Variation, 1960–2005

    PubMed Central

    Jorgenson, Andrew K.; Clark, Brett

    2013-01-01

    This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, and 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region. PMID:23437323
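
    A minimal sketch of the kind of two-way fixed-effects elasticity model described above is given below, fitted on synthetic panel data (all variable names and values are invented): emissions and population enter in logs, and country and year dummies absorb time-invariant national traits and common shocks, so the coefficient on log population is the estimated elasticity.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Synthetic panel: 20 countries x 10 years with a true population elasticity of 0.8.
    rows = []
    for c in [f"c{i}" for i in range(20)]:
        base_pop = rng.uniform(1e6, 1e8)
        for year in range(1960, 2010, 5):
            pop = base_pop * 1.02 ** (year - 1960) * rng.lognormal(0, 0.02)
            co2 = np.exp(0.8 * np.log(pop) + rng.normal(0, 0.1))
            rows.append({"country": c, "year": year, "pop": pop, "co2": co2})
    df = pd.DataFrame(rows)

    # Two-way fixed effects, log-log specification.
    fit = smf.ols("np.log(co2) ~ np.log(pop) + C(country) + C(year)", data=df).fit()
    print(f"estimated population elasticity: {fit.params['np.log(pop)']:.3f}")
    ```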

  20. Influences of sampling size and pattern on the uncertainty of correlation estimation between soil water content and its influencing factors

    NASA Astrophysics Data System (ADS)

    Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua

    2017-12-01

    In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. For each sampling strategy, sample sizes were gradually reduced, and each sampling size contained 3000 replicates. Under each sampling size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and of the correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimates decreased as the sampling size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure that the estimated correlation coefficients had REs and CVs ≤10%. Compared with all sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and the correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure that the estimated correlation coefficients had REs and CVs ≤10%. This suggests that when designing SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. The findings of this study will be useful for optimal SWC sampling design.
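
    The resampling logic described above can be sketched as follows: draw many random subsamples of a given size, then summarise how far the subsample mean SWC and the subsample correlation coefficient fall from their full-hillslope values. The synthetic site data, subsample sizes and replicate count below are stand-ins for the study's measurements, which used 3000 replicates per sampling size.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic hillslope: 100 sites with SWC correlated to a terrain attribute.
    n_sites = 100
    terrain = rng.normal(0, 1, n_sites)
    swc = 0.25 + 0.05 * terrain + rng.normal(0, 0.03, n_sites)

    true_mean = swc.mean()
    true_r = np.corrcoef(swc, terrain)[0, 1]

    for k in (12, 24, 48, 72):                   # candidate sampling sizes
        means, rs = [], []
        for _ in range(1000):                    # replicates per sampling size
            idx = rng.choice(n_sites, size=k, replace=False)
            means.append(swc[idx].mean())
            rs.append(np.corrcoef(swc[idx], terrain[idx])[0, 1])
        re_mean = np.mean(np.abs(np.array(means) - true_mean)) / true_mean * 100
        re_r = np.mean(np.abs(np.array(rs) - true_r)) / abs(true_r) * 100
        print(f"n={k:3d}  RE(mean SWC)={re_mean:4.1f}%  RE(correlation)={re_r:5.1f}%")
    # Correlation estimates typically need far more sites than the mean to reach
    # the same relative error, consistent with the findings above.
    ```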

  1. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled, however this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high-speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852

  2. Pd-catalysts for DFAFC prepared by magnetron sputtering

    NASA Astrophysics Data System (ADS)

    Bieloshapka, I.; Jiricek, P.; Vorokhta, M.; Tomsik, E.; Rednyk, A.; Perekrestov, R.; Jurek, K.; Ukraintsev, E.; Hruska, K.; Romanyuk, O.; Lesiak, B.

    2017-10-01

    Samples of a palladium catalyst for direct formic acid fuel cell (DFAFC) applications were prepared on Elat® carbon cloth by magnetron sputtering. The quantity of Pd was equal to 3.6, 120 and 720 μg/cm². The samples were tested in a fuel cell for electro-oxidation of formic acid, and were characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). The XPS measurements revealed a high contribution of a PdCx phase formed at the Pd/Elat® surface interface, with a carbon concentration in PdCx of x = 9.9-14.6 at.%, resulting from the C substrate and CO residual gases. Oxygen groups, e.g. hydroxyl (-OH), carbonyl (C=O) and carboxyl (COOH), resulted from the synthesis conditions due to the presence of residual gases, electro-oxidation during the reaction and oxidation in the atmosphere. Because of the formation of CO and CO2 on the catalysts during the reaction, or because of poisoning by impurities containing the -CH3 group, together with the risk of Pd losses due to dissolution in formic acid, catalyst degradation had a negative effect on the active surface area. Different Pd loadings led to increased catalyst efficiency. Current-voltage curves showed that different amounts of catalyst did not increase the DFAFC power to a great extent. One reason for this was the catalyst structure formed on the carbon cloth. AFM and SEM measurements showed layer-by-layer growth with no significant variations in morphology. The electric power results, recalculated per 1 mg of Pd in the catalyst layers and compared with carbon substrates decorated by Pd nanoparticles, showed that there is potential for applying anodes prepared by magnetron sputtering in formic acid fuel cells.

  3. Treatment of hyperthyroidism with radioiodine targeted activity: A comparison between two dosimetric methods.

    PubMed

    Amato, Ernesto; Campennì, Alfredo; Leotta, Salvatore; Ruggeri, Rosaria M; Baldari, Sergio

    2016-06-01

    Radioiodine therapy is an effective and safe treatment of hyperthyroidism due to Graves' disease, toxic adenoma and toxic multinodular goiter. We compared the outcomes of a traditional calculation method, based on an analytical fit of the uptake curve and subsequent dose calculation with the MIRD approach, with an alternative computation approach based on a formulation implemented in a public-access website, searching for the best timing of radioiodine uptake measurements in pre-therapeutic dosimetry. We report on sixty-nine hyperthyroid patients who were treated after pre-therapeutic dosimetry calculated by fitting a six-point uptake curve (3-168 h). In order to evaluate the results of the radioiodine treatment, patients were followed up to sixty-four months after treatment (mean 47.4±16.9). Patient dosimetry was then retrospectively recalculated with the two above-mentioned methods. Several time schedules for uptake measurements were considered, with different timings and total numbers of points. Early time schedules, sampling uptake up to 48 h, do not allow an accurate treatment plan to be set up, while schedules including the measurement at one week give significantly better results. The analytical fit procedure applied to the three-point time schedule 3(6)-24-168 h gave results significantly more accurate than the website approach exploiting either the same schedule or the single measurement at 168 h. Consequently, the best strategy among the ones considered is to sample the uptake at 3(6)-24-168 h and carry out an analytical fit of the curve, while extra measurements at 48 and 72 h lead to only marginal improvements in the accuracy of therapeutic activity determination. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
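
    A generic sketch of the "analytical fit of the uptake curve" step is shown below: a two-exponential uptake/clearance model is fitted to fractional uptake measurements and integrated to give the time-integrated activity coefficient that enters a MIRD-type dose calculation. The measurement values, functional form and integration limits are illustrative assumptions, not the authors' clinical protocol; the measured uptake fractions are taken to include physical decay, so the fitted clearance constant is the effective one.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.integrate import quad

    # Hypothetical fractional thyroid uptake measurements at 3, 6, 24, 48, 96, 168 h.
    t_h = np.array([3, 6, 24, 48, 96, 168], dtype=float)
    uptake = np.array([0.28, 0.38, 0.52, 0.48, 0.40, 0.30])

    def uptake_model(t, u0, lam_up, lam_eff):
        """Two-exponential uptake/effective-clearance model (fraction of
        administered activity remaining in the gland at time t)."""
        return u0 * (np.exp(-lam_eff * t) - np.exp(-lam_up * t))

    params, _ = curve_fit(uptake_model, t_h, uptake, p0=(0.6, 0.2, 0.005))

    # Time-integrated activity coefficient (hours per unit administered activity).
    tia_h, _ = quad(lambda t: uptake_model(t, *params), 0, 5000)

    print(f"fitted (u0, lam_up, lam_eff): {params}")
    print(f"time-integrated activity coefficient ~ {tia_h:.0f} h")
    # Multiplying by the administered activity and a mass-scaled thyroid S-value
    # would give the absorbed dose in a MIRD-type calculation.
    ```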

  4. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    PubMed

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
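
    A simplified illustration of the extrapolation idea is sketched below: richness estimates obtained from nested subsets of a clone library are fitted with a saturating curve, the curve's asymptote is read off as the sample size-unbiased richness, and its shape indicates the library size needed to approach it. The numbers and the two-parameter curve are invented for illustration and do not reproduce the authors' ML, rarefaction or ACE-1 workflow.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical richness estimates from nested subsets of a clone library.
    library_size = np.array([500, 1000, 2000, 4000, 8000, 13000], dtype=float)
    richness_est = np.array([4200, 6900, 9800, 12400, 14300, 15200], dtype=float)

    def saturating(n, s_max, b):
        """Two-parameter saturating (Michaelis-Menten-type) curve."""
        return s_max * n / (b + n)

    (s_max, b), _ = curve_fit(saturating, library_size, richness_est, p0=(20000, 5000))
    print(f"asymptotic richness estimate ~ {s_max:.0f}")
    print(f"library size to observe 95% of it ~ {19 * b:.0f} clones")  # n = 19*b solves S(n) = 0.95*s_max
    ```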

  5. Estimation of sample size and testing power (part 5).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference test for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference test for quantitative and qualitative data with the above three designs, the realization based on the formulas and the POWER procedure of SAS software and elaborated it with examples, which will benefit researchers for implementing the repetition principle.

  6. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
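
    For orientation, one of the commonly used approximations for this problem (often attributed to Hsieh, for a single standard-normal covariate) is sketched below; treat the attribution and the exact form as an assumption here, and note that it is not the Schouten-based modification the authors propose.

    ```python
    import math
    from scipy.stats import norm

    def lr_sample_size(p1, odds_ratio_per_sd, alpha=0.05, power=0.80):
        """Approximate n for simple logistic regression with one N(0,1) covariate.
        p1: event probability at the covariate mean.
        odds_ratio_per_sd: odds ratio for a one-SD increase in the covariate."""
        beta_star = math.log(odds_ratio_per_sd)               # log-odds per SD
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        return math.ceil((z_a + z_b) ** 2 / (p1 * (1 - p1) * beta_star ** 2))

    print(lr_sample_size(p1=0.2, odds_ratio_per_sd=1.5))      # ~299 for these inputs
    ```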

  7. Mass spectra features of biomass burning boiler and coal burning boiler emitted particles by single particle aerosol mass spectrometer.

    PubMed

    Xu, Jiao; Li, Mei; Shi, Guoliang; Wang, Haiting; Ma, Xian; Wu, Jianhui; Shi, Xurong; Feng, Yinchang

    2017-11-15

    In this study, the single particle mass spectra signatures of particles emitted from a coal burning boiler and a biomass burning boiler were studied. Particle samples were suspended in a clean resuspension chamber and analyzed by ELPI and SPAMS simultaneously. The size distributions of BBB (biomass burning boiler sample) and CBB (coal burning boiler sample) differ: BBB peaks at a smaller size and CBB at a larger size. Mass spectra signatures of the two samples were studied by analyzing the average mass spectrum of each particle cluster extracted by ART-2a in different size ranges. In conclusion, the BBB sample mostly consists of OC and EC containing particles, with a small fraction of K-rich particles, in the size range of 0.2-0.5 μm. In 0.5-1.0 μm, the BBB sample consists of EC, OC, K-rich and Al_Silicate containing particles, while the CBB sample consists of EC and ECOC containing particles, with Al_Silicate (including Al_Ca_Ti_Silicate, Al_Ti_Silicate, Al_Silicate) containing particles accounting for higher fractions as size increases. The similarity of the single particle mass spectrum signatures between the two samples was studied by analyzing the dot product; the results indicated that part of the single particle mass spectra of the two samples in the same size range are similar, which poses a challenge for future source apportionment using single particle aerosol mass spectrometry. The results of this study provide physicochemical information on important sources which contribute to particle pollution and will support source apportionment activities. Copyright © 2017. Published by Elsevier B.V.

  8. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    NASA Astrophysics Data System (ADS)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and how this in turn is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples at four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. The chemical composition and the induced toxicological responses of the size segregated PM samples showed considerable size dependent differences as well as day to night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation; this was reflected in differing toxicity of the PM samples. Some of the day to night difference may also have been caused by differing wind directions transporting air masses from different emission sources during the day and the night. The present findings indicate the important role of local particle sources and atmospheric processes in the health related toxicological properties of the PM. The varying toxicological responses evoked by the PM samples show the importance of examining various particle sizes. In particular, the considerable toxicological activity detected in the PM0.2 size range suggests contributions from combustion sources, new particle formation and atmospheric processes.

  9. An internal pilot design for prospective cancer screening trials with unknown disease prevalence.

    PubMed

    Brinton, John T; Ringham, Brandy M; Glueck, Deborah H

    2015-10-13

    For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
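
    The internal pilot logic can be sketched generically as follows: accrue a fraction of the planned sample, re-estimate the nuisance parameters (here disease prevalence and outcome variance), and recompute the target sample size with the originally specified effect size and error rates. The formula below is a standard two-group comparison inflated by 1/prevalence for the number of subjects that must be screened; it is a simplified stand-in for the authors' paired screening-test methodology, and all numbers are invented.

    ```python
    import math
    from scipy.stats import norm

    def recalculated_n(interim_prevalence, interim_sd, delta, alpha=0.05, power=0.90):
        """Per-group n for a two-group comparison of a continuous accuracy measure,
        recomputed from interim (internal pilot) nuisance-parameter estimates.
        Only diseased subjects contribute, so the screened total per group is
        inflated by 1 / prevalence."""
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        n_diseased = 2 * (interim_sd * (z_a + z_b) / delta) ** 2
        return math.ceil(n_diseased), math.ceil(n_diseased / interim_prevalence)

    # Design-stage guesses were prevalence 0.10 and SD 1.0; interim estimates are worse.
    print(recalculated_n(interim_prevalence=0.06, interim_sd=1.2, delta=0.5))
    ```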

  10. Comparing REACH Chemical Safety Assessment information with practice-a case-study of polymethylmethacrylate (PMMA) in floor coating in The Netherlands.

    PubMed

    Spee, Ton; Huizer, Daan

    2017-10-01

    On June 1st, 2007 the European regulation on Registration, Evaluation and Restriction of Chemical substances (REACH) came into force. The aim of the regulation is the safe use of chemicals for humans and for the environment. The core element of REACH is the chemical safety assessment of chemicals and the communication of health and safety hazards and risk management measures throughout the supply chain. Extended Safety Data Sheets (Ext-SDS) are the primary carriers of health and safety information. The aim of our project was to find out whether the actual exposure to methyl methacrylate (MMA) during the application of polymethylmethacrylate (PMMA) in floor coatings, as assessed in the chemical safety assessment, reflects the exposure situations observed in Dutch building practice. The use of PMMA flooring and typical exposure situations during application were discussed with twelve representatives of floor laying companies. Representative situations for exposure measurements were designated on the basis of this inventory. Exposure to MMA was measured in the breathing zone of the workers at four construction sites; 14 full-shift samples and 14 task-based samples were taken by personal air sampling. The task-based samples were compared with estimates from the Targeted Risk Assessment Tool (v3.1) of the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC-TRA) as supplied in the safety assessment from the manufacturer. For task-based measurements, in 12 out of 14 (86%) air samples the measured exposure was higher than the estimated exposure. Recalculation with a lower ventilation rate (50% instead of 80%) together with a higher temperature during mixing (40°C instead of 20°C) in comparison with the CSR reduced the number of underestimated exposures to 10 (71%) samples. Estimation with the EMKG-EXPO-Tool resulted in unsafe exposure situations for all scenarios, which is in accordance with the measurement outcomes. In indoor situations, 5 out of 8 full-shift exposures (62%) to MMA were higher than the Dutch occupational exposure limit of 205 mg/m³ (8 h TWA), which equals the DNEL. For semi-enclosed situations this was 1 out of 6 (17%). Exposures varied from 31 to 367 mg/m³. The results emphasize that ECETOC-TRA exposure estimates in poorly controlled situations need better underpinning. Copyright © 2017 Elsevier GmbH. All rights reserved.

  11. Damage Accumulation in Silica Glass Nanofibers.

    PubMed

    Bonfanti, Silvia; Ferrero, Ezequiel E; Sellerio, Alessandro L; Guerra, Roberto; Zapperi, Stefano

    2018-06-06

    The origin of the brittle-to-ductile transition, experimentally observed in amorphous silica nanofibers as the sample size is reduced, is still debated. Here we investigate the issue by extensive molecular dynamics simulations at low and room temperatures for a broad range of sample sizes, with open and periodic boundary conditions. Our results show that the enhanced ductility at small sample sizes is primarily due to diffuse damage accumulation, which for larger samples instead leads to brittle catastrophic failure. Surface effects such as boundary fluidization contribute to ductility at room temperature by promoting necking, but are not the main driver of the transition. Our results suggest that the experimentally observed size-induced ductility of silica nanofibers is a manifestation of finite-size criticality, as expected in general for quasi-brittle disordered networks.

  12. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  13. Using the Student's "t"-Test with Extremely Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.

    2013-01-01

    Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…

  14. 40 CFR 1042.310 - Engine selection for Category 1 and Category 2 engines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Category 2 engines. (a) Determine minimum sample sizes as follows: (1) For Category 1 engines, the minimum sample size is one engine or one percent of the projected U.S.-directed production volume for all your Category 1 engine families, whichever is greater. (2) For Category 2 engines, the minimum sample size is...

  15. On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1984-01-01

    Two stage multiple-comparison procedures give an exact solution to problems of power and Type I errors, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)

  16. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

    ERIC Educational Resources Information Center

    Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.

    2013-01-01

    Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…

  17. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  18. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
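
    A heavily simplified stand-in for the decision-theoretic treatment described above is sketched below: under Beta-Bernoulli assumptions, Monte Carlo simulation estimates the expected payoff of choosing the option with the higher observed mean after n draws from each option, and a per-draw cost then locates an "optimal" n. All numerical settings are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def expected_net_payoff(n_per_option, cost_per_draw=0.002, n_sim=20_000):
        """Expected final-trial payoff of picking the option with more observed
        successes after n draws from each of two Bernoulli options with uniform
        priors, net of sampling costs."""
        p = rng.uniform(0, 1, size=(n_sim, 2))               # true payoff probabilities
        wins = rng.binomial(n_per_option, p)                 # observed successes
        tie_break = rng.uniform(0, 1e-9, wins.shape)
        choice = np.argmax(wins + tie_break, axis=1)
        payoff = p[np.arange(n_sim), choice]
        return payoff.mean() - cost_per_draw * 2 * n_per_option

    for n in (1, 2, 5, 10, 20, 40, 80):
        print(n, round(expected_net_payoff(n), 4))
    # Expected payoff rises steeply over the first few draws and then flattens,
    # so even a modest per-draw cost places the optimum at a small sample size.
    ```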

  19. A Typology of Mixed Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.

    2007-01-01

    This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…

  20. Sample size of the reference sample in a case-augmented study.

    PubMed

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Influence of sampling window size and orientation on parafoveal cone packing density

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Ducoli, Pietro; Lombardo, Giuseppe

    2013-01-01

    We assessed the agreement among packing density estimates obtained with sampling windows of different size and orientation in images of the parafoveal cone mosaic acquired using a flood-illumination adaptive optics retinal camera. Horizontally and vertically oriented sampling windows of different sizes (320×160 µm, 160×80 µm and 80×40 µm) were selected at two retinal locations along the horizontal meridian in one eye of ten subjects. At each location, cone density tended to decline with decreasing sampling area. Although the differences in cone density estimates were not statistically significant, Bland-Altman plots showed that the agreement between cone densities estimated with the different sampling window conditions was moderate. The percentage of preferred packing arrangements of cones by Voronoi tiles was slightly affected by window size and orientation. The results illustrate the importance of specifying the size and orientation of the sampling window used to derive cone metric estimates, to facilitate comparison of different studies. PMID:24009995

  2. Simulation analyses of space use: Home range estimates, variability, and sample size

    USGS Publications Warehouse

    Bekoff, Marc; Mech, L. David

    1984-01-01

    Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should ascertain between 100 and 200 locations in order to estimate reliably home range area. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also have to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.

  3. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
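
    To make the role of selection concrete, the simulation below estimates the type I error of a naive final z-test in a two-stage drop-the-losers design under the global null: the best of several experimental arms is carried forward and its pooled data are tested against the control without any adjustment. The inflation above the nominal level is why Dunnett-type adjusted critical values are used in such designs; all design parameters here are invented.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    def naive_type1_error(k_arms=4, n1=50, n2=100, alpha=0.025, n_sim=50_000):
        """Two-stage drop-the-losers under the global null (all true effects zero):
        stage 1 runs k experimental arms plus control with n1 per arm, the best
        arm and the control get n2 more each in stage 2, and an unadjusted
        one-sided z-test at level alpha is applied to the pooled data."""
        rejections = 0
        crit = norm.ppf(1 - alpha)
        for _ in range(n_sim):
            stage1 = rng.normal(0, 1, size=(k_arms + 1, n1))    # row 0 = control
            best = 1 + np.argmax(stage1[1:].mean(axis=1))       # selected arm
            stage2 = rng.normal(0, 1, size=(2, n2))             # control, selected arm
            ctrl = np.concatenate([stage1[0], stage2[0]])
            trt = np.concatenate([stage1[best], stage2[1]])
            z = (trt.mean() - ctrl.mean()) / np.sqrt(1 / trt.size + 1 / ctrl.size)
            rejections += z > crit
        return rejections / n_sim

    print(naive_type1_error())   # noticeably above the nominal 0.025
    ```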

  4. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    PubMed

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. The outcomes were the rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity levels of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity is not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude whether the optimal sample size may need to be adjusted based on hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
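
    The headline comparison above is a rate ratio of adverse events per patient-day with a normal-approximation confidence interval on the log scale; a minimal sketch of that calculation is given below with hypothetical event counts and patient-days (not the trust's data).

    ```python
    import math

    def rate_ratio_ci(events_a, days_a, events_b, days_b, z=1.96):
        """Rate ratio of sample A vs sample B (events per patient-day) with a
        normal-approximation CI on the log scale."""
        rr = (events_a / days_a) / (events_b / days_b)
        se_log = math.sqrt(1 / events_a + 1 / events_b)
        return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

    # Hypothetical counts for a large and a small bi-weekly review sample.
    rr, lo, hi = rate_ratio_ci(events_a=310, days_a=7900, events_b=75, days_b=2760)
    print(f"rate ratio {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
    ```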

  5. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed for sample statistics to closely estimate particular population parameters has long been an issue. Although the sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters of a particular population. Meanwhile, the guideline of a p-value less than 0.05 is widely used as inferential evidence. Therefore, this study audited results analyzed from various subsamples and statistical analyses and compared them with the parameters of three different populations. Eight types of statistical analysis and eight subsamples for each statistical analysis were analyzed. The statistics were found to be consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters of a medium-sized population.

  6. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Treesearch

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  7. KEY COMPARISON Update of the BIPM.RI(II)-K1.Tc-99m comparison of activity measurements for the radionuclide 99mTc to include new results for the LNE-LNHB and the NPL

    NASA Astrophysics Data System (ADS)

    Michotte, C.; Courte, S.; Ratel, G.; Moune, M.; Johansson, L.; Keightley, J.

    2010-01-01

    In 2007 and 2008 respectively, the Laboratoire national de métrologie et d'essais-Laboratoire national Henri Becquerel (LNE-LNHB), France and the National Physical Laboratory (NPL), UK, submitted ampoules with between 10 MBq and 130 MBq activity of 99mTc to the International Reference System (SIR), to update their results in the BIPM.RI(II)-K1.Tc-99m comparison. Together with the four other national metrology institutes (NMI) that are participants, thirteen samples have been submitted since 1983. The key comparison reference value (KCRV) has been recalculated to include the latest primary results of the PTB and the LNE-LNHB as this makes the evaluation more robust. The degrees of equivalence between each equivalent activity measured in the SIR are given in the form of a matrix for all six NMIs. A graphical presentation is also given.

  8. Monogenetic origin of Ubehebe Crater maar volcano, Death Valley, California: Paleomagnetic and stratigraphic evidence

    NASA Astrophysics Data System (ADS)

    Champion, Duane E.; Cyr, Andy; Fierstein, Judy; Hildreth, Wes

    2018-04-01

    Paleomagnetic data for samples collected from outcrops of basaltic spatter at the Ubehebe Crater cluster, Death Valley National Park, California, record a single direction of remanent magnetization indicating that these materials were emplaced during a short duration, monogenetic eruption sequence 2100 years ago. This conclusion is supported by geochemical data encompassing a narrow range of oxide variation, by detailed stratigraphic studies of conformable phreatomagmatic tephra deposits showing no evidence of erosion between layers, by draping of sharp rimmed craters by later tephra falls, and by oxidation of later tephra layers by the remaining heat of earlier spatter. This model is also supported through a reinterpretation and recalculation of the published 10Be age results (Sasnett et al., 2012) from an innovative and bold exposure-age study on very young materials. Their conclusion of multiple and protracted eruptions at Ubehebe Crater cluster is here modified through the understanding that some of their quartz-bearing clasts inherited 10Be from previous exposure on the fan surface (too old), and that other clasts were only exposed at the surface by wind and/or water erosion centuries after their eruption (too young). Ubehebe Crater cluster is a well preserved example of young monogenetic maar type volcanism protected within a National Park, and it represents neither a protracted eruption sequence as previously thought, nor a continuing volcanic hazard near its location.

  9. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination ([Formula: see text]) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for [Formula: see text] for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  10. Suitability of river delta sediment as proppant, Missouri and Niobrara Rivers, Nebraska and South Dakota, 2015

    USGS Publications Warehouse

    Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine

    2017-11-16

    Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, potential use of the sediment for socioeconomic or ecological benefit could potentially defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Results from surface-geophysical surveys of electrical resistivity guided borings to collect 3.7-meter long cores at 25 sites on delta sandbars using the direct-push method to recover duplicate, 3.8-centimeter-diameter cores in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley.At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance.For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class. 
Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted.Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent.For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa. 
Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class.Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.
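    A quick check of the pass/fail logic described above reduces to a simple threshold comparison. The sketch below is illustrative only and is not the USGS workflow: the function name and sample values are hypothetical, and `postcrush_fines_pct` is assumed to already express the percentage of the tested subsample finer than the precrush dominant-size class.

    ```python
    # Minimal sketch of the crush-resistance criterion described above (hypothetical values).

    def meets_crush_criterion(postcrush_fines_pct: float, max_fines_pct: float = 10.0) -> bool:
        """True if postcrush fines do not exceed the allowed percentage at the tested stress."""
        return postcrush_fines_pct <= max_fines_pct

    # Hypothetical samples, loosely patterned on the ranges reported above.
    samples = {"delta_bar_A1": 5.9, "delta_bar_B3": 12.4, "reach_B_bar2": 3.0}
    for name, fines in samples.items():
        print(name,
              "passes 10% criterion:", meets_crush_criterion(fines),
              "| passes 8% criterion:", meets_crush_criterion(fines, 8.0))
    ```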

  11. Sampling the structure and chemical order in assemblies of ferromagnetic nanoparticles by nuclear magnetic resonance

    PubMed Central

    Liu, Yuefeng; Luo, Jingjie; Shin, Yooleemi; Moldovan, Simona; Ersen, Ovidiu; Hébraud, Anne; Schlatter, Guy; Pham-Huu, Cuong; Meny, Christian

    2016-01-01

    Assemblies of nanoparticles are studied in many research fields from physics to medicine. However, as it is often difficult to produce mono-dispersed particles, investigating the key parameters enhancing their efficiency is blurred by wide size distributions. Indeed, near-field methods analyse a part of the sample that might not be representative of the full size distribution and macroscopic methods give average information including all particle sizes. Here, we introduce temperature differential ferromagnetic nuclear resonance spectra that allow sampling the crystallographic structure, the chemical composition and the chemical order of non-interacting ferromagnetic nanoparticles for specific size ranges within their size distribution. The method is applied to cobalt nanoparticles for catalysis and allows extracting the size effect from the crystallographic structure effect on their catalytic activity. It also allows sampling of the chemical composition and chemical order within the size distribution of alloyed nanoparticles and can thus be useful in many research fields. PMID:27156575

  12. Effect of Mechanical Impact Energy on the Sorption and Diffusion of Moisture in Reinforced Polymer Composite Samples on Variation of Their Sizes

    NASA Astrophysics Data System (ADS)

    Startsev, V. O.; Il'ichev, A. V.

    2018-05-01

    The effect of mechanical impact energy on the sorption and diffusion of moisture in polymer composite samples of varying size was investigated. Square samples with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples were analyzed in relation to their sizes and impact energy. It is shown that the moisture saturation and moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, accounting for impact energy and sample size.
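    As a rough illustration of the kind of Fickian model invoked here, the sketch below evaluates the standard plane-sheet series solution of Fick's second law for relative moisture uptake. The diffusion coefficient, sheet thickness, and function name are assumptions for illustration, not the authors' fitted parameters.

    ```python
    import numpy as np

    def fickian_uptake(t_hours, D_mm2_per_h, thickness_mm, n_terms=50):
        """Relative uptake M(t)/M_inf for a plane sheet of thickness L exposed on both faces:
        M(t)/M_inf = 1 - sum_n 8/((2n+1)^2 pi^2) * exp(-(2n+1)^2 pi^2 D t / L^2)."""
        L = thickness_mm
        t = np.asarray(t_hours, dtype=float)
        total = np.zeros_like(t)
        for n in range(n_terms):
            k = 2 * n + 1
            total += (8.0 / (np.pi ** 2 * k ** 2)) * np.exp(-(k * np.pi / L) ** 2 * D_mm2_per_h * t)
        return 1.0 - total

    # Hypothetical values, for illustration only.
    print(fickian_uptake([1, 10, 100, 1000], D_mm2_per_h=1e-3, thickness_mm=2.0))
    ```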

  13. Particle size analysis of sediments, soils and related particulate materials for forensic purposes using laser granulometry.

    PubMed

    Pye, Kenneth; Blott, Simon J

    2004-08-11

    Particle size is a fundamental property of any sediment, soil or dust deposit that can provide important clues to its nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with a high resolution. The Coulter™ LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04-2000 µm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work, laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.

  14. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.

    PubMed

    Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra

    2016-11-20

    The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
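    The basic inflation step can be illustrated with a minimal sketch: compute a per-arm sample size for an individually randomised comparison of means, then multiply by the familiar design effect 1 + (m - 1)ρ. This covers only the simplest special case; the formulae derived in the paper additionally involve the cluster and individual autocorrelations, which this sketch does not attempt to reproduce.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_individual(delta, sd, alpha=0.05, power=0.80):
        """Per-arm n for a two-sample comparison of means (normal approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / delta) ** 2

    def n_cluster_randomised(delta, sd, cluster_size, icc, alpha=0.05, power=0.80):
        """Inflate the individually randomised per-arm n by the design effect 1 + (m - 1)*ICC."""
        design_effect = 1 + (cluster_size - 1) * icc
        return ceil(n_individual(delta, sd, alpha, power) * design_effect)

    # Hypothetical inputs: detect 0.5 SD with 20 participants per cluster and ICC = 0.05.
    print(n_cluster_randomised(delta=0.5, sd=1.0, cluster_size=20, icc=0.05))
    ```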

  15. Large sample area and size are needed for forest soil seed bank studies to ensure low discrepancy with standing vegetation.

    PubMed

    Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin

    2014-01-01

    A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples that were 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a large number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples that were 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrub land, and grassland. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3%, and 20.5% of woody plant species detected by the SNLS in the secondary forest, shrub land, and grassland, respectively. The increase in the number of species with sampled area followed power-law relationships for the forest stand, the LNSS, and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that the conventional LNSS did not yield a high percentage of detection for woody species, but the SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m² minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. An increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.
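    A species-area power law of the kind reported here can be fitted by least squares on log-transformed values; the counts below are entirely hypothetical, not the study's data.

    ```python
    import numpy as np

    # Hypothetical species-area data (area in m^2, cumulative species detected).
    area = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
    species = np.array([3, 5, 8, 12, 18, 26])

    # Fit S = c * A**z by ordinary least squares on the log-log scale.
    z, log_c = np.polyfit(np.log(area), np.log(species), 1)
    c = np.exp(log_c)
    print(f"S ~= {c:.2f} * A^{z:.2f}")

    # Area needed to expect, say, 30 species under the fitted curve.
    print("area for 30 species ~=", round((30 / c) ** (1 / z), 2), "m^2")
    ```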

  16. Single and simultaneous binary mergers in Wright-Fisher genealogies.

    PubMed

    Melfi, Andrew; Viswanath, Divakar

    2018-05-01

    The Kingman coalescent is a commonly used model in genetics, which is often justified with reference to the Wright-Fisher (WF) model. Current proofs of convergence of WF and other models to the Kingman coalescent assume a constant sample size. However, sample sizes have become quite large in human genetics. Therefore, we develop a convergence theory that allows the sample size to increase with population size. If the haploid population size is N and the sample size is N^(1/3-ε), ε > 0, we prove that Wright-Fisher genealogies involve at most a single binary merger in each generation with probability converging to 1 in the limit of large N. Single binary merger or no merger in each generation of the genealogy implies that the Kingman partition distribution is obtained exactly. If the sample size is N^(1/2-ε), Wright-Fisher genealogies may involve simultaneous binary mergers in a single generation but do not involve triple mergers in the large N limit. The asymptotic theory is verified using numerical calculations. Variable population sizes are handled algorithmically. It is found that even distant bottlenecks can increase the probability of triple mergers as well as simultaneous binary mergers in WF genealogies. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.

    ERIC Educational Resources Information Center

    Parshall, Cynthia G.; Kromrey, Jeffrey D.

    1996-01-01

    Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
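    For a single small 2×2 table, the four tests compared in the study can be run side by side with scipy. The table below is hypothetical, and this sketch is not the simulation design the authors used to estimate power and Type I error rates.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency, fisher_exact

    table = np.array([[3, 7],
                      [9, 2]])   # hypothetical small 2x2 contingency table

    pearson_p = chi2_contingency(table, correction=False)[1]                       # Pearson chi-square
    yates_p = chi2_contingency(table, correction=True)[1]                          # Yates-corrected chi-square
    lr_p = chi2_contingency(table, correction=False, lambda_="log-likelihood")[1]  # likelihood ratio (G) test
    fisher_p = fisher_exact(table)[1]                                              # Fisher's Exact Test

    print("Pearson:", round(pearson_p, 4), "Yates:", round(yates_p, 4),
          "Likelihood ratio:", round(lr_p, 4), "Fisher exact:", round(fisher_p, 4))
    ```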

  18. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 4 2013-10-01 2013-10-01 false Calculating Sample Size for NYTD Follow-Up Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... REQUIREMENTS APPLICABLE TO TITLE IV-E Pt. 1356, App. C Appendix C to Part 1356—Calculating Sample Size for NYTD...

  19. Sample size considerations when groups are the appropriate unit of analyses

    PubMed Central

    Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith

    2007-01-01

    This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219
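    The core adjustment when groups are the unit of randomization can be sketched as deflating the nominal number of participants by the design effect to obtain an effective sample size. The numbers below are hypothetical, not the program data analysed in the paper.

    ```python
    def effective_sample_size(n_total, avg_group_size, icc):
        """Effective number of independent observations under group randomization:
        n_eff = n_total / (1 + (m - 1) * ICC)."""
        return n_total / (1 + (avg_group_size - 1) * icc)

    # Hypothetical example: 40 groups of 25 participants with an intraclass correlation of 0.03.
    print(round(effective_sample_size(40 * 25, 25, 0.03)))   # roughly 581 "effective" participants, not 1,000
    ```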

  20. Minimizing the Maximum Expected Sample Size in Two-Stage Phase II Clinical Trials with Continuous Outcomes

    PubMed Central

    Wason, James M. S.; Mander, Adrian P.

    2012-01-01

    Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118

  1. Estimating population size with correlated sampling unit estimates

    Treesearch

    David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey

    2003-01-01

    Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...

  2. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
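    Under a normal model the probability in question has a closed form: P(|x̄ − μ| ≤ kσ) = 2Φ(k√n) − 1 for a sample of size n. The sketch below also shows a t-based analogue for when σ is replaced by the sample standard deviation; it is a simple approximation in the spirit of the paper, not the authors' exact procedure.

    ```python
    from math import sqrt
    from scipy.stats import norm, t

    def prob_within_k_sigma(n, k, use_t=True):
        """P(|sample mean - true mean| <= k * sigma) for a sample of size n.
        With sigma known this is 2*Phi(k*sqrt(n)) - 1; the t version is a rough
        analogue for when sigma is estimated from the sample (an assumption here)."""
        q = k * sqrt(n)
        if use_t and n > 1:
            return 2 * t.cdf(q, df=n - 1) - 1
        return 2 * norm.cdf(q) - 1

    for n in (3, 5, 10):
        print(n, round(prob_within_k_sigma(n, k=0.5), 3))
    ```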

  3. Advanced mathematical on-line analysis in nuclear experiments. Usage of parallel computing CUDA routines in standard root analysis

    NASA Astrophysics Data System (ADS)

    Grzeszczuk, A.; Kowalski, S.

    2015-04-01

    Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by Nvidia to accelerate graphics processing through massively parallel calculation. The success of this approach has opened General-Purpose Graphics Processing Unit (GPGPU) technology to applications beyond graphics. GPGPU systems can serve as an effective tool for reducing the very large data volumes produced by pulse-shape-analysis measurements, either by on-line recalculation or by very fast compression. Our poster contribution presents the simplified structure of the CUDA system and its programming model, illustrated on an Nvidia GeForce GTX 580 card, both in a stand-alone version and as a ROOT application.

  4. GNAP (Graphic Normative Analysis Program)

    USGS Publications Warehouse

    Bowen, Roger W.; Odell, John

    1979-01-01

    A user-oriented command language is developed to provide direct control over the computation and output of the standard CIPW norm. A user-supplied input format for the oxide values may be given or a standard CIPW Rock Analysis format may be used. Once the oxide values have been read by the computer, these values may be manipulated by the user and the 'norm' recalculated on the basis of the manipulated or 'adjusted' values. Additional output capabilities include tabular listing of computed values, summary listings suitable for publication, x-y plots, and ternary diagrams. As many as 20 rock analysis cards may be processed as a group. Any number of such groups may be processed in any one computer run.

  5. DART, a platform for the creation and registration of cone beam digital tomosynthesis datasets.

    PubMed

    Sarkar, Vikren; Shi, Chengyu; Papanikolaou, Niko

    2011-04-01

    Digital tomosynthesis is an imaging modality that allows for tomographic reconstructions using only a fraction of the images needed for CT reconstruction. Since it offers the advantages of tomographic images with a smaller imaging dose delivered to the patient, the technique offers much promise for use in patient positioning prior to radiation delivery. This paper describes a software environment developed to help in the creation of digital tomosynthesis image sets from digital portal images using three different reconstruction algorithms. The software then allows for use of the tomograms for patient positioning or for dose recalculation if shifts are not applied, possibly as part of an adaptive radiotherapy regimen.

  6. The method of the gas-dynamic centrifugal compressor stage characteristics recalculation for variable rotor rotational speeds and the rotation angle of inlet guide vanes blades if the kinematic and dynamic similitude conditions are not met

    NASA Astrophysics Data System (ADS)

    Vanyashov, A. D.; Karabanova, V. V.

    2017-08-01

    A mathematical description of the method for obtaining gas-dynamic characteristics of a centrifugal compressor stage is proposed, taking into account the control action by varying the rotor speed and the angle of rotation of the guide vanes relative to the "basic" characteristic, if the kinematic and dynamic similitude conditions are not met. The formulas of the correction terms for the non-dimensional coefficients of specific work, consumption and efficiency are obtained. A comparative analysis of the calculated gas-dynamic characteristics of a high-pressure centrifugal stage with experimental data is performed.

  7. Re-evaluation of the correction factors for the GROVEX

    NASA Astrophysics Data System (ADS)

    Ketelhut, Steffen; Meier, Markus

    2018-04-01

    The GROVEX (GROssVolumige EXtrapolationskammer, large-volume extrapolation chamber) is the primary standard for the dosimetry of low-dose-rate interstitial brachytherapy at the Physikalisch-Technische Bundesanstalt (PTB). In the course of setup modifications and re-measuring of several dimensions, the correction factors have been re-evaluated in this work. The correction factors for scatter and attenuation have been recalculated using the Monte Carlo software package EGSnrc, and a new expression has been found for the divergence correction. The obtained results decrease the measured reference air kerma rate by approximately 0.9% for the representative example of a seed of type Bebig I25.S16C. This lies within the expanded uncertainty (k  =  2).

  8. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  9. Influence of pore size distributions on decomposition of maize leaf residue: evidence from X-ray computed micro-tomography

    NASA Astrophysics Data System (ADS)

    Negassa, Wakene; Guber, Andrey; Kravchenko, Alexandra; Rivers, Mark

    2014-05-01

    Soil's potential to sequester carbon (C) depends not only on quality and quantity of organic inputs to soil but also on the residence time of the applied organic inputs within the soil. Soil pore structure is one of the main factors that influence residence time of soil organic matter by controlling gas exchange, soil moisture and microbial activities, and thereby soil C sequestration capacity. Previous attempts to investigate the fate of organic inputs added to soil did not allow examining their decomposition in situ; a drawback that can now be remedied by application of X-ray computed micro-tomography (µ-CT). The non-destructive and non-invasive nature of µ-CT gives an opportunity to investigate the effect of soil pore size distributions on decomposition of plant residues at a new quantitative level. The objective of this study is to examine the influence of pore size distributions on the decomposition of plant residue added to soil. Samples with contrasting pore size distributions were created using aggregate fractions of five different sizes (<0.05, 0.05-0.1, 0.1-0.5, 0.5-1.0, and 1.0-2.0 mm). Weighted average pore diameters ranged from 10 µm (<0.05 mm fraction) to 104 µm (1-2 mm fraction), while maximum pore diameters ranged from 29 µm (<0.05 mm fraction) to 568 µm (1-2 mm fraction) in the created soil samples. Dried pieces of maize leaves (2.5 mg, equivalent to 1.71 mg C g-1 soil) were added to half of the studied samples. Samples with and without maize leaves were incubated for 120 days. CO2 emission from the samples was measured at regular time intervals. In order to ensure that the observed differences are due to differences in pore structure and not due to differences in inherent properties of the studied aggregate fractions, we repeated the whole experiment using soil from the same aggregate size fractions but ground to <0.05 mm size. Five to six replicated samples were used for intact and ground samples of all sizes with and without leaves. Two replications of the intact aggregate fractions of all sizes with leaves were subjected to µ-CT scanning before and after incubation, whereas all the remaining replications of both intact and ground aggregate fractions of <0.05, 0.05-0.1, and 1.0-2.0 mm sizes with leaves were scanned with µ-CT after the incubation. The µ-CT images showed that approximately 80% of the leaves in the intact samples of large aggregate fractions (0.5-1.0 and 1.0-2.0 mm) were decomposed during the incubation, while only 50-60% of the leaves were decomposed in the intact samples of smaller sized fractions. An even lower percentage of the leaves (40-50%) was decomposed in the ground samples, with very similar leaf decomposition observed in all ground samples regardless of the aggregate fraction size. Consistent with the µ-CT results, the proportion of decomposed leaf estimated with the conventional mass loss method was 48% and 60% for the <0.05 mm and 1.0-2.0 mm soil size fractions of intact aggregates, and 40-50% in ground samples, respectively. The results of the incubation experiment demonstrated that, while greater C mineralization was observed in samples of all size fractions amended with leaf, the effect of leaf presence was most pronounced in the smaller aggregate fractions (0.05-0.1 mm and 0.05 mm) of intact aggregates. The results of the present study unequivocally demonstrate that differences in pore size distributions have a major effect on the decomposition of plant residues added to soil.
Moreover, in the presence of plant residues, differences in pore size distributions appear to also influence the rates of decomposition of the intrinsic soil organic material.

  10. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
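    A hedged illustration of the fallacy: with very large groups, a trivially small true difference produces a tiny p-value while the standardized effect size stays negligible. The data are simulated and the numbers are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    n = 500_000                                    # very large groups
    a = rng.normal(loc=0.00, scale=1.0, size=n)
    b = rng.normal(loc=0.01, scale=1.0, size=n)    # trivial true difference: 0.01 SD

    t_stat, p_value = ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    cohens_d = (b.mean() - a.mean()) / pooled_sd

    print(f"p-value   = {p_value:.2e}")    # almost certainly "statistically significant"
    print(f"Cohen's d = {cohens_d:.3f}")   # but practically negligible
    ```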

  11. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
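    The instability the authors describe can be mimicked with a resampling sketch on synthetic data: draw repeated subsamples of increasing size from one large cohort and watch the spread of, say, sensitivity estimates narrow. The cohort size, prevalence, and test characteristics below are hypothetical, not the study's data sets.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "large cohort": binary outcome (30% prevalence) and an imperfect binary test.
    N = 8000
    outcome = rng.random(N) < 0.30
    test_pos = np.where(outcome, rng.random(N) < 0.75, rng.random(N) < 0.20)

    def sensitivity(idx):
        o, tp = outcome[idx], test_pos[idx]
        return tp[o].mean()

    for n in (100, 200, 400, 800):
        estimates = [sensitivity(rng.choice(N, size=n, replace=False)) for _ in range(100)]
        print(f"n={n:4d}  sensitivity estimates range {min(estimates):.2f}-{max(estimates):.2f}")
    ```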

  12. Characteristics of randomised trials on diseases in the digestive system registered in ClinicalTrials.gov: a retrospective analysis.

    PubMed

    Wildt, Signe; Krag, Aleksander; Gluud, Liselotte

    2011-01-01

    Objectives: To evaluate the adequacy of reporting of protocols for randomised trials on diseases of the digestive system registered in http://ClinicalTrials.gov and the consistency between primary outcomes, secondary outcomes and sample size specified in http://ClinicalTrials.gov and published trials. Methods: Randomised phase III trials on adult patients with gastrointestinal diseases registered before January 2009 in http://ClinicalTrials.gov were eligible for inclusion. From http://ClinicalTrials.gov all data elements in the database required by the International Committee of Medical Journal Editors (ICMJE) member journals were extracted. The subsequent publications for registered trials were identified. For published trials, data concerning publication date, primary and secondary endpoints, sample size, and whether the journal adhered to ICMJE principles were extracted. Differences between primary and secondary outcomes, sample size, and sample size calculation data in http://ClinicalTrials.gov and in the published paper were registered. Results: 105 trials were evaluated. 66 trials (63%) were published. 30% of trials were incorrectly registered after their completion date. Several data elements of the required ICMJE data list were not filled in, with missing data in 22% and 11% of cases for the primary outcome measure and sample size, respectively. In 26% of the published papers, data on sample size calculations were missing, and discrepancies between sample size reporting in http://ClinicalTrials.gov and published trials existed. Conclusion: The quality of registration of randomised controlled trials still needs improvement.

  13. Incidence of testicular cancer and occupation among Swedish men gainfully employed in 1970.

    PubMed

    Pollán, M; Gustavsson, P; Cano, M I

    2001-11-01

    To estimate occupation-specific risk of seminomas and nonseminoma subtypes of testicular cancer among Swedish men gainfully employed in 1970 over the period 1971-1989. Age-period standardized incidence ratios were computed in a dataset linking cancer diagnoses from the Swedish national cancer register to occupational and demographic data obtained in the census in 1970. Log-linear Poisson models were fitted, allowing for geographical area and town size. Taking occupational sector as a proxy for socioeconomic status, occupational risks were recalculated using intra-sector analyses, where the reference group comprised other occupations in the same sector only. Risk estimators per occupation were also computed for men reporting the same occupation in 1960 and 1970, a more specifically exposed group. Seminomas and nonseminomas showed a substantial geographical variation. The association between germ-cell testicular tumors and high socioeconomic groups was found mainly for nonseminomas. Positive associations with particular occupations were more evident for seminomas, for which railway stationmasters, metal annealers and temperers, precision toolmakers, watchmakers, construction smiths, and typographers and lithographers exhibited an excess risk. Concrete and construction worker was the only occupation consistently associated with nonseminomas. Among the many occupations studied, our results corroborate the previously reported increased risk among metal workers, specifically associated with seminomatous tumors in this study. Our results confirm the geographical and socioeconomic differences in the incidence of testicular tumors. These factors should be accounted for in occupational studies. The different patterns of occupations associated with seminomas and nonseminomas support the need to study these tumors separately.

  14. A note on sample size calculation for mean comparisons based on noncentral t-statistics.

    PubMed

    Chow, Shein-Chung; Shao, Jun; Wang, Hansheng

    2002-11-01

    One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
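    The kind of noncentral-t-based calculation described here can be sketched as a direct search over n: for each candidate per-group size, compute the power of the two-sided two-sample t-test from the noncentral t distribution and stop at the first n reaching the target. This is a generic textbook-style implementation of the equality case, not the paper's specific formulas for noninferiority, superiority, or equivalence.

    ```python
    from scipy.stats import t, nct

    def n_per_group_two_sample(delta, sd, alpha=0.05, power=0.80, n_max=10_000):
        """Smallest per-group n giving the target power for a two-sided two-sample
        t-test of equality of means, using the exact noncentral t distribution."""
        for n in range(2, n_max):
            df = 2 * n - 2
            crit = t.ppf(1 - alpha / 2, df)
            ncp = delta / (sd * (2.0 / n) ** 0.5)          # noncentrality parameter
            achieved = 1 - nct.cdf(crit, df, ncp) + nct.cdf(-crit, df, ncp)
            if achieved >= power:
                return n
        raise ValueError("no sample size below n_max achieves the target power")

    # Detect a difference of 0.5 SD with 80% power at alpha = 0.05 (hypothetical inputs).
    print(n_per_group_two_sample(delta=0.5, sd=1.0))   # about 64 per group
    ```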

  15. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    ERIC Educational Resources Information Center

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing supports for conducting power analysis by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…

  16. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in a finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  17. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    PubMed

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to estimate the entire population (≥90%) of resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
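    For the randomly dispersed case, the detection sample size follows from requiring 1 − (1 − f)^n ≥ P for resistance frequency f and detection probability P. The sketch below applies this classic binomial argument; it lands near the 1% figure quoted above, though the study's own simulation grid means the numbers need not match exactly, and the patchy-dispersion case is not covered.

    ```python
    from math import ceil, log

    def n_detect_at_least_one(freq, prob=0.95):
        """Smallest n with P(at least one resistant individual in the sample) >= prob,
        assuming resistant individuals occur independently at frequency 'freq'."""
        return ceil(log(1 - prob) / log(1 - freq))

    for f in (0.01, 0.10, 0.20):
        print(f"resistance frequency {f:.0%}: n = {n_detect_at_least_one(f)}")
    ```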

  18. Establishment of replacement batches for heparin low-molecular-mass for calibration CRS, and the International Standard Low Molecular Weight Heparin for Calibration.

    PubMed

    Mulloy, B; Heath, A; Behr-Gross, M-E

    2007-12-01

    An international collaborative study involving fourteen laboratories has taken place, organised by the European Directorate for the Quality of Medicines & HealthCare (EDQM) with National Institute for Biological Standards & Control (NIBSC) (in its capacity as a World Health Organisation (WHO) Laboratory for Biological Standardisation) to provide supporting data for the establishment of replacement batches of Heparin Low-Molecular-Mass (LMM) for Calibration Chemical Reference Substance (CRS), and of the International Reference Reagent (IRR) Low Molecular Weight Heparin for Molecular Weight Calibration. A batch of low-molecular-mass heparin was donated to the organisers and candidate preparations of freeze-dried heparin were produced at NIBSC and EDQM. The establishment study was organised in two phases: a prequalification (phase 1, performed in 3 laboratories in 2005) followed by an international collaborative study (phase 2). In phase 2, started in March 2006, molecular mass parameters were determined for seven different LMM heparin samples using the current CRS batch and two batches of candidate replacement material with a defined number average relative molecular mass (Mn) of 3,700, determined in phase 1. The values calculated using the candidates as standard were systematically different from values calculated using the current batch with its assigned number-average molecular mass (Mna) of 3,700. Using raw data supplied by participants, molecular mass parameters were recalculated using the candidates as standard with values for Mna of 3,800 and 3,900. Values for these parameters agreed more closely with those calculated using the current batch supporting the fact that the candidates, though similar to batch 1 in view of the production processes used, differ slightly in terms of molecular mass distribution. Therefore establishment of the candidates was recommended with an assigned Mna value of 3,800 that is both consistent with phase 1 results and guarantees continuity with the current CRS batch. In phase 2, participants also determined molecular weight parameters for the seven different LMM heparin samples using both the 1st IRR (90/686) and its Broad Standard Table and the candidate World Health Organization (WHO) 2nd International Standard (05/112) (2nd IS) using a Broad Standard Table established in phase 1. Mean molecular weights calculated using 2nd IS were slightly higher than with 1st IRR, and participants in the study indicated that this systematic difference precluded establishment of 2nd IS with the table supplied. A replacement Broad Standard Table has been devised on the basis of the central recalculations of raw data supplied by participants; this table gives improved agreement between values derived using the 1st IRR and the candidate 2nd IS. On the basis of this study a recommendation was made for the establishment of 2nd IS and its proposed Broad Standard Table as a replacement for the 1st International Reference Reagent Low Molecular Weight Heparin for Molecular Weight Calibration. Unlike the 1st IRR however, the candidate material 2nd IS is not suitable for use with the method of Nielsen. The candidate materials were established as heparin low-molecular-mass for calibration batches 2 and 3 by the Ph. Eur. Commission in March 2007 and as 2nd IS low-molecular-weight heparin for molecular weight calibration (05/112) by the Expert Committee on Biological Standardization in November 2007.

  19. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  20. Modeling the transport of engineered nanoparticles in saturated porous media - an experimental setup

    NASA Astrophysics Data System (ADS)

    Braun, A.; Neukum, C.; Azzam, R.

    2011-12-01

    The accelerating production and application of engineered nanoparticles are causing concerns regarding their release and fate in the environment. To assess the risk posed to drinking water resources, it is important to understand the transport and retention mechanisms of engineered nanoparticles in soil and groundwater. In this study, an experimental setup for analyzing the mobility of silver and titanium dioxide nanoparticles in saturated porous media is presented. Batch and column experiments with glass beads and two different soils as matrices are carried out under varied conditions to study the impact of electrolyte concentration and pore water velocities. The analysis of nanoparticles poses several challenges, such as detection and characterization and the preparation of a well-dispersed sample with defined properties, as nanoparticles tend to form agglomerates when suspended in an aqueous medium. The analytical part of the experiments is mainly undertaken with Flow Field-Flow Fractionation (FlFFF). This chromatography-like technique separates a particulate sample according to size. It is coupled to a UV/Vis and a light scattering detector for analyzing concentration and size distribution of the sample. The advantages of this technique are its gentle sample treatment and its ability to analyze complex environmental samples, such as the effluent of column experiments containing soil components. To optimize sample preparation and to gain a first impression of aggregation behavior in soil solutions, sedimentation experiments were used to investigate the effect of ionic strength, sample concentration, and addition of a surfactant on particle or aggregate size and on temporal dispersion stability. In general, the lower the particle concentration, the more stable the samples. For TiO2 nanoparticles, the addition of a surfactant yielded the most stable samples with the smallest aggregate sizes. Furthermore, the suspension stability increases with electrolyte concentration. Depending on the dispersing medium, TiO2 nanoparticles tend to form aggregates of 100-200 nm in diameter, while the primary particle size is given as 21 nm by the manufacturer. Aggregate sizes increase with time. The particle size distribution of the silver nanoparticle samples is quite uniform in each medium. The fresh samples show aggregate sizes between 40 and 45 nm while the primary particle size is 15 nm according to the manufacturer. Aggregate size increases only slightly with time during the sedimentation experiments. These results are used as a reference when analyzing the effluent of column experiments.

  1. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  2. Approximate Sample Size Formulas for Testing Group Mean Differences when Variances Are Unequal in One-Way ANOVA

    ERIC Educational Resources Information Center

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2008-01-01

    This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…

  3. Review of Sample Size for Structural Equation Models in Second Language Testing and Learning Research: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    In'nami, Yo; Koizumi, Rie

    2013-01-01

    The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…

  4. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    ERIC Educational Resources Information Center

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  5. Extraction of citral oil from lemongrass (Cymbopogon Citratus) by steam-water distillation technique

    NASA Astrophysics Data System (ADS)

    Alam, P. N.; Husin, H.; Asnawi, T. M.; Adisalamun

    2018-04-01

    In Indonesia, production of citral oil from lemongrass (Cymbopogon Citratus) is carried out by a traditional technique that gives a low yield. To improve the yield, an appropriate extraction technology is required. In this research, a steam-water distillation technique was applied to extract the essential oil from the lemongrass. The effects of sample particle size and bed volume on yield and quality of the citral oil produced were investigated. Drying and refining times of 2 hours were used as fixed variables. The results show that the minimum citral oil yield of 0.53% was obtained at a sample particle size of 3 cm and a bed volume of 80%, whereas the maximum yield of 1.95% was obtained at a sample particle size of 15 cm and a bed volume of 40%. The lowest specific gravity of 0.80 and the highest specific gravity of 0.905 were obtained at a sample particle size of 8 cm with a bed volume of 80% and a particle size of 12 cm with a bed volume of 70%, respectively. The lowest refractive index of 1.480 and the highest refractive index of 1.495 were obtained at a sample particle size of 8 cm with a bed volume of 70% and a sample particle size of 15 cm with a bed volume of 40%, respectively. The produced citral oil was soluble in 70% alcohol at a ratio of 1:1, and the citral oil concentration obtained was around 79%.

  6. Scanning fiber angle-resolved low coherence interferometry

    PubMed Central

    Zhu, Yizheng; Terry, Neil G.; Wax, Adam

    2010-01-01

    We present a fiber-optic probe for Fourier-domain angle-resolved low coherence interferometry for the determination of depth-resolved scatterer size. The probe employs a scanning single-mode fiber to collect the angular scattering distribution of the sample, which is analyzed using the Mie theory to obtain the average size of the scatterers. Depth sectioning is achieved with low coherence Mach–Zehnder interferometry. In the sample arm of the interferometer, a fixed fiber illuminates the sample through an imaging lens and a collection fiber samples the backscattered angular distribution by scanning across the Fourier plane image of the sample. We characterize the optical performance of the probe and demonstrate the ability to execute depth-resolved sizing with subwavelength accuracy by using a double-layer phantom containing two sizes of polystyrene microspheres. PMID:19838271

  7. Synthesis And Characterization Of Reduced Size Ferrite Reinforced Polymer Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borah, Subasit; Bhattacharyya, Nidhi S.

    2008-04-24

    Small-sized Co1-xNixFe2O4 ferrite particles are synthesized by a chemical route. The precursor materials are annealed at 400, 600 and 800 °C. The crystallographic structure and phases of the samples are characterized by X-ray diffraction (XRD). The annealed ferrite samples crystallized into a cubic spinel structure. Transmission Electron Microscopy (TEM) micrographs show that the average particle size of the samples is <20 nm. Particulate magneto-polymer composite materials are fabricated by reinforcing a low density polyethylene (LDPE) matrix with the ferrite samples. The B-H loop study conducted at 10 kHz on the toroid-shaped composite samples shows a reduction in magnetic losses with decrease in size of the filler sample. Magnetic losses are detrimental for applications of ferrite at high powers. The reduction in magnetic loss shows a possible application of Co-Ni ferrites at high microwave power levels.

  8. Degradation resistance of 3Y-TZP ceramics sintered using spark plasma sintering

    NASA Astrophysics Data System (ADS)

    Chintapalli, R.; Marro, F. G.; Valle, J. A.; Yan, H.; Reece, M. J.; Anglada, M.

    2009-09-01

    Commercially available tetragonal zirconia powder doped with 3 mol% of yttria has been sintered using spark plasma sintering (SPS) and has been investigated for its resistance to hydrothermal degradation. Samples were sintered at 1100, 1150, 1175 and 1600 °C at a constant pressure of 100 MPa with a soaking time of 5 minutes, and the grain sizes obtained were 65, 90, 120 and 800 nm, respectively. Samples sintered conventionally with a grain size of 300 nm were also compared with samples sintered using SPS. Finely polished samples were subjected to artificial degradation at 131 °C for 60 hours in vapour in an autoclave under a pressure of 2 bar. The XRD studies show no phase transformation in samples with low density and small grain size (<200 nm), but significant phase transformation is seen in dense samples with larger grain size (>300 nm). Results are discussed in terms of present theories of hydrothermal degradation.

  9. Qualitative Meta-Analysis on the Hospital Task: Implications for Research

    ERIC Educational Resources Information Center

    Noll, Jennifer; Sharma, Sashi

    2014-01-01

    The "law of large numbers" indicates that as sample size increases, sample statistics become less variable and more closely estimate their corresponding population parameters. Different research studies investigating how people consider sample size when evaluating the reliability of a sample statistic have found a wide range of…

  10. Sampling strategies for estimating brook trout effective population size

    Treesearch

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  11. 40 CFR 90.706 - Engine sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... = emission test result for an individual engine. x = mean of emission test results of the actual sample. FEL... test with the last test result from the previous model year and then calculate the required sample size.... Test results used to calculate the variables in the following Sample Size Equation must be final...

  12. Sample size considerations for clinical research studies in nuclear cardiology.

    PubMed

    Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J

    2015-12-01

    Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
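    For the routine "how many subjects do I need?" question, off-the-shelf power classes can be used. The sketch below shows a two-sample t-test and a one-way ANOVA example with assumed effect sizes, using statsmodels as one convenient option (not necessarily the software referred to in the article).

    ```python
    from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

    # Two-sample t-test: detect a standardized difference (Cohen's d) of 0.5
    # with 80% power at a two-sided alpha of 0.05.
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
    print(f"t-test: about {n_per_group:.0f} subjects per group")

    # One-way ANOVA with 3 groups and a medium effect size (Cohen's f = 0.25).
    n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                            power=0.80, k_groups=3)
    print(f"ANOVA: about {n_total:.0f} subjects in total")
    ```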

  13. Sample size for post-marketing safety studies based on historical controls.

    PubMed

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because detection of rare events is the outcome of interest. Performance of the exact method is compared with its approximate large-sample counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared with the approximate method for the study scenarios examined. The proposed hybrid design retains the advantages and rationale of the two-group design while generally requiring smaller sample sizes. 2010 John Wiley & Sons, Ltd.
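
    For illustration only, the following sketch performs an exact sample size search based on the Poisson distribution for a rare adverse event. It is a simplified one-group version that treats the historical control rate as known, not the hybrid two-group design proposed in the article, and the event rates and error levels are assumed values:

      # Exact (Poisson-based) sample size search for detecting an elevated rate of a
      # rare adverse event against a known historical rate. Simplified one-group
      # illustration; NOT the article's hybrid two-group design. Rates are assumed.
      from scipy.stats import poisson

      def exact_poisson_sample_size(rate_null, rate_alt, alpha=0.05, power=0.80, n_max=100000):
          """Smallest n so that a one-sided exact Poisson test of H0: rate = rate_null
          against H1: rate = rate_alt (> rate_null) reaches the target power."""
          for n in range(1, n_max + 1):
              mu0, mu1 = n * rate_null, n * rate_alt
              c = int(poisson.ppf(1 - alpha, mu0)) + 1   # smallest c with P(X >= c | mu0) <= alpha
              achieved_power = poisson.sf(c - 1, mu1)    # P(X >= c | mu1)
              if achieved_power >= power:
                  return n, c, achieved_power
          return None

      # Example: historical rate of 1 event per 1000 subjects, suspected rate of 3 per 1000.
      print(exact_poisson_sample_size(rate_null=0.001, rate_alt=0.003))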

  14. Synthesis and characterization of nanocrystalline mesoporous zirconia using supercritical drying.

    PubMed

    Tyagi, Beena; Sidhpuria, Kalpesh; Shaik, Basha; Jasra, Raksh Vir

    2006-06-01

    Nano-crystalline zirconia aerogel was synthesized by a sol-gel technique followed by supercritical drying, using n-propanol as solvent at and above its supercritical temperature (235-280 °C) and pressure (48-52 bar). Zirconia xerogel samples were also prepared by a conventional thermal drying method for comparison with the supercritically dried samples. Crystalline phase, crystallite size, surface area, pore volume, and pore size distribution were determined for all the samples in detail to understand the effect of the gel drying method on these properties. Supercritical drying of the zirconia gel was observed to give thermally stable, nano-crystalline, tetragonal zirconia aerogels having high specific surface area and porosity with a narrow and uniform pore size distribution compared with thermally dried zirconia. With supercritical drying, the zirconia samples show the formation of only mesopores, whereas thermally dried samples contain a substantial amount of micropores along with mesopores. Supercritical drying also yields nano-crystalline zirconia with a smaller crystallite size (4-6 nm), compared with the larger crystallite size (13-20 nm) observed for thermally dried zirconia.

  15. Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size

    USGS Publications Warehouse

    Rubin, David M.; Chezar, Henry; Harney, Jodi N.; Topping, David J.; Melis, Theodore S.; Sherwood, Christopher R.

    2007-01-01

    For more than a century, studies of sedimentology and sediment transport have measured bed-sediment grain size by collecting samples and transporting them back to the laboratory for grain-size analysis. This process is slow and expensive. Moreover, most sampling systems are not selective enough to sample only the surficial grains that interact with the flow; samples typically include sediment from at least a few centimeters beneath the bed surface. New hardware and software are available for in situ measurement of grain size. The new technology permits rapid measurement of surficial bed sediment. Here we describe several systems we have deployed by boat, by hand, and by tripod in rivers, oceans, and on beaches.

  16. Underwater Microscope for Measuring Spatial and Temporal Changes in Bed-Sediment Grain Size

    USGS Publications Warehouse

    Rubin, David M.; Chezar, Henry; Harney, Jodi N.; Topping, David J.; Melis, Theodore S.; Sherwood, Christopher R.

    2006-01-01

    For more than a century, studies of sedimentology and sediment transport have measured bed-sediment grain size by collecting samples and transporting them back to the lab for grain-size analysis. This process is slow and expensive. Moreover, most sampling systems are not selective enough to sample only the surficial grains that interact with the flow; samples typically include sediment from at least a few centimeters beneath the bed surface. New hardware and software are available for in-situ measurement of grain size. The new technology permits rapid measurement of surficial bed sediment. Here we describe several systems we have deployed by boat, by hand, and by tripod in rivers, oceans, and on beaches.

  17. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    PubMed

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG)-induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
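
    When analytic formulas are not at hand, power for the Kruskal-Wallis test on ordered categorical data can also be approximated by simulation from pilot estimates of the category probabilities. The sketch below is a generic Monte Carlo approximation, not the authors' proposed procedure, and the per-group probabilities are hypothetical stand-ins for pilot data:

      # Simulation-based power estimate for the Kruskal-Wallis test on ordered
      # categorical data. Generic Monte Carlo approximation (not the article's
      # formulas); the per-group category probabilities are hypothetical pilot
      # estimates.
      import numpy as np
      from scipy.stats import kruskal

      rng = np.random.default_rng(seed=7)
      categories = np.array([1, 2, 3, 4, 5])             # ordered scores
      group_probs = [                                     # assumed pilot estimates
          [0.40, 0.30, 0.15, 0.10, 0.05],
          [0.25, 0.30, 0.20, 0.15, 0.10],
          [0.15, 0.25, 0.25, 0.20, 0.15],
      ]

      def simulated_power(n_per_group, alpha=0.05, n_sims=2000):
          rejections = 0
          for _ in range(n_sims):
              samples = [rng.choice(categories, size=n_per_group, p=p) for p in group_probs]
              rejections += kruskal(*samples).pvalue < alpha
          return rejections / n_sims

      for n in (20, 40, 60):
          print(f"n per group = {n}: estimated power ~ {simulated_power(n):.2f}")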

  18. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    USGS Publications Warehouse

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≈96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.
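
    The core quantity behind the image-based approach is the spatial autocorrelation of pixel intensities as a function of lag, which decays more slowly for coarser sediment. The sketch below computes only that curve for synthetic images; it omits the calibration against images of sieved samples that Rubin's (2004) algorithm uses to convert the curve into a grain size, so it illustrates the idea rather than the published method:

      # Spatial autocorrelation of pixel intensities versus lag: the raw ingredient
      # of image-based grain sizing. The calibration step of Rubin (2004) is omitted
      # and the "images" are synthetic, so this is only an illustrative sketch.
      import numpy as np

      def autocorrelation_curve(image, max_lag):
          """Mean correlation between the image and itself shifted by 1..max_lag
          pixels, averaged over horizontal and vertical shifts."""
          img = (image - image.mean()) / image.std()
          curve = []
          for lag in range(1, max_lag + 1):
              horiz = np.mean(img[:, :-lag] * img[:, lag:])
              vert = np.mean(img[:-lag, :] * img[lag:, :])
              curve.append((horiz + vert) / 2.0)
          return np.array(curve)

      rng = np.random.default_rng(0)
      fine = rng.normal(size=(256, 256))                              # fine-grained texture
      coarse = np.kron(rng.normal(size=(64, 64)), np.ones((4, 4)))    # 4x coarser "grains"
      print("fine  :", np.round(autocorrelation_curve(fine, 5), 2))   # drops to ~0 immediately
      print("coarse:", np.round(autocorrelation_curve(coarse, 5), 2)) # decays over ~4 pixels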

  19. Influence of size-fractioning techniques on concentrations of selected trace metals in bottom materials from two streams in northeastern Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Helsel, Dennis R.

    1986-01-01

    Identical stream-bottom material samples, when fractioned to the same size by different techniques, may contain significantly different trace-metal concentrations. Precision of techniques also may differ, which could affect the ability to discriminate between size-fractioned bottom-material samples having different metal concentrations. Bottom-material samples fractioned to less than 0.020 millimeters by means of three common techniques (air elutriation, sieving, and settling) were analyzed for six trace metals to determine whether the technique used to obtain the desired particle-size fraction affects the ability to discriminate between bottom materials having different trace-metal concentrations. In addition, this study attempts to assess whether median trace-metal concentrations in size-fractioned bottom materials of identical origin differ depending on the size-fractioning technique used. Finally, this study evaluates the efficiency of the three size-fractioning techniques in terms of time, expense, and effort involved. Bottom-material samples were collected at two sites in northeastern Ohio: one is located in an undeveloped forested basin, and the other is located in a basin having a mixture of industrial and surface-mining land uses. The sites were selected for their close physical proximity, similar contributing drainage areas, and the likelihood that trace-metal concentrations in the bottom materials would be significantly different. Statistically significant differences in trace-metal concentrations were detected between bottom-material samples collected at the two sites when the samples had been size-fractioned by means of air elutriation or sieving. Samples that had been size-fractioned by settling in native water did not differ measurably in any of the six trace metals analyzed. Results of multiple comparison tests suggest that differences related to size-fractioning technique were evident in median copper, lead, and iron concentrations. Technique-related differences in copper concentrations most likely resulted from contamination of air-elutriated samples by a feed tip on the elutriator apparatus. No technique-related differences were observed in chromium, manganese, or zinc concentrations. Although air elutriation was the most expensive size-fractioning technique investigated, samples fractioned by this technique appeared to provide a superior level of discrimination between metal concentrations present in the bottom materials of the two sites. Sieving was an adequate lower-cost but more labor-intensive alternative.

  20. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms may be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when a standard control treatment is used for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, or allowing the sample size to increase only in the experimental arm). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
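
    A numerical sketch of this kind of worst-case calculation is given below for the simplest setting: a one-sided z test with known variance in which only the second-stage sample size (not the allocation rate) is adapted, and the final analysis naively uses the conventional fixed-sample critical value. The stage sizes, the range of allowed second-stage sizes and the significance level are assumed values, so the output does not reproduce the article's results:

      # Worst-case type 1 error when the second-stage sample size is chosen at an
      # interim look to maximize the conditional error, but the final analysis still
      # uses the conventional fixed-sample critical value. One-sided z test, known
      # variance, written in terms of stagewise z statistics. Stage sizes, the grid
      # of allowed second-stage sizes and alpha are assumed; allocation-rate
      # adaptation (also considered in the article) is not modelled here.
      import numpy as np
      from scipy.stats import norm

      alpha = 0.025
      z_crit = norm.ppf(1 - alpha)
      n1 = 50                                    # first-stage sample size
      m_grid = np.arange(1, 501)                 # allowed second-stage sample sizes

      def conditional_error(z1, m):
          """P(naive final z > z_crit | interim z = z1) when the second stage has size m."""
          return norm.sf((z_crit * np.sqrt(n1 + m) - np.sqrt(n1) * z1) / np.sqrt(m))

      # For each interim z1, pick the second-stage size that maximizes the conditional
      # error, then average over the null distribution of z1.
      z1_grid = np.linspace(-6.0, 6.0, 2001)
      worst_ce = np.array([conditional_error(z1, m_grid).max() for z1 in z1_grid])
      dz = z1_grid[1] - z1_grid[0]
      max_type1 = np.sum(worst_ce * norm.pdf(z1_grid)) * dz
      print(f"nominal alpha = {alpha:.3f}, worst-case type 1 error ~ {max_type1:.3f}")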
